How to Write ChatGPT Prompts That Work: The PRSO Framework

By PromptShelf Editorial

A working professional has roughly the same problem with ChatGPT every time. The first answer is generic. The second is a tighter version of the first. By the third try you have either rewritten the prompt completely or given up and done the work yourself. The issue is almost never the model. The issue is the prompt.

This guide is the framework we use internally to write ChatGPT prompts that produce something usable on the first response. Four parts, in this order: Persona, Request, Specifics, Output. PRSO. We will walk through each part, rebuild a bad prompt into a good one in front of you, give you five worked examples covering email, meetings, 1:1s, code review, and onboarding, run one of those prompts on free ChatGPT and reproduce the actual response, then take a position on which popular prompt-engineering advice the framework deliberately rejects. By the end, you will know how to write ChatGPT prompts that work, and you will not need a list of 25 templates to do it.

Why most ChatGPT prompts fail (and how to write better ones)

The viral ChatGPT prompt list is built for screenshots, not for work. Most of its prompts collapse to one of three failure modes.

The first is no role. The prompt asks ChatGPT to "write a follow-up email." A follow-up email written by a junior salesperson reads nothing like one written by a 20-year account executive. Without role context, the model picks the most average possible voice, and the average is what makes it sound like AI.

The second is no constraints. "Summarize this meeting" produces a summary that could fit any meeting. The reason it sounds generic is that the prompt never said who the summary is for, what they care about, or how long it should be. The model has no way to choose.

The third is no output spec. Even a well-crafted prompt with a clear role and good constraints will fail if it does not say what shape the answer should take. Markdown table? Three bullet points? A numbered list with one sentence per item? A 90-word paragraph? Without that, the model defaults to prose, and prose is rarely what you actually want to paste into the next thing on your list.

The fix is structural. Not "more detail." Structural. That is what the PRSO framework gives you.

The PRSO framework

Every prompt that consistently works for professional tasks has four parts. The order matters, because each part depends on the one above it.

| Letter | Part | What it does |
| --- | --- | --- |
| P | Persona | Sets who the AI is acting as and who you are. Picks the voice. |
| R | Request | States the verb and the specific deliverable. One thing, not three. |
| S | Specifics | Names the constraints: audience, length, tone, references, edge cases. |
| O | Output | Names the exact format the answer should arrive in. |

You can write a PRSO prompt in two sentences for a small task or eight sentences for a complex one. The proportions stay roughly the same. Half the prompt is usually Specifics. The other three parts are short.

Here is the same idea in a single sentence as a sanity check. A working ChatGPT prompt names who is doing the work, what work, under what constraints, and in what shape. If your prompt is missing any of those four, the response will probably miss something too.
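
If you assemble prompts in code rather than typing them into a chat window, the four parts map onto a small data structure. Here is a minimal sketch in Python; the class and field names are ours, not part of any SDK:

```python
from dataclasses import dataclass, field

@dataclass
class PRSOPrompt:
    """A prompt assembled from the four PRSO parts, in order."""
    persona: str                                   # who the AI is, and who you are
    request: str                                   # one verb, one deliverable
    specifics: list = field(default_factory=list)  # audience, length, tone, constraints
    output: str = ""                               # the exact shape of the answer

    def render(self) -> str:
        # Order matters: each part depends on the one above it.
        parts = [self.persona, self.request, *self.specifics, self.output]
        return " ".join(p for p in parts if p)

prompt = PRSOPrompt(
    persona=("You are a senior account executive at a B2B SaaS company. "
             "I am the AE on this deal."),
    request="Draft a follow-up email to a prospect who went quiet after a demo.",
    specifics=["Keep the email to 110 words.", "Tone is warm but direct, not chasing."],
    output="Format: subject line on its own line, then the email body.",
)
print(prompt.render())
```

Notice the proportions: in any realistic fill of this structure, the specifics list carries about half the rendered prompt, which matches the guidance above.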

The next four sections cover each part in order. After that, we rebuild a bad prompt into a good one, then walk through five worked examples.

P. Persona

Persona answers two questions at once: who is the AI, and who are you?

Most prompt advice covers the first half. "You are a senior copywriter." "You are an experienced product manager." "You are a recruiter at a Series B startup." That part is straightforward, and it works. The reason it works is that the model has read enough copywriting, product management, and recruiting writing to imitate the voice convincingly when you point it in that direction.

The half people skip is who you are. Not your name: your role and your situation. "I am a first-time engineering manager promoted internally three months ago." "I am a 4th grade teacher in a Title I school with a class of 28." "I am a freelance designer pitching a long-time client on a rebrand." This context is what stops the response from defaulting to the median user of the platform, a person about whom ChatGPT knows nothing specific.

A workable persona section is two sentences. One for the AI, one for you. Anything longer is usually padding. Anything shorter is usually leaving useful context on the table.

A failure mode worth flagging: do not give the AI a creative or ironic persona for professional work. "You are a sarcastic Gen Z marketing director" gets you sarcasm, not marketing direction. Save the personality prompts for content; for work, default to "you are a senior X with Y years of experience."
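
If you reach the model through an API instead of the chat window, the AI's half of the persona usually lives in the system message and your half in the user turn. A sketch using the OpenAI Python SDK; the model name is illustrative, substitute whatever you have access to:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not a recommendation
    messages=[
        # Who the AI is. The system message stays in force across follow-ups.
        {"role": "system",
         "content": "You are a senior manager with 12 years of leadership experience."},
        # Who you are, plus the request.
        {"role": "user",
         "content": ("I am a first-time engineering manager promoted internally "
                     "three months ago. Draft an agenda for my first team meeting.")},
    ],
)
print(response.choices[0].message.content)
```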

R. Request

Request is the verb and the deliverable. One verb. One deliverable.

The most common mistake here is asking for two things at once. "Write a job description and three interview questions." "Summarize the meeting and tell me what to do next." "Critique my essay and rewrite it." Each of these is two prompts pretending to be one, and the model will produce a half-version of each because it cannot fully attend to both.

Split them. Send the second prompt as a follow-up in the same chat. The model has the context already.

The verb itself matters. "Write" is broad. "Draft" implies a first pass. "Critique" implies pointing out problems without fixing them. "Rewrite" implies doing the fix. "Outline" implies structure without prose. "List" implies items, not paragraphs. The verb you pick shapes the entire response. Pick deliberately.

The deliverable should be concrete. Not "an email." A 120-word reply email. Not "feedback." Three concrete suggestions. Not "a strategy." A single recommendation with one paragraph of reasoning. The narrower the deliverable, the higher the hit rate.
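
The "split them" advice above translates directly to a multi-turn call if you are scripting this: send the first request, append the model's answer to the history, then send the second request as its own turn. A sketch with the same SDK and an illustrative model name, using the job-description example from earlier in this section:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
history = [{
    "role": "user",
    "content": ("You are a senior recruiter at a Series B startup. I am the hiring "
                "manager. Draft a job description for a senior data analyst."),
}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The second deliverable gets its own turn; the model already has the context.
history.append({"role": "user",
                "content": "Now write three interview questions based on that job description."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(second.choices[0].message.content)
```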

S. Specifics

Specifics is where most of the prompt goes, because most of the work is here. The five questions to answer:

Who is the audience? Not you. The person who will read or use the output. A note to your manager reads differently than a note to a customer. A snippet for an internal wiki reads differently than one for a public blog post. Name the audience explicitly. "For a non-technical CEO." "For a junior teammate who started last week." "For a parent of a 9-year-old."

What length? A word count, a sentence count, or a section count. Without this the model picks the length, and its default is usually 1.5 times what you wanted. A specific number is the easiest constraint in the entire framework, and it solves more length-drift problems than anything else.

What tone? Plain professional, friendly, formal, conversational, blunt. If two adjectives capture it better than one, use two: "warm but direct." If you have a writing sample that captures the tone, paste it as a reference and tell the model to match it.

What constraints or references should it use? "Cite only data you can verify." "Do not invent statistics." "Use only the seven bullet points I gave above." "Avoid the word 'leverage.'" These are the rails that keep the response inside the runway.

What is out of scope? Often more useful than what is in scope. "Do not give me action items, just summarize." "Do not suggest tools I should buy." "Do not include a closing call to action."

You will not need all five every time. You usually need three or four.
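
If you answer the same five questions for every prompt, you can encode them once. A throwaway sketch (the function and parameter names are ours); passing only the answers you have emits only those sentences:

```python
def specifics_block(audience=None, length=None, tone=None,
                    constraints=(), out_of_scope=()):
    """Build the Specifics part from whichever of the five answers you have."""
    lines = []
    if audience:
        lines.append(f"The audience is {audience}.")
    if length:
        lines.append(f"Length: {length}.")
    if tone:
        lines.append(f"Tone: {tone}.")
    lines += [f"Constraint: {c}" for c in constraints]
    lines += [f"Out of scope: {o}" for o in out_of_scope]
    return " ".join(lines)

print(specifics_block(
    audience="a non-technical CEO",
    length="180 words maximum",
    tone="plain professional",
    constraints=["Do not invent statistics."],
    out_of_scope=["action items"],
))
```

Leaving a parameter out drops the sentence, which matches the rule above: three or four specifics, not all five.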

O. Output

Output is the shape of the answer. The most common formats for working professionals:

A numbered list. A bulleted list. A markdown table with named columns. A short paragraph followed by a bulleted list. A response in the style of an email with subject line and signature. A markdown document with H2 and H3 headings. A code block. A JSON object with named keys.

Be specific. "Three bullets, each one sentence, each starting with a verb" produces a different artifact than "a numbered list with explanations." If you want a markdown table, name the columns. If you want sections, name the section headers.

The Output spec is also where you lock in revision-friendliness. If you ask for a numbered list, you can ask for "rewrite item 3 to be sharper" later and the model knows what you mean. If you asked for prose, you cannot point to anything specific. Structured outputs compound across follow-up prompts. Prose outputs do not.

Two short sentences for Output is usually enough. One naming the format, one naming the structure inside the format.
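
When the format is "a JSON object with named keys," the Output spec becomes machine-checkable. A sketch with a stand-in reply string (a real run would put the model's response text here); the key names are ours:

```python
import json

output_spec = (
    "Return a JSON object with exactly these keys: "
    "'subject' (string), 'body' (string), 'strongest_sentence' (string). "
    "Return only the JSON, no commentary."
)

# Stand-in for what the model returns; in a real run this is the response text.
reply = '{"subject": "Re: integration questions", "body": "...", "strongest_sentence": "..."}'

data = json.loads(reply)  # fails loudly if the model wrapped the JSON in prose
missing = {"subject", "body", "strongest_sentence"} - data.keys()
if missing:
    print(f"Output spec violated, missing keys: {missing}")  # re-prompt, repeating the spec
else:
    print(data["subject"])
```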

Rebuild: from a bad prompt to a PRSO prompt

This is the same task, written four ways. Each version adds one PRSO part.

Version 1. No framework.

Prompt: "Write a follow-up email."

You will get a generic four-paragraph email that fits no situation. This is the prompt most professionals start with, and it is why most professionals decide ChatGPT is not useful for email.

Version 2. Persona only.

Prompt: "You are a senior account executive at a B2B SaaS company. I am the AE on this deal. Write a follow-up email to a prospect who went quiet after the demo."

Better. The voice is more confident. But the email is still a dressed-up version of every follow-up email ever written. No specifics about the prospect, the deal, or what the next step is. The model is averaging across thousands of similar emails because that is all it has to go on.

Version 3. Persona plus Request and Specifics.

Prompt: "You are a senior account executive at a B2B SaaS company. I am the AE on this deal. Draft a follow-up email to a prospect who went quiet after a product demo two weeks ago. The deal is mid-five-figures ARR. The prospect is the VP of Operations at a 200-person logistics company. The original objection was integration complexity. Keep the email to 110 words. Tone is warm but direct, not chasing. Do not include scheduling links or a closing call to action that asks them to 'jump on a call.' Reference one specific thing about their integration concern."

Now the email is doing real work. The voice is right, the audience is named, the length is bounded, the constraint about "jump on a call" rules out the most common AI sales-email tic.

Version 4. Persona, Request, Specifics, Output.

Prompt: "You are a senior account executive at a B2B SaaS company. I am the AE on this deal. Draft a follow-up email to a prospect who went quiet after a product demo two weeks ago. The deal is mid-five-figures ARR. The prospect is the VP of Operations at a 200-person logistics company. The original objection was integration complexity. Keep the email to 110 words. Tone is warm but direct, not chasing. Do not include scheduling links or a closing call to action that asks them to 'jump on a call.' Reference one specific thing about their integration concern. Format the response as: subject line on its own line, then the email body, then a one-sentence editor's note pointing out the strongest sentence in the email."

Adding the Output spec turns the response into something you can immediately use. The subject line is broken out. The body is ready to paste. The editor's note is a small bonus that helps you decide if the email is actually good before sending it.

Four versions, same task. Version 1 is unusable. Version 4 is a finished artifact. Nothing in between is magic, just structure.
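
One practical payoff of Version 4's output spec: because the subject line, body, and editor's note arrive in a known order, the 110-word constraint can be checked mechanically before you read the email. A rough sketch; it assumes the blank-line-separated layout the prompt asked for, which the model may not always honor:

```python
def check_version4(response_text: str, word_limit: int = 110) -> None:
    """Rough post-check for the Version 4 output spec:
    subject line, then email body, then a one-sentence editor's note."""
    blocks = [b.strip() for b in response_text.split("\n\n") if b.strip()]
    if len(blocks) < 3:
        print("Output spec not followed; re-run or tighten the format instruction.")
        return
    subject, note = blocks[0], blocks[-1]
    body = "\n\n".join(blocks[1:-1])
    words = len(body.split())
    print(f"Subject: {subject}")
    print(f"Body word count: {words} (limit {word_limit})")
    if words > word_limit:
        print("Over the limit; ask for a tighter pass on the body only.")

check_version4("Re: integration questions\n\n" + "word " * 98
               + "\n\nThe strongest sentence is the second one.")
```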

Five worked PRSO examples

These cover the situations professionals actually use ChatGPT for. Each one is built using the framework above. Copy them, replace the bracketed pieces, and run.

1. The difficult email reply

Prompt: "You are a senior manager with 12 years of leadership experience and a reputation for direct, fair feedback. I am the recipient of an email from [describe the sender and their role, e.g., 'a peer in another department'] who has [describe what they did, e.g., 'publicly questioned my team's priorities in a Slack channel without talking to me first']. Draft my reply. Length: 90-120 words. Tone: calm, direct, not defensive. Open by acknowledging one thing they said that is fair. Do not concede anything you actually disagree with. End with one specific question that moves the conversation to a 1:1. Format: a subject line, then the email body, then a single sentence below labeled 'Why this works' that names the most important move in the email."

Use this for any reply where you want to stay composed and not give up ground you should not give up.

2. The post-meeting summary for stakeholders

Prompt: "You are a senior chief of staff. I just left a [describe the meeting, e.g., '45-minute leadership team meeting on Q3 hiring priorities']. The audience for the summary is [describe stakeholders, e.g., 'the CEO and CFO who were not in the room']. Below is my raw notes: [paste notes]. Write a stakeholder summary. Length: 180 words maximum. Format: one short opening paragraph (no more than 30 words) stating what got decided, then a markdown table with three columns (Decision, Owner, Deadline), then a 'Risks' section as 2-3 bullets, then nothing else. No 'Next steps' section, no flowery language, no thanking anyone."

This converts messy notes into something a busy executive will actually read.

3. The 1:1 agenda for a manager you do not know well

Prompt: "You are an experienced executive coach. I am [describe role and seniority, e.g., 'a new senior IC promoted from a different team']. I have my first 1:1 next week with [describe the manager, e.g., 'my new manager, who has a reputation for being terse but technically excellent']. The relationship has 0 history. Draft an agenda. Length: 5 items maximum. Each item: a short label and a one-sentence reason it is on the agenda. Open with a question that surfaces what they want from me, not what I want from them. Do not include a 'wins from last week' item; we have no last week. Format as a numbered list."

This is the prompt that makes a first 1:1 productive instead of awkward.

4. The code review explanation for a non-engineer

Prompt: "You are a senior software engineer who is unusually good at explaining technical decisions to non-engineers. The audience is [describe, e.g., 'a non-technical product manager who needs to understand the tradeoff to decide on the deadline']. Below is the code change: [paste diff or description]. Explain what the change does and why it matters. Length: 4 short paragraphs. No code in the explanation, only plain language. End with a single sentence that states the one tradeoff the PM should know about. Tone: confident, plain, no jargon. If you must use a technical term, define it the first time you use it."

This is the prompt that ends the "can someone explain this to me in normal English" thread.

5. The first 90-day plan for a senior new hire

This is the prompt we will test on ChatGPT below.

Prompt: "You are a senior operations manager who has run onboarding for new hires at three different companies. I am the new hire: [describe role and seniority, e.g., 'a senior product manager joining a 60-person B2B SaaS company that has had no PM for 18 months, reporting to the CEO']. Draft my first 90-day plan in three phases: days 1-30 (learn), days 31-60 (diagnose), days 61-90 (commit). For each phase give 3 specific outcomes I should deliver, 3 concrete activities that produce those outcomes, and one mistake first-time PMs make in that phase. Format as a markdown table with columns: Phase, Outcomes, Activities, Common mistake. Plain language, no jargon."

This is the prompt every senior hire wishes someone had written for them. PRSO compliant. Persona names both sides, Request is a single deliverable (a 90-day plan), Specifics name the phases, the audience, the structure, the tone, and the formatting, and Output names the table columns explicitly.

What this prompt actually returns (tested 2026-05-02)

We ran Prompt 5 on free ChatGPT with the example brief baked in ('a senior product manager joining a 60-person B2B SaaS company that has had no PM for 18 months, reporting to the CEO'). Reproduced verbatim:

| Phase | Outcomes | Activities | Common mistake |
| --- | --- | --- | --- |
| Days 1–30 (Learn) | 1) Clear map of the product: what exists, what's used, what's broken. 2) Shared understanding of company goals and how the product supports them. 3) Strong relationships with key people (CEO, engineering, sales, support). | 1) Do 12–15 short interviews: CEO, each team lead, 3–5 customers, and 3 recent lost deals; write a one-page summary of patterns. 2) Use the product like a customer; note friction points and gaps. 3) Sit in on sales calls, support tickets, and demos to hear real language and pain points. | Trying to "fix" things too early instead of building context; making calls without enough input. |
| Days 31–60 (Diagnose) | 1) List of the top 5 problems worth solving, backed by evidence. 2) Simple view of current work vs. impact (what's worth continuing, stopping, or starting). 3) Draft product direction for the next 6–12 months. | 1) Review usage data, churn reasons, win/loss notes; connect them to the problems you heard. 2) Run a workshop with engineering + sales to validate and rank problems. 3) Write a short "product direction" doc and review it with the CEO and team leads. | Overcomplicating the analysis or trying to be perfect; delaying decisions waiting for more data. |
| Days 61–90 (Commit) | 1) Clear 3–6 month roadmap with priorities and trade-offs. 2) Agreement from CEO and team on what will (and won't) be worked on. 3) First visible product improvements shipped or in progress. | 1) Turn the direction into a simple roadmap with 3–5 priorities and expected outcomes. 2) Align with engineering on scope and timelines; cut work that doesn't support priorities. 3) Start delivery on 1–2 high-impact items and communicate progress weekly. | Trying to please everyone by saying yes to too many things; avoiding clear trade-offs. |

Editor's notes. Four things worth flagging.

First, the table is structurally clean: the four-column shape held, the three outcomes plus three activities plus one mistake structure was respected in every row, and the phase labels are intact. That is the PRSO Output spec doing its job; without naming the columns explicitly, you would have gotten a wall of bullet points instead.

Second, the "Common mistake" column does the most work in the response. "Trying to 'fix' things too early" and "trying to please everyone by saying yes to too many things" are the two failure modes a senior new hire actually runs into, and naming them up front is more useful than the outcome lists.

Third, the response is plain and a little under-specified in places: "12–15 short interviews" is a real number, but "review usage data" is the kind of activity that needs a specific tool or query before it becomes actionable.

Fourth, the model added one extra constraint that was not in the prompt ("3 recent lost deals" inside the activities cell) which is a genuine improvement; that is the model behaving like a senior operations manager would, not like a junior assistant.

As a starting structure for a 90-day plan, this is good. You would still need to substitute the actual customer names, the actual data sources, and the actual lost deals from your situation before this becomes a plan you can run.

What PRSO deliberately rejects (and why)

Most popular prompt-engineering advice contradicts at least one part of PRSO. We are taking a position on each of these, in the open, so you can decide for yourself.

We reject "act as" framing as a complete persona. "Act as a senior product manager" is the most common way prompt guides recommend setting up a role. We do not write our personas this way, because "act as" is a single instruction that the model can drop the moment the conversation gets long. "You are a senior product manager. I am a junior PM you are reviewing a launch plan for" stays in place across follow-ups because both sides of the relationship are stated. Persona is a relationship, not a costume.

We reject "be detailed" as a constraint. "Be detailed and thorough" is on every prompt-engineering tip list. It produces longer responses, not better ones. Length is not the same as quality. PRSO replaces "be detailed" with concrete length and format constraints, because the difference between a useful 200-word answer and a bloated 600-word answer is rarely about depth. It is about whether the model knew when to stop.

We reject "step by step" as a default. Chain-of-thought prompting is genuinely useful for math and logic problems. It is overused for writing tasks, where the model's "thinking out loud" is just padding. Output spec replaces "think step by step" with "format your final answer as X." If you want the reasoning, ask for it as a separate output line at the end. Otherwise, you are paying for filler tokens.

We reject the giant prompt template. Some popular guides recommend 600-word prompt templates with eight named sections. They produce technically correct responses that read like form letters. PRSO is deliberately compact because the model attends to short, structured prompts more reliably than long, fragmented ones. If your prompt is over 250 words, the bottleneck is probably not under-specification.

We reject "you are an expert" without specifics. Calling the model an expert is the laziest possible persona. "You are an expert in marketing" tells the model nothing it does not already assume. Replace it with a specific kind of expert: "a B2B demand generation lead at a Series B SaaS startup." Specificity is what makes the response sound like a real practitioner.

The framework above is shaped by these positions. Other writers will tell you differently. We have run hundreds of prompts on professional tasks; the patterns that consistently produce something usable on the first response are the ones we kept.

Common pitfalls and how to fix them

A few patterns show up across professional users new to PRSO.

Pitfall: stuffing the persona with adjectives. "You are an empathetic, dynamic, results-driven, world-class senior operations leader" produces worse output than "You are a senior operations manager with 15 years of experience." Adjective stuffing pulls the model toward marketing copy, not professional voice. Stick to two or three concrete attributes.

Pitfall: hiding the request inside the specifics. "I want a 200-word summary that is calm and clear and includes risks and is for the CEO and..." The request gets buried. Lead with the verb and deliverable, then break out specifics on a new line or as a bulleted list inside the prompt.

Pitfall: vague output specs. "Format nicely" means nothing. "Format as a markdown table with columns Phase, Outcome, Owner" means something. The more concrete the format, the more usable the response.

Pitfall: constraint inflation. Eight constraints is too many. The model starts dropping some, and you cannot tell which ones. Pick the three that matter most for this specific prompt. If a constraint is "always-on" for your work, save it as a custom instruction in your ChatGPT settings instead of repeating it every time.

Pitfall: writing the prompt only once. A PRSO prompt is a draft. After you see the first response, the prompt usually needs one tweak: a tighter constraint, a different output format, or a sharper persona. Treat the first response as feedback on the prompt, not a final answer.

FAQ

How long should a ChatGPT prompt be?

A working PRSO prompt is usually between 80 and 250 words. Short tasks (a tweet, a one-line subject) can be 60 words. Complex tasks (a multi-section document, a structured analysis) can stretch to 400. If your prompt is over 500 words, you are probably trying to do two prompts at once. Split.

Do longer prompts always produce better answers?

No. They produce more constrained answers. If the constraints match the task, the answer gets sharper. If you pile on constraints that contradict each other ("be thorough but use only 50 words"), the model picks one and ignores the other. Make every constraint earn its place.

Should I write the prompt in English even if my audience is not English-speaking?

Write the prompt in the language you think most clearly in, then specify the output language. "Reply in French." This produces better output than writing the prompt itself in a language you are less fluent in, because your specificity matters more than the model's language sensitivity.

Does PRSO work for prompts that are not for ChatGPT?

Yes. The framework works on Claude, Gemini, and most other large language models without changes. The phrasing tweaks slightly (Claude responds well to XML tags inside the prompt, Gemini is slightly more literal about output format), but the four-part structure is the same. See our ChatGPT vs Claude comparison for the side-by-side test we ran.
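
On the Claude point, the same four PRSO parts can be wrapped in XML-style tags without changing anything else. A sketch of the tagged prompt as a plain Python string; the tag names are ours, and any descriptive names work:

```python
prompt = """
<persona>
You are a senior chief of staff. I am an IC who just left a 45-minute leadership meeting.
</persona>
<request>
Write a stakeholder summary of the meeting notes below.
</request>
<specifics>
Audience: the CEO and CFO, who were not in the room.
Length: 180 words maximum. No 'Next steps' section.
</specifics>
<output>
One short opening paragraph, then a table with columns Decision, Owner, Deadline.
</output>
<notes>
[paste notes]
</notes>
"""
# Send this through whichever SDK or chat window you use.
print(prompt)
```

The four-part structure is unchanged; only the packaging differs.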

How do I know if my prompt is good before I run it?

Read it back to yourself and ask the four PRSO questions. Who is the AI and who am I? What is the deliverable? What constraints? What format? If you can answer all four in one sentence each, the prompt will probably work. If you stall on any of them, that is the part to rewrite.

Bookmark this and try one prompt today

Pick one of the five worked examples that matches a task you actually have on your list this week. Copy it. Replace the bracketed pieces with your real situation. Run it. Note where the response is right and where it is off. Tweak the constraints. Run it again. That second run is usually the one you keep.

The PRSO framework is not a memorization exercise. It is the four questions you ask yourself before you press enter. Once you have asked them five or ten times, they stop being a checklist and become how you write prompts. That is the whole point.

If a worked example above is close to your job but not quite, the listicles below contain 25 prompts each, all written in PRSO style and organized by category.