25 ChatGPT Prompts for Journalists and Reporters (Angles, Interviews, Headlines)
By PromptShelf Editorial
Read this first. ChatGPT is a writing tool. It is not a source, not a wire service, and not a fact-checker. It will fabricate quotes, attribute things to people who never said them, invent named experts, hallucinate court cases, and confidently get dates wrong. Every prompt in this post is for shaping your own work: angles, interview prep, headline drafts, FOIA letters, beat tip-sheets. None of them produce reportable facts. Verify every name, number, quote, and citation against primary sources before publication. Treat the model's output the way you'd treat a junior intern's first draft: useful starting material, never the final word.
A working reporter's day looks like five tabs of background reading, three deadline pieces, an interview at 2pm, and a Slack channel of editors asking when the kicker is going up. ChatGPT does not solve any of that. What it can do is shave 20 minutes off the parts of the job that aren't actually reporting: framing a pitch, drafting ten headline options, sketching follow-up questions before a hostile interview, turning a 90-minute transcript into a clean Q&A skeleton.
This post gives you 25 ChatGPT prompts for journalists, organised by where they fit in a reporting week. They are designed for newsroom staff, freelancers, beat reporters, and student journalists. None of them ask the model to do reporting. They ask it to do drafting work that real journalists already do, just faster.
We tested one of them, Prompt 16, on free ChatGPT and reproduced the actual response further down so you can see what comes back before you wire any of these into your workflow.
What NOT to ask ChatGPT (read before any prompt)
Five things ChatGPT should never do for a journalist:
Generate quotes, attribute statements, or "find" sources. The model will produce plausible-looking quotes from real-sounding people who do not exist or never said the thing. Two examples in 2024 alone: a UK newspaper had to retract a piece that quoted a fabricated expert; a wire-service stringer was suspended after filing a story with a hallucinated councilmember statement. If a quote needs to exist, you call someone.
Confirm facts, dates, statistics, or rulings. ChatGPT's training cutoff is months or years old and it does not browse the live web reliably. Anything date-specific, vote-count-specific, or case-citation-specific should be confirmed against primary sources (court records, official filings, the actual person, the published study). Treat any number or date the model offers as a hypothesis to verify.
Process privileged source material on the free tier. Inputs to free ChatGPT may be retained and used for training. Do not paste leaked documents, sealed filings, off-the-record interview transcripts, raw audio of confidential sources, or anything you wouldn't want a third party reading. Use placeholders ([Source], [Document], [Quote]) and stitch real material in locally.
Write the lead or the kicker on a story you haven't reported. The model will happily generate a polished lead about a story it knows nothing about, populated with invented stakes and invented detail. That's how fabrications make it into print. The model can rewrite a lead you wrote based on reporting you did. It cannot generate one from a one-line description and a vibe.
Produce balanced framing on adversarial stories without your judgment. ChatGPT defaults to a centrist "both sides have a point" register that is often inappropriate for accountability journalism. If you're reporting on a documented harm, the model's instinct will be to soften it. Override that instinct in the prompt or rewrite the output.
The 25 prompts below assume you already know all of this and will treat the output accordingly.
How to use these prompts
Each prompt has a role line, a task, constraints, and an output spec. Substitute the bracketed parts with your actual story, source, beat, or transcript. If a prompt asks you to paste in interview material, redact names of confidential sources first. After running, treat the response as a draft and edit on top of it. Never copy and publish.
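One practical way to follow the redact-first rule: draft with placeholder tokens in the prompt window, then substitute the real material back in locally after generation, so confidential names never leave your machine. A minimal sketch in Python; the token names and the "Jane Placeholder" value are entirely hypothetical, not part of any prompt above:

```python
# Keep the real values in a local mapping that never gets pasted
# into the chat window. Only the tokenized draft goes to the model.
REDACTIONS = {
    "[SOURCE_1]": "Jane Placeholder",          # hypothetical source name
    "[DOCUMENT_1]": "the 2024 procurement filing",  # hypothetical document
}

def reinsert(model_output: str, mapping: dict[str, str]) -> str:
    """Swap placeholder tokens back for the real material, locally."""
    for token, real in mapping.items():
        model_output = model_output.replace(token, real)
    return model_output

draft = "According to [SOURCE_1], [DOCUMENT_1] shows the vote was rushed."
print(reinsert(draft, REDACTIONS))
```

The direction matters: redact before you paste, reinsert after you generate, and the mapping file stays on your machine.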
Story development and angles
1. Five angles on a beat story
Prompt: "You are a senior assignment editor at a metropolitan daily. I'm covering [topic]. Here are the known facts: [bullet list of 4-8 facts you have, including who/what/when/where/why and any tension points]. List five distinct story angles I could pursue, ordered from most-likely-to-already-be-covered to least. For each: a one-line angle statement, the key stakeholder I'd need to reach, and the single most important question that angle answers. Skip angles that require speculation or unnamed sources."
Use this when an editor hands you a topic and you need to find the actual story before the next news meeting. The "ordered by saturation" framing forces the model to think about competitive coverage, not just generate variations.
2. Pitch refinement
Prompt: "Act as a freelance editor at [publication]. I have a half-formed pitch: [your one-paragraph idea, including who's affected, what's new, why now, and any reporting you've already done]. Rewrite this as a 150-word pitch in three paragraphs: the hook (what's specifically new), the stakes (who is affected and how), and the reporting plan (what I'll do and roughly when I can file). Be skeptical of weak claims and tell me what's missing before I send it."
The "tell me what's missing" line is the work. A pitch that survives a tough editor is a pitch worth filing.
3. Stakeholder map
Prompt: "You are a beat reporter. For the story [one-sentence story description], list the eight to twelve stakeholders who matter. For each: their role, their likely public position, the question only they can answer, and a one-line reason an editor might cut them from the final piece. Group them as: must-quote, should-quote-if-time, useful-for-context-only."
Run this before booking interviews. It catches the source you forgot existed (the labor union rep, the procurement officer, the displaced resident).
4. Counter-narrative check
Prompt: "Act as the strongest, most honest critic of my draft thesis. My thesis is: [your one-sentence story claim]. List the four most credible counter-arguments. For each: who would make it, the strongest version of their reasoning, and the specific reporting question I'd need to ask to test it before publication. No straw men. Steelman everything."
This catches the angle your editor will catch in review. Better to find it before filing.
5. Local trend story sketch
Prompt: "You are a regional reporter. There's a national trend: [one-sentence trend, e.g., 'school districts adopting 4-day weeks']. My beat is [geographic area]. Sketch a 600-word local-angle story I could report in three days. Specify: the local data point I'd need to confirm the trend is happening here, two on-the-ground sources I'd interview, and the single quote that would make the story feel local rather than wire-rewritten."
National-trend-with-local-spin is bread and butter for regional reporters. The model is decent at the structure, weak at the data point, so you confirm both.
Interview prep and questioning
6. Source dossier
Prompt: "Act as a beat reporter preparing for an interview. The source is [name, role, organisation]. My interview goal is [what I need from them in one sentence]. Sketch a one-page dossier with: their public record on this issue (only things you're confident are true; flag anything you're uncertain about), their likely talking points, the topics they'll try to redirect to, and three questions where their answer will tell me whether they're being candid or in PR mode."
The flag-uncertainty instruction is critical. Without it, the model will pad the dossier with confident fabrications.
7. Open-ended questions for a reluctant source
Prompt: "You are an experienced print interviewer. My source is [role, e.g., 'mid-level civil servant who saw something but is nervous about retaliation']. I have 30 minutes. Write 12 open-ended questions that get them talking without making them feel cornered. Order them: warmup, scene-setting, story-of-the-event, why-they're-talking-now, contact-info-for-others. No yes-or-no questions. No questions that assume facts not in evidence."
The structure (warmup → arc → next-source) reflects how nervous-source interviews actually go. The constraint (no assumed facts) keeps you from leading the witness.
8. Follow-up generator for a contentious interview
Prompt: "Act as a copy desk veteran reading my notes. Here's the on-record exchange so far: [paste the relevant transcript chunk, redacted of any off-record material]. The source has just said [their last quote]. Give me five follow-up questions that would close obvious holes in what they just claimed. Order by how much each would discomfort them. Skip gotcha framing."
Useful in real time during a long interview. The "order by discomfort" ranking is informational; whether to ask the toughest ones is your call.
9. Hostile interview prep
Prompt: "You are a media trainer who has prepped CEOs for hostile press. My interview is with [name, role] about [topic with the angle that puts them on the defensive]. I have 15 minutes. List the five most likely deflection moves they'll use and a counter-question that returns the line of questioning to the actual issue. Format as a two-column table: deflection / counter-question."
Helpful for the rare confrontational interview. Run it the morning of, not the day before, so the responses stay fresh in your head.
10. Transcript to clean Q&A
Prompt: "Act as a transcriber and a light editor. I'll paste a 60-90 minute interview transcript: [paste; remove all confidential or off-record material first]. Output a Q&A version: my questions in bold, source's answers below them, light cleanup of filler words and false starts only. Do NOT paraphrase. Do NOT change any factual content. If a sentence is unclear, leave it as-is and flag it with [unclear in audio]. At the end, list any moments where the source seemed to evade the question."
The "do not paraphrase" line is binding. Editorial judgment about what to cut comes from you, not the model.
Research, FOIA and background
11. FOIA letter draft
Prompt: "Act as a public-records specialist. I need to file a [FOIA / state public-records / FOI] request to [agency]. I'm seeking: [describe records in plain language, the date range, the format you want them in]. Draft the letter, including: legal basis citation (use a placeholder like [CITE] rather than guessing the statute), specific records requested, format and delivery preference, fee-waiver request and reasoning, contact info placeholder. Keep it under 350 words."
The placeholder for legal citation is the safety valve. ChatGPT will hallucinate the wrong statute number every time you let it. You fill that in from the actual records office's guidance.
12. Background reading list
Prompt: "You are a research librarian. I'm getting up to speed on [topic] for a story. Suggest a reading sequence: three pieces that explain the issue's history, two pieces that establish current state of play, and one academic or government source for primary data. For each suggestion: title-and-author or organisation only (do NOT invent URLs or DOIs); a one-line reason it matters; and a flag if you're uncertain it exists."
The "do not invent URLs" instruction is non-negotiable. The model will absolutely make up convincing dead links if you let it.
13. Stakeholder reading map
Prompt: "Act as an opposition researcher. For story topic [topic], who has written what? List the five outlets or writers most associated with each major position on this issue. For each: their position in one line, the strongest published piece making that case (title and outlet only, no URLs), and what they typically miss. If you're uncertain a piece exists, label it [verify]."
Run before you start drafting so you know whose work you're pushing back against, agreeing with, or building on.
14. Document review framework
Prompt: "You are a document-driven investigative reporter. I have [describe the document set: e.g., '300 pages of city council emails from 2024 obtained via public records']. My story hypothesis is [one sentence]. Give me a five-step review framework: what to look for first, second, third; the names and dates I should track; the smoking-gun pattern that would confirm the hypothesis; and the pattern that would falsify it. Skip steps that require legal training."
The "what would falsify it" question is the one most reporters skip. The model is competent at structuring it.
15. Beat glossary
Prompt: "You are a copy editor who has worked the [beat] desk for ten years. List 15 acronyms or jargon terms a generalist reporter joining this beat needs to recognise within a week. For each: the term, the plain-English meaning in one sentence, and the most common reporter mistake when using it (e.g., conflation with a similar term, wrong scope). Format as a table."
Useful when an editor reassigns you to a new beat on Tuesday and you have a story on Friday.
Writing, headlines, and leads
16. Ten headline candidates
Prompt: "You are a copy chief at a metro daily. The story angle is: [one or two sentences with the actual 5Ws and any tension or conflict]. Generate ten headline candidates. Constraints: each under 70 characters, no clickbait phrasing, no questions, no colons unless the second half is concrete (no 'What it means'), no puns unless they earn the space. After the list, briefly say which two you'd put on the print front and which two you'd put on web, and why they differ."
This is the prompt we tested below. Headlines are where the model is genuinely useful: it'll generate volume fast, you pick.
What this prompt actually returns (tested 2026-05-09)
We ran Prompt 16 on free ChatGPT with this story angle baked in: 'A small city's water utility raised rates 40% effective July 1 to fund a $12M pipe replacement after three boil-water notices in 2025; council vote was 4-3; lower-income residents will see average bill jump from $48 to $67; utility says it's still below state median; advocates say there should be a tiered rate.'
Reproduced verbatim:
- Small City Water Rates Rise 40% After Pipe Failures
- Water Bills to Jump 40% Under Pipe Replacement Plan
- Council Approves 40% Water Rate Increase in 4-3 Vote
- Water Utility Raises Rates to Fund $12M Pipe Overhaul
- Boil-Water Notices Spur 40% Increase in Water Bills
- Residents Face Higher Water Bills After Utility Vote
- Water Bills Climb From $48 to $67 Starting July 1
- Split Council Vote Advances $12M Water Pipe Project
- Water Rate Hike Aimed at Preventing More Service Alerts
- Advocates Push Tiered Billing After 40% Rate Increase
Print front picks
- Water Bills to Jump 40% Under Pipe Replacement Plan
- Council Approves 40% Water Rate Increase in 4-3 Vote
These work well in print because they are direct, compact, and foreground the two strongest news elements: the size of the increase and the divided council vote. They scan quickly on a crowded front page.
Web picks
- Boil-Water Notices Spur 40% Increase in Water Bills
- Water Bills Climb From $48 to $67 Starting July 1
For web, specificity and search value matter more. "Boil-water notices" adds context that can match reader searches and social interest, while the dollar figures give readers an immediate sense of personal impact.
Editor's notes. The model held the under-70-character constraint cleanly across all ten and produced no clickbait, no questions, no colons, no puns, which is the part of this task it does well. The weaker moves: #4 calls the project an "overhaul" when the brief says "replacement", a small but real inaccuracy a copy desk would mark; #9 mis-frames the rate hike as "aimed at preventing service alerts" when the actual purpose per the brief is the pipe replacement (preventing future notices is a downstream effect, not the lede). The model also missed the equity angle entirely: not one of the ten foregrounds that lower-income households take the bigger hit, which a regional desk would absolutely surface. The print-vs-web reasoning is generic and doesn't say anything you didn't already know. What's usable: #2, #3, and #7 are honest options to put in front of an editor; the rest are fine to scroll past. The reporter's job here is volume-with-curation, which is what the prompt is good for.
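The hard constraints in this prompt (length, no questions, no bare colons) are mechanical enough to check in a few lines before anything reaches an editor. A rough sketch in Python; the helper name and the digit-based "concrete second half" heuristic are ours, not part of the prompt, and you'd tune both to house style:

```python
# Check headline candidates against the prompt's hard constraints.
# Hypothetical helper; the "concrete" heuristic is deliberately crude.
def check_headline(h: str) -> list[str]:
    problems = []
    if len(h) > 70:
        problems.append(f"too long ({len(h)} chars)")
    if "?" in h:
        problems.append("contains a question")
    if ":" in h:
        second_half = h.split(":", 1)[1].strip()
        # crude proxy for "concrete": require at least one digit
        if not any(ch.isdigit() for ch in second_half):
            problems.append("colon without a concrete second half")
    return problems

candidates = [
    "Council Approves 40% Water Rate Increase in 4-3 Vote",
    "Water Rates: What It Means for You?",
]
for h in candidates:
    print(h, "->", check_headline(h) or "OK")
```

A checker like this catches the rule violations; it obviously cannot catch the substantive misses (the "overhaul" inaccuracy, the dropped equity angle), which stay your job.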
17. Lead rewrite
Prompt: "Act as an editor who hates flabby leads. Here's my draft lead: [paste 1-3 sentences]. Rewrite it three ways: (a) tightest possible factual lead, no adjectives; (b) scene-driven lead that opens with one specific image from my reporting (I'll fill in the image); (c) anecdote-led variant that opens with a single named source (I'll fill in the source). Each under 40 words. After the three, tell me which one fits best for a [publication-type, e.g., 'metro daily front page' / 'longform monthly' / 'specialist trade pub'] and why."
Use this once you've got the reporting in. The model can't write the lead from nothing; given a real draft, it's a useful sparring partner.
18. Quote selection
Prompt: "Act as a senior writer. I have [N] quotes from my reporting on [topic]. Here they are: [paste with attribution]. The story is [approximate length] words. Tell me: which one belongs in the lead or first three paragraphs (and why); which one anchors the kicker; which two are redundant with each other so I can cut one; and which one is doing the most work for context but might get scissored by a copy editor."
A reporter knows their best quotes. The model is decent at spotting the redundancy and the unsung context quote.
19. Transition rewrite
Prompt: "Act as a line editor. Here are the last sentence of one paragraph and the first sentence of the next: [paste both]. The transition feels choppy. Rewrite the second sentence three ways so the connective tissue is invisible. No 'However', no 'Meanwhile', no 'In addition'. Do NOT change the content of the sentence; only its opening shape and rhythm."
Small surgery on a draft that's already 90% there. Run on the 2-3 stutters you flagged in your own re-read.
20. Headline kicker test
Prompt: "You are a reader who clicked on this headline: [paste headline]. The actual story kicker is [paste your kicker, the final image/quote/scene]. Does the headline set up the kicker, undercut it, or feel disconnected? Give me a one-sentence verdict and, if disconnected, propose a sharper headline that earns the kicker."
A specific, narrow check. Saves the moment in copy review where someone says "the headline doesn't quite track the ending."
Beat management, social, and post-publish
21. Newsletter blurb
Prompt: "You are the newsletter writer at [publication]. My story just published: [paste headline, dek, and first three paragraphs]. Write three blurb options for tomorrow morning's newsletter: (a) 60 words, news-of-the-day register; (b) 60 words, voicy-curator register, lightly opinionated; (c) 30 words, just-the-news bullet. Each should make a reader click without spoiling the kicker."
Useful when the newsletter editor asks you for the blurb at 6pm and you want options.
22. Social pack
Prompt: "Act as a social editor. My story is [paste headline + first paragraph + the single most quotable line from the piece]. Write a three-platform pack: a 250-character X post that doesn't lead with the headline; a LinkedIn post that frames it for [professional audience] in 80-110 words; a single Bluesky post under 300 characters. No emojis. No 'BREAKING'. No 'In this thread'."
The constraints are the value. The model defaults to social-media-cliche register without them.
23. Beat tip-sheet
Prompt: "Act as a beat reporter producing a Friday tip-sheet for myself. My beat is [topic + geography]. I have these open threads: [3-6 lines describing stories you're tracking]. Output a tip-sheet with: top three items I should chase next week and why; two questions I should email a source about today; one anniversary or scheduled event next month I should mark in my calendar now. Format as a one-page memo to myself."
A weekly reset that costs five minutes. Helps catch the story you've been adjacent to without seeing.
24. Source CRM note
Prompt: "Act as a personal-research assistant. I just had a 30-minute call with [source name, role]. My notes are: [paste your raw notes; redact off-record material]. Output a structured CRM-style note: who, role, contact info if mentioned, what they confirmed on the record, what they confirmed on background, what they declined to discuss, follow-up question I should send within a week, and a single-line reminder of the personal detail they mentioned (kid's name, hobby) so I'm not cold next time."
A note like this six months later is the difference between a warm relationship and a forgotten contact.
25. Post-publish reflection
Prompt: "You are an editor running a post-mortem. My story [headline] ran on [date]. It got [traffic/comments/pickup if known]. Here's the published piece: [paste]. List: three things this story did well that I should keep doing; two reporting decisions I should second-guess on the next one; one specific story or angle this opens up for follow-up reporting. Be specific. Skip generalities like 'great storytelling.'"
Run this Sunday morning of your week off. The model is competent at structured retrospectives if you don't let it generic-praise its way through them.
Tips for getting better results
A few things that consistently make these prompts return better output.
Paste in actual reporting, not summaries of reporting. The model is much better at reshaping real material than at generating the substance of a story it cannot see. Give it your transcript chunk, your draft lead, your real bullet list of confirmed facts.
Set the constraint hard. "Under 70 characters", "no rhetorical questions", "no em-dashes", "no questions that assume facts not in evidence". The model will follow tight constraints; it will drift into cliche without them.
Treat every output as a draft, not a deliverable. The post is called "25 prompts" for a reason: each one is a starting point. Edit on top of it. Cut three of the headlines. Rewrite one of the questions in your own voice. The job is yours.
Verify anything specific. Names, dates, votes, citations, statutes, statistics. The model is fluent at sounding authoritative on facts it's making up. Check.
Keep one prompt window per story. Pasting reporting from multiple stories into the same chat thread gets the model confused and surfaces material from one story into the output of another. Open a fresh chat for each piece.
FAQ
Will my editor know I used ChatGPT?
If you copy the output and publish it unedited, yes, and so will the readers. The model has tells: padded "everything-and-the-kitchen-sink" lists, balanced "both sides" framing, soft hedging language, clean parallelism in headlines that real headlines never have. If you use the prompts the way they're designed (rewrite, edit, cut), the output reads like your work because most of it is your work. Some newsrooms have AI-disclosure policies; check yours and follow it.
Is it OK to use ChatGPT to write a headline?
To generate headline candidates, yes. To pick the headline, no. The model does not know your house style, your competition's homepage, what's already on your front page, or whether the kicker earns the hook. Use it for volume; pick from the list yourself.
Can I paste in confidential source material?
No. Free-tier ChatGPT inputs may be retained and used for training. Treat the prompt window as a publication. Use placeholders for source names, document IDs, and any privileged content; substitute the real material into the output yourself, locally, after generation.
How accurate are facts and dates that ChatGPT provides?
Treat every specific fact, date, vote count, statute, and citation as a hypothesis to verify, not a confirmation. The model is months or years behind current events on its base training data and will confidently produce wrong dates, fabricated cases, and invented experts. Anything date-specific or quote-specific must be confirmed against primary sources before publication.
What's the single most useful prompt in this list for a working reporter?
For most beats, Prompt 6 (source dossier) and Prompt 16 (ten headline candidates). Prompt 6 saves you 20 minutes of background reading before an interview. Prompt 16 saves you the hour you'd otherwise spend staring at a blinking cursor in the headline field. Both have the property that the model's failure modes (fabrication, cliche) are easy to catch on inspection.
Wrapping up
The honest answer about AI in journalism is that the model is a competent drafting and structuring tool, a poor reporter, and a dangerous fabricator. Used inside the boundaries above, it can give you back time on the parts of the job that aren't reporting. Used outside them, it ends careers. Pick a single prompt from the 25, run it on a story you're working this week, and edit aggressively on top of the output. That's the workflow.
If you found this useful, the next post worth your time is our PRSO framework guide, which is the four-part structure underneath every prompt above.