Trusting News AI Resource: Spotting and disclosing AI in news content you don’t control

News consumers overwhelmingly say they want the use of AI to be disclosed. But sometimes you might not know whether AI was used, because the content comes from a person, vendor or department your newsroom doesn’t oversee.

The reality, though, is that if a news consumer suspects AI use in content published on your website, in your newsletter, on your social feeds or on-air, they will associate that use with you, whether the content came from an outside vendor or your in-house team.

Use this guide to ask the right questions, make informed decisions and build trust when AI might have been used in the creation, packaging or distribution of content you don’t fully control. We have guidance for the following situations:

  • Freelancers & AI
  • Partner content (cartoons, crosswords, images, wire services)
  • Sources & letters to the editor
  • User-generated content (UGC)
  • Advertisers and sponsored content

For more resources on how to build trust with your use of AI, review our AI Trust Kit.

General questions/considerations for all content

  • Could AI tools have been used to generate, alter or summarize this content — even if not by us?
  • If AI was used, does the public have a right to know? Would they want to know? How would they react if they found out and we had not told them?
  • Do we have policies or expectations in place for asking about, confirming or disclosing how AI was used in this case?
  • Is there contract or collaboration language we can check that clarifies expectations with partners, freelancers, vendors or platforms?

Freelancers & AI

For publications

Questions to ask freelancers

  • Did you use AI tools (e.g. ChatGPT, image generators, transcription software) in any part of your reporting, writing or editing process? If so, how and where?
  • Did you independently verify any AI-generated outputs for accuracy and to meet our ethical standards? Please explain how.
  • Are your sources aware of or affected by AI use in this story?
  • Please provide an AI-use disclosure for us to consider using publicly, so we can inform our audience about the use of AI in this content.

What to prepare for freelancers

  • Add AI disclosure and usage policies to all freelance contracts and agreements.
  • Share your AI disclosure and usage expectations for freelancers anywhere you solicit pitches, such as in a pitch guide.
  • Explain your expectations around freelancers feeding content into generative AI tools and systems while working on assignments for you.
  • Explain whether and how you will feed freelance content into generative AI tools and systems.

For freelancers

Questions to ask publications

  • Will my content be edited, summarized or altered using AI tools after submission?
  • Will my content be fed into generative AI tools?
  • In what ways can my content be used with generative AI tools?
  • Will I be credited if AI-generated outputs are derived from my reporting?
  • If I use AI in my journalism (for research, writing, editing, etc.), would you like to know? If so, what is the best way to inform you?
  • Will any disclosure be included if AI is used in the packaging of my story?

What to prepare for publications/clients

  • Publish or at least draft an AI use policy for your own work. You want to be able to explain your use of AI (or non-use of AI) and should have a consistent approach to how you use it and, more importantly, how you make sure your content is still accurate and ethical. (Use this worksheet to help create one.)
  • Decide whether you want your content fed into generative AI tools, and consider raising the issue with the publication or making a request accordingly.

Partner content (cartoons, crosswords, images, wire services)

Questions to ask partners

  • Does your content include AI-generated or AI-edited elements?
  • Do you label or disclose that when applicable?
  • Are your cartoonists, illustrators or crossword creators using generative tools?
  • What is your AI use policy for freelancers who contribute content to you? Do you require that they disclose their AI use?
  • What should we do if we suspect AI use in your content and it was not disclosed?
  • If you use AI in your content production, how do you check the content for accuracy and compliance with journalism’s ethical standards?

What to prepare

  • Ask that any AI use in visual, written or interactive partner content be disclosed to you. You can do this by adding the request to agreements or contracts. Consider language like this: “Our publication is committed to transparency. If your content includes AI-generated elements (images, text, puzzles), we require you to disclose this in advance so we may inform our readers.”
  • Decide how, or whether, you will label or explain the AI use to your readers.
  • Create a policy or workflow for following up with partners you suspect have used AI but have not disclosed it. Be sure to cover how you will share the AI use with your audience and how you will verify it if the partner denies using AI but you still have doubts.

Sources & letters to the editor

Questions to ask internally

  • Do we allow AI-generated letters to the editor or other solicited or submitted opinion content?
  • If we allow opinion content created with the assistance of AI, do we want writers to disclose that? If so, how?
  • Do we ask sources whether their quotes, emails or other written responses were generated or edited by AI? (Consider submission-driven products, such as voter guides that collect responses from lawmakers or roundups of reader-submitted reviews of favorite restaurants and local activities.)
  • What’s our policy if we discover that AI-generated material was submitted without disclosure? In the case of a source, do we disclose that they submitted AI-generated content in our reporting? Does the public deserve to know?

What to prepare

  • Include a line in online submission forms: “Was any part of this submission created or edited using AI tools? If so, please describe.”
  • Include language explaining that you expect disclosure from people who submit opinion content. Consider language like this: “We ask letter writers to disclose whether they used AI tools (such as ChatGPT) in writing their letter.”
  • Also include language when soliciting submissions for products like voter guides: “We ask candidates or groups to disclose whether they used AI tools (such as ChatGPT) in crafting their responses.”
  • Ask sources about AI use (include the question in emailed interview requests, or ask during or after interviews). Consider language like this: “Did you use AI to assist with or create any answers to these questions?”
  • Create a policy or workflow for following up with individuals you suspect have used AI but have not disclosed it. Be sure to cover how you will share the AI use with your audience and how you will verify it if the person denies using AI but you still have doubts.

User-generated content (UGC)

Questions to ask

  • Are we checking whether photos or submissions were generated or manipulated with AI?
  • Does the content show signs of being altered by editing software or created with AI?
  • Is our newsroom trained and prepared to spot images or content that were created with AI or altered to spread mis- or disinformation?
  • How can we prepare our newsroom to detect altered content and verify accuracy? Do we have any reverse image search or AI-detection strategies or tools in place? (See the sketch after this list for one lightweight starting point.)
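
If someone in your newsroom is comfortable with a little code, one lightweight first-pass check is to scan submitted images for camera metadata. Below is a minimal Python sketch using the Pillow library; the file name is a placeholder, and the logic is an illustrative assumption rather than a reliable detector. Missing EXIF data is only a weak signal (many platforms strip metadata, and AI images can carry fabricated metadata), so treat any flag as a prompt for human review, never as proof.

  # A minimal sketch: flag images that lack basic camera metadata for human review.
  # Assumes submissions arrive as image files on disk; "submission.jpg" is a placeholder.
  from PIL import Image
  from PIL.ExifTags import TAGS

  def exif_summary(path):
      """Return human-readable EXIF tags for an image, or an empty dict if none."""
      with Image.open(path) as img:
          exif = img.getexif()
          return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

  def flag_for_review(path):
      """Flag an image if it lacks the Make/Model/DateTime tags cameras typically write."""
      tags = exif_summary(path)
      return not any(key in tags for key in ("Make", "Model", "DateTime"))

  if __name__ == "__main__":
      print(flag_for_review("submission.jpg"))  # True = route to a human reviewer

Pair a check like this with reverse image search and, where available, verification of C2PA Content Credentials; metadata checks alone will miss many cases.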

What to prepare

  • Require submitters to confirm that their content is original and was not AI-generated or taken from someone else. Consider language like this: “By submitting, you confirm that this photo/content was created by you and not generated or altered using AI tools or other editing tools.”
  • Establish a review process for UGC that checks for authenticity and accuracy and also makes sure the content does not violate copyright law.

Advertisers and sponsored content

Questions to ask

  • Are advertisers using AI-generated images, testimonials or copy?
  • Are disclaimers provided when AI is used?
  • Do we want them to provide disclaimers if they use AI? How much information do we want from them regarding their use of AI?
  • Could AI-generated ad content be mistaken for newsroom content? How can we make sure it is differentiated? (For more on how to label advertising or sponsored content, see our Funding Trust Kit.)

What to prepare

  • Require advertisers to disclose AI use.
  • Establish a review process for sponsored content to screen for signs of AI use, such as hallucinations or inaccuracies.

If you have other suggestions or questions you think should be added to this resource, send them to Assistant Director Lynn Walsh: Lynn@TrustingNews.org.

At Trusting News, we learn how people decide what news to trust and turn that knowledge into actionable strategies for journalists. We train and empower journalists to take responsibility for demonstrating credibility and actively earning trust through transparency and engagement. Learn more about our work, vision and team. Subscribe to our Trust Tips newsletter. Follow us on Twitter, BlueSky and LinkedIn.

Assistant director Lynn Walsh (she/her) is an Emmy award-winning journalist who has worked in investigative journalism at the national level and locally in California, Ohio, Texas and Florida. She is the former Ethics Chair for the Society of Professional Journalists and a past national president for the organization. Based in San Diego, Lynn is also an adjunct professor and freelance journalist. She can be reached at lynn@TrustingNews.org and on Twitter @lwalsh.