New research: Journalists should disclose their use of AI. Here’s how.

Note: Assistance with the survey data cited in this story came from Dr. Benjamin Toff at The University of Minnesota and was analyzed by Suhwoo Ahn, a postdoctoral research associate at The University of Minnesota. Some of the newsrooms used AI to transcribe or summarize their one-on-one interviews with community members. Those summaries and transcriptions were provided to Trusting News, and some were included in this story. In addition, Trusting News used AI to help analyze the collective interview response summaries and only included takeaways we verified to be accurate. Using AI allowed us to more quickly provide results to the journalism industry. To read more about our approach to using AI, click here.

News consumers are telling journalists they want the use of artificial intelligence in news coverage to be disclosed, and a majority prefer a more detailed disclosure about that use, including how AI was used in the reporting process and the specific tools utilized. This is according to survey and interview data collected by newsrooms in a recent Trusting News and Online News Association (ONA) newsroom cohort. 

The data was collected in July and August 2024 to explore audience perceptions of newsrooms’ use of AI by gathering input from their communities about how each newsroom individually uses or could use AI in their work. The participating newsrooms collected more than 6,000 survey responses. An overwhelming majority of respondents (93.8%) said they want the use of AI to be disclosed, and over half of the respondents said they want to know both how AI was used in the reporting process and the specific tools utilized.

Learn more about the survey below and in this AI Trust Kit.

The information gathered reinforces what previous research has found related to audience perceptions about the use of AI in news content, specifically the desire for disclosure of AI when it is used and the involvement of humans in the editing process. However, compared to previous research, these survey results show these news consumers are less comfortable with journalists using AI for tasks like translation and data analysis for investigative stories.

Based on the research, Trusting News is recommending that journalists:

  • Disclose their use of AI
  • Engage with their audiences about AI
  • Invest in educating their communities about AI
  • Model responsible behavior and exploration

Further exploration of each of these recommendations is below. 

Important note: This research is the first step in a year’s worth of work we have planned focused on AI disclosures and ethics, funded by a grant from the Patrick J. McGovern Foundation. There is a lot to learn, and future research might point to different solutions. But at Trusting News, we believe in learning in public, and sometimes that means sharing knowledge as it evolves. To follow our work on this topic, subscribe to our weekly Trust Tips newsletter or fill out this form.

How to disclose your use of AI

Most of the respondents in the surveys (93.8%) said they want journalists to disclose their use of AI (2.9% said they did not and 3% said they were unsure).

This call for transparency reinforces what news consumers have been saying in other research. In these surveys, Trusting News and the journalists tried to gain more clarity around what should be included in disclosures. According to the data collected in our recent newsroom cohort, news consumers said the following information would be important to include in a disclosure:

  • Knowing why journalists decided to use AI in the reporting process (87.2% said this would be important)
  • Understanding how journalists will work to be ethical and accurate with their use of AI (94.2% said this would be important)
  • Knowing a human was involved in the process and reviewed content before it was published (91.5% said this would be very important)

Based on this, Trusting News is recommending disclosures include the following information:

  • Information about what the AI tool did
  • Explanation about why the journalist used AI, ideally using language that demonstrates how the use of AI benefits the community or improves news coverage
  • Description of how humans were involved in the process (assuming this is true)
  • Explanation about how the content is still ethical, accurate and meets the newsroom's editorial standards

Other research has shown people are more likely to trust information when they know how it was created. So explaining exactly what the AI tool did, why it was used and how humans were still involved helps demystify the use of the tool. Reassuring readers that the content still meets ethical and editorial standards, even with AI’s involvement, could also further strengthen trust in the content and newsroom. 

Building on other trust strategies focused on transparency, specifically what language works best to build trust when explaining reporting goals, mission or process, a disclosure about using AI could be written like this:

In this story we used (AI/tool/description of tool) to help us (what AI/the tool did or helped you do). When using (AI/tool) we (fact-checked, had a human check, made sure it met our ethical/accuracy standards). Using this allowed us to (do more of x, go more in depth, provide content on more platforms, etc.).

Filling in the blanks using the format above, a disclosure could look like this:

In this investigative story, we used Artificial Intelligence to assist in the analysis of the public records received from the state. The reporters fact-checked the information used in the story by re-reviewing the public records by hand. Requesting public records to get beyond the "he said, she said" is an important part of our reporting process, and AI allowed us to do this more quickly.

With so many use cases of AI in journalism, the possibilities of how to write a disclosure including these elements seem almost endless. And while this research provided some clarity to questions around how to disclose the use of AI, there are still questions left to be answered, including where to put disclosures and how often to use them. Trusting News hopes to find answers to those questions through further research later this year and in early 2025. (To stay up-to-date on this research, fill out this form and subscribe to our weekly Trust Tips newsletter.)

For now, if you are using AI in your newsroom, it seems clear the audience wants to know about it and wants more details than a vague "AI was used in this story" statement. We can do more harm than good if our disclosures cause confusion or lead to inaccurate assumptions.

To see more examples of draft disclosure language and tips for writing policies or disclosures about your AI use, check out our AI Trust Kit.

How to engage with your audience about AI

Possibly even more important than the public’s strong demand for transparency, which these results highlight, is the value of directly asking your audience about their thoughts on AI.

It’s especially important to ask your own audience about their comfort level with different uses of AI, as not all AI applications will sit well with everyone. Some people and some communities may be more accepting of AI-assisted tasks like editing or fact-checking, while others may be uncomfortable with AI generating entire articles or making editorial decisions.

To help newsrooms engage and listen to their audiences about AI, Trusting News has developed two tools that are now available for any newsroom to use: a survey and a community interview guide.

Trusting News created the survey for newsrooms to use, drawing on previous research and journalist feedback about transparency and disclosure. The goal was to better understand news consumers’ comfort levels with how journalists use AI and to gather insights on what should be included in disclosures. The survey aimed to address the current needs of newsrooms by focusing on how AI is being used or how newsrooms are considering using it in reporting rather than exploring its full range of potential uses. 

Something we learned by working with this cohort of journalists and having them use the survey and community interview guide is the importance of using the term “Artificial Intelligence” rather than just “AI” when having conversations with the public. This feedback from the journalists reinforces the importance of remembering to meet news consumers where they are: Some will be very familiar with the technology and may even use AI tools frequently, while others might not know what you mean when you say “AI” or what an “Artificial Intelligence tool” is or could do. Be ready to explain basic elements of the technology, and don’t assume the term “AI” is as widely understood as you may think, or as it is in your own social circles.

Trusting News is making the full survey available so other newsrooms can replicate it in their own communities. Click here to make a copy of it for your own use, and review this AI Trust Kit for more guidance on how to use it.

In addition to the survey, the participating journalists conducted in-depth interviews with their news consumers to learn what type of AI use community members are comfortable and uncomfortable with, while also gaining more insight into how their news consumers want to be notified if AI is used in the journalistic process. 

Some of the themes from those interviews include:

  • Most participants were cautious about AI’s role in journalism, expressing concerns about its accuracy and potential to eliminate human jobs. (The concern over job loss in newsrooms came up often during the one-on-one interviews.)
  • A commonly shared thought was that human oversight is essential when AI is involved. For example, participants wanted journalists to review AI-generated translations, captions and content.
  • Most want newsrooms to disclose AI use on an article-by-article basis, like in footnotes or as part of the content. 
  • Many participants were supportive of newsrooms providing educational resources about AI. Some said newsrooms could offer workshops, guides or articles explaining AI’s role in reporting, saying it would help people differentiate between ethical AI use and misuse.

Research shows that journalists simply taking the time to talk and listen to people builds trust and goodwill. Click here to see the full community interview guide and check out the AI Trust Kit to learn more about how to use it in your community.

How to invest in educating your community about AI

More than 80% of the survey respondents said it would be helpful if a newsroom provided information and tips to better understand AI in general and to detect when AI was used in content creation. The demand for education around AI became even clearer during the one-on-one interviews.

When asked about education and engagement opportunities around AI, people told the journalists:

  • To offer workshops and produce guides/glossaries or articles explaining AI’s role in reporting
  • To offer opportunities for people to learn about AI’s capabilities and limitations, potentially through outreach events or interactive/Q&A sessions
  • To produce more in-depth articles and transparency around how AI is involved in journalism 
  • To continue engaging the public in conversations about AI (participants said they appreciated being consulted)

Some, particularly those who said they were knowledgeable about AI, were less interested in educational opportunities but said they still believed these efforts could benefit the general public and smaller news organizations.

Overall, this shows that enhanced media literacy would be welcomed by the public. It also highlights an essential role journalists can play in educating the public about new technologies.

By helping readers understand what AI is, where it’s being used and how it affects news production and the online information ecosystem, journalists also have an opportunity to build trust. By being helpful and useful to audiences, journalists and news organizations can be seen as a trusted resource on the topic.

Model responsible behavior and exploration

At Trusting News, we believe journalists have a unique opportunity to build trust by not only being transparent about their use of AI but also by actively seeking and incorporating audience feedback. This moment offers a chance to model responsible and ethical AI use, showing the public how AI can be used responsibly and ethically while providing valuable learning experiences. 

It’s also a chance to demystify new technology and show what it looks like to engage with new tools. If people are feeling confused and overwhelmed, we can help them instead feel empowered and curious if we talk publicly about what we’re experimenting with and learning.

By clearly demonstrating how AI is applied in journalism and engaging with readers’ concerns, journalists can help the public better understand and trust the technology. Embracing this chance to build trust through thoughtful AI practices can set a positive precedent for the future of journalism and enhance public comprehension of AI, while building trust in our newsrooms. 

What’s next

The journalists who participated in this cohort are planning on using their survey data and community interviews to either develop policies around AI use in their newsrooms or improve existing ones. 

As policies and disclosures are developed, Trusting News will be sharing them by updating the AI Trust Kit and highlighting them in the weekly Trust Tips newsletter. If you are developing or have developed a policy or way to disclose the use of AI in your journalism, please let us know by contacting Lynn Walsh at Lynn@TrustingNews.org.

In addition, Trusting News is working with researcher Benjamin Toff at the University of Minnesota to conduct further research focused on determining what disclosure language could build trust.

Once the research is complete there will be another cohort opportunity for newsrooms to write and test disclosures by adding them to published news content and learning how users respond to them. The newsrooms selected for the cohort will receive stipends for participating. We anticipate launching an application for it in early 2025. (To stay up-to-date on this research and the cohort opportunity, fill out this form and subscribe to our weekly Trust Tips newsletter.)

Lastly, if you would like Trusting News to help guide your newsroom or a group of newsrooms in your area or company through a similar project, let us know by completing this form. We’re also looking for both researchers and funding partners interested in collaborating on testing strategies and building knowledge around the efficacy of AI policies and disclosures. Interested? Get in touch at info@TrustingNews.org.

Trusting News would like to thank the newsrooms who made this work possible:


At Trusting News, we learn how people decide what news to trust and turn that knowledge into actionable strategies for journalists. We train and empower journalists to take responsibility for demonstrating credibility and actively earning trust through transparency and engagement. Learn more about our work, vision and team. Subscribe to our Trust Tips newsletter. Follow us on Twitter and LinkedIn. 


Assistant director Lynn Walsh (she/her) is an Emmy award-winning journalist who has worked in investigative journalism at the national level and locally in California, Ohio, Texas and Florida. She is the former Ethics Chair for the Society of Professional Journalists and a past national president for the organization. Based in San Diego, Lynn is also an adjunct professor and freelance journalist. She can be reached at lynn@TrustingNews.org and on Twitter @lwalsh.