Trust and Technology

The tools of journalism — and of communication more broadly — are evolving rapidly. We want to empower journalists to be leaders in helping communities understand these changes while modeling ethical and responsible use of technology. 

We’re asking the question: How can technology and news products help journalists build trust? We don’t have all the answers, but we see this work as essential to the future of building trust with news consumers. Here’s a look at our approach. 

We want to help newsrooms: 

  • adopt and create new technologies and tools to build trust
  • use technology and products to better engage with, listen to and reach communities
  • update systems to incorporate new tools and technologies
  • allow technology to increase efficiency and improve workflow so journalists have more time for transparency and listening/engagement work
  • be ethical and responsible leaders in using technology to share information
  • help their communities understand and use technology while being seen as a trusted resource for information on the topic

Using technology to build trust

Explore the different ways technology and news products can help build trust:

  • Our internal AI policies
  • AI in newsrooms
  • Using AI to eliminate bias
  • Example AI policies
  • Products, tools and CMS
  • Share your AI policy

Our policies around AI 

How do we incorporate AI in our work? At Trusting News we believe technology, including AI, can be a force for good in journalism.

By embracing AI and experimenting with it in our own work, we’re aiming to improve how we connect with our users (journalists and the journalism community) and set an example of ethical and responsible use of the technology.

We want to explore how AI can help us build a better relationship with journalists and help us be more productive and efficient in our work.

How we have used and would consider using AI
  • Summarizing. Assisting us in summarizing content we have produced, or content produced by newsrooms, for use in training materials or internal notes.
  • Repurposing. Helping us share content we have produced across multiple platforms. We always edit and check what was produced for accuracy and context.
  • Headline/name suggestions. Providing ideas for headlines or name suggestions for published content, training offerings or similar projects. 
  • Research. Assisting us with internal research necessary to help us work with newsrooms. We make sure to verify any information shared before acting on it.
How we are not using AI
  • Creating images. At this time we do not anticipate using AI to help us create images for internal or public-facing content.
  • Creating content from scratch. We do not plan on having AI create content for us from scratch, meaning formulating a prompt and using what it produces in our content. We plan to use AI as a tool to improve or supplement our content but not as a tool to create for us. 
  • Publishing directly. We will not publish content created by AI without first editing, fact-checking and reviewing it.

If we use AI to help us create content, we will disclose our use of AI and include an explanation of how it was used. We have done this before in this Medium post by adding a disclosure to the article.

AI in our work with newsrooms

Throughout different technological disruptions, we have seen journalists struggle to adapt. With AI, history could repeat itself. We are seeing some major news players engage with and think about the use of AI, while others are saying they will not use it at all, don’t have time to think about it, lack the tech skills to navigate it or are using it without any transparency or consistency. 

At Trusting News, we want to make sure news organizations are thinking carefully, strategically and ethically about using AI while also using it to help them build trust. We believe newsrooms can accomplish this, and we want to help them explore how by focusing on three areas: transparency and disclosure, education, and engagement and listening.

Transparency and disclosures

As is central to our mission, we want newsrooms to communicate with their communities about how they are using the technology. This will create transparency and understanding around a technology that can seem scary, confusing and untrustworthy. We also want to make it easy for smaller news organizations to engage with the technology and create policies around their use of it, so newsrooms of all sizes can benefit and build trust.

Poynter has published a guide to help newsrooms develop an ethics policy about their use of AI. At Trusting News, we see our role as helping newsrooms talk publicly about those policies while making sure it is easy to tell when AI has been used.

Journalists should not try to pass work or ideas off as their own if AI helped in the process. People will feel tricked and deceived, and that will only add to the negative feelings they may have about news and “the media.” By disclosing and explaining your use of AI, you avoid that. You also help people better understand what AI is, how it works and how it can be used. People will appreciate that, especially with a topic that can feel confusing or overwhelming.

While we believe disclosure and transparency are necessary, we don’t yet know enough about what a disclosure should include, how often it should be repeated and what language should be used to help build trust. (We have research in the works on those topics.) Based on other transparency work we have done with newsrooms, we want to start by encouraging newsrooms to do the following:

  • Add explanations of AI use to each story they apply to
  • Write explanations that are specific enough for people to understand how the technology was used (not a generic statement)
  • Include definitions or short explanations about what technology was used and how it works
  • Include language about why you used the technology. If it was to save time or increase efficiency, say that. Then consider sharing what you’re doing with that extra time: maybe saving time on drafting headlines gives you more time to respond to comments or to produce stories that would otherwise go uncovered. If you explain how the technology helps you better serve your community (through more depth, better sourcing, better SEO, more reporting, etc.), people are less likely to assume you’re cutting corners or being lazy. A sketch of how a story-specific disclosure could be templated follows this list.
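
To make the “specific, not generic” advice concrete, here is a minimal sketch in TypeScript of how a newsroom might template a story-level AI disclosure. This is our own illustration, not an existing tool; the interface and field names are hypothetical.

```typescript
// Hypothetical shape of the details a reporter fills in per story.
interface AIDisclosure {
  tool: string;       // which technology was used, e.g. "an AI transcription service"
  task: string;       // what it did on this particular story
  whyItHelps: string; // what the saved time made possible
}

// Builds a specific, story-level disclosure rather than a generic statement.
function renderDisclosure(d: AIDisclosure): string {
  return (
    `How we used AI in this story: We used ${d.tool} to ${d.task}. ` +
    `Everything it produced was reviewed, edited and fact-checked by our journalists. ` +
    `Using this tool ${d.whyItHelps}.`
  );
}

// Example usage:
console.log(
  renderDisclosure({
    tool: "an AI transcription service",
    task: "transcribe a 90-minute school board meeting",
    whyItHelps: "freed up time we used to interview two additional parents",
  })
);
```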

With research partners, Trusting News has been testing how newsrooms can best explain their ethics and news processes since 2016. We also have worked with researchers to study where transparency elements are most effective, what language works best and what to explain. 

Building on years of research and knowledge, Trusting News is planning to work with researchers and newsrooms to determine what language should be used in disclosures and policies, where disclosures should go, how often they should be used and, most importantly, how news consumers respond to all of it.

We’re looking for both researchers and funding partners interested in collaborating on testing strategies and building knowledge around the efficacy of AI policies and disclosures. Interested? Get in touch at info@TrustingNews.org.

Education

Educating the community about AI and technology is one way we believe journalists can build trust with their audience. By demystifying complex technologies and explaining how they’re used in journalism and their community, journalists can foster transparency and understanding. They can also model responsible exploration with tools that can seem intimidating. This education can empower the audience to critically evaluate the role of AI in the world and make decisions about how they use it and want to see it used in their communities.

If journalists are seen by their community as a trusted and useful source of information and explanations about AI, it can build a stronger bond between journalists and their community, bolstering trust and credibility in the news they deliver. We want to help newsrooms explore how to best educate their communities about AI and help them use resources that are already out there from groups like The News Literacy Project and Aspen Digital.

Engagement

By actively involving the audience in discussions about the use of AI in news production, journalists demonstrate a commitment to transparency and inclusivity. This approach allows journalists to understand community concerns, preferences, and expectations regarding AI, leading to more tailored and relevant reporting.

By soliciting feedback and insights from the community, we believe journalists can foster a sense of partnership and co-creation, empowering the community to actively participate in shaping the future of journalism. By valuing the input of their audience on AI and technology-related matters, journalists can forge stronger relationships grounded in mutual respect, understanding and trust.

We also believe AI can help journalists be more effective and efficient at engaging with their audience. Gather has discussed how AI can help build a community mind map, and multiple newsrooms are exploring how AI chatbots can provide more information to their users more quickly, including restaurant recommendations. We want to help newsrooms explore using the technology to make engagement easier, more efficient and a more routine part of the reporting workflow so it happens more often and more consistently.
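
As a rough illustration of the chatbot idea, here is a minimal sketch assuming the openai npm package; the model, prompt and function names are our own assumptions, not any newsroom’s actual setup. The key design choice is constraining answers to newsroom-provided context so the bot admits uncertainty instead of inventing answers.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Sketch: answer a community question using only newsroom-provided context.
async function answerCommunityQuestion(
  question: string,
  newsroomContext: string
): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model choice
    messages: [
      {
        role: "system",
        content:
          "You answer reader questions for a local newsroom. " +
          "Use only the provided context. If the answer is not in the context, " +
          "say so and suggest contacting the newsroom directly.",
      },
      { role: "user", content: `Context:\n${newsroomContext}\n\nQuestion: ${question}` },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```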

Using AI to eliminate bias, polarization

Too often, journalism amplifies extreme views and ignores more nuanced ones. In addition, news consumers make assumptions about journalists’ own values through the way stories are framed, sourced and written.

At Trusting News, we’ve talked about how this can lead to distrust in journalism. We’ve also worked with partner newsrooms to help journalists create less polarizing content and be aware of potential bias in reporting. We know it is possible for journalists to publish content like this, but we also know it can be time-consuming and challenging.

What if technology could help? Can AI help identify what’s absent, incomplete or offensive in story pitches or story drafts, while there’s time for the journalist to address the problem? We believe so, and we’re ready to test the ideas with working journalists. 
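
One inexpensive way to prototype that question is to ask a general-purpose model to critique a draft before it is filed. Below is a minimal sketch, again assuming the openai npm package; the prompt and model are illustrative assumptions, and anything the model flags would be a prompt for the journalist’s judgment, not a verdict.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Sketch: flag what may be absent, incomplete or one-sided in a draft,
// early enough that the journalist can still act on it.
async function reviewDraftForGaps(draft: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model choice
    messages: [
      {
        role: "system",
        content:
          "You are an editor's assistant. For the story draft provided, list: " +
          "(1) perspectives or stakeholders that are missing or underrepresented, " +
          "(2) framing or word choices that could read as loaded or polarizing, " +
          "(3) questions a skeptical reader might ask that the draft does not answer. " +
          "Raise possibilities for the journalist to weigh; do not rewrite the story.",
      },
      { role: "user", content: draft },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```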

At Trusting News we want to work directly with journalists to explore the following questions:

  • How journalists can build AI tools into the reporting process to help them be aware of diverse perspectives on the issues they are covering
  • How technology can help journalists gain insights into their own potential stereotyping and biases and discover alternative viewpoints and diverse sources
  • Whether the information learned promotes self-reflection, awareness of bias and willingness to consider diverse perspectives
  • How newsrooms can help the public better understand AI by discussing their use of it publicly

If you want to learn more about our thoughts in this area, read this newsletter and a summary of what we learned after talking to journalists about this idea here.

Examples: Newsroom policies around AI and technology

We want to learn more about how news consumers respond to policies about AI use in journalism so we can better understand what language and approaches will be most effective. While we continue to work toward that goal, we want to point to some existing policies and highlight elements of them that demonstrate best practices we know from transparency and trust-building work in other areas of journalism.

If you have a public-facing policy about your use of AI or have used a disclosure about your use of AI in a story, we would love to see it. You can use the embedded form below to submit it to an Airtable database where we are collecting examples.

The Red Line Project uses clear language to explain how AI could be used and provides examples of what that may look like to the news consumer (including assisting in research to find sources or creating photo illustrations that would be clearly marked as being created by AI) and how it will not be used (including writing news articles).  

The Salt Lake Tribune explains why they would use AI: to extend staff to better serve their readers. This shows they are thinking about the technology as a way to better serve the community, not to lay off staff or chase clicks (assumptions news consumers could make). They also give a specific example of how an AI-supported transcription service makes the reporting process more efficient, allowing them to cover more areas of the community. Other specific examples of how it helps them better serve their community include creating content for different platforms and extracting information from PDFs for stories.

SFGate makes it clear they are experimenting with AI but that no content will be published without an editor (a human) reviewing it. We think this distinction (whether a human was involved in producing the content) is something news consumers will want to know and likely have strong preferences and opinions about. They also say content will be fact-checked before being published, and they give users a way to contact them with questions and feedback.

Wired explains their use of AI in an easy-to-read list of how they will and won’t use the technology. The language is clear and provides examples of what this could look like to news consumers when they encounter Wired’s content. They also provide a note at the bottom of the page about when and why the policy was updated. We believe how newsrooms use AI will change, and newsrooms should be ready to update their policies as that happens.

If you are a researcher interested in trust and perceptions of news, we’d love to talk about potential collaborations. Please email info@TrustingNews.org.

Products, tools and CMS

How can technology and news products help journalists rebuild trust with their users?

We’re curious and interested in exploring solutions to this question because we want to make building trust as easy as possible for journalists. If technology can help make that happen, would that result in more journalists adding transparency and engagement strategies into their workflow? More news content that’s reflective of communities? Easier access to journalists? Clearer explanation of how news works?

If the answer to any of those questions is yes, we believe we could make some real progress on helping journalists build trust with their users. 

In the past two years, we have hosted brainstorming sessions and co-led a training program with the International Center for Journalists focused on how technology and news products can help build trust. We have also led sessions and discussions about how CMSs can make building trust easier. You can learn more about those ideas in this slide deck from an NPA session, which explains why this is important and includes examples of how newsrooms have built transparency and engagement tools into their CMS.
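
As one hypothetical illustration of what building trust tools into a CMS could look like at the data level, the sketch below models transparency elements as structured fields on a story type, so they travel with the story instead of being pasted in by hand. Every name here is our invention, not any particular CMS’s schema.

```typescript
// Hypothetical trust fields a CMS story type could carry.
interface TrustMetadata {
  authorBioUrl?: string;  // link to a "who reported this" page
  aiDisclosure?: string;  // story-specific explanation of any AI use
  sourcingNote?: string;  // why these sources, how they were reached
  correctionsLog?: {      // visible history of corrections
    date: string;
    note: string;
  }[];
  feedbackEmail?: string; // a direct way for readers to respond
}

interface Story {
  headline: string;
  body: string;
  trust: TrustMetadata; // rendered alongside the story by the front end
}
```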

We’re looking for both newsrooms and technology companies interested in testing how the tools of journalism can make earning trust easier. Interested? Get in touch at info@TrustingNews.org.

Share your examples 

If you have a public-facing policy about your use of AI, a disclosure about your use of AI in a story, or an explanation of how you use products and technology in coverage, we’d love to see it. Use this form to submit it to the Airtable database where we are collecting examples.
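
The embedded form is the intended way to submit. For the curious, though, the sketch below shows the general shape of Airtable’s REST API for creating a record programmatically; the base ID, table name and field names are placeholders we made up, not the real database.

```typescript
// Purely illustrative: base ID, table name and field names are placeholders.
async function submitExample(newsroom: string, policyUrl: string): Promise<void> {
  const res = await fetch(
    "https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Examples",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        records: [{ fields: { Newsroom: newsroom, "Policy URL": policyUrl } }],
      }),
    }
  );
  if (!res.ok) throw new Error(`Airtable API error: ${res.status}`);
}
```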

Having trouble viewing the form? Open it in a browser here.