Developing an AI usage policy in your news organization

The American Journalism Project launched its Product & AI Studio in 2023 with support from OpenAI and additional support from the Patrick J. McGovern Foundation. The studio explores how generative AI and other emerging technologies can serve local news, sharing insights along the way. This summer, the Product & AI Studio asked members of our portfolio with active AI grants whether their organizations had published public AI usage policies. This post explores the AI policy adoption journey of those organizations. 

As more news organizations adopt AI into their workflows, it’s important that they have clear operating policies in place around usage, privacy, accuracy and transparency. An AI policy that transparently explains how AI is used at the organization can build trust with audiences and protect newsroom integrity.

Nearly three out of four journalists working in newsrooms have already tried generative AI on the job, according to a 2024 study by the Associated Press (AP). Almost 70% of respondents said they’ve used AI to help produce stories, graphics or other editorial work. At the same time, a worldwide survey of 101 journalists found that only about 20% of local news organizations have public AI usage policies, a gap that reflects a broader global trend.

To better understand this trend, we surveyed grantees across our 2025 AI-focused cohort, which includes 28 newsrooms from AJP’s portfolio. About half are engaging with AI usage policies in some form: four have published public-facing AI usage policies, three have internal usage policies, and six more are in the process of drafting theirs.

Thoughtful AI policy development is an important complement to thoughtful usage, and we wanted to learn more. We interviewed two newsroom leaders over email — one with a policy in place and one without — and share their comments here. We’ll start with barriers and then move to recommendations.

Why news organizations struggle to implement AI policies

News organizations cited a number of barriers to implementing their policies, and we’ve outlined a few here.

Uncertainty regarding standards

Organizations such as the AP and Society for Professional Journalists (SPJ) have published guidelines, including “Standards around generative AI” and “The ethics of using AI.” However, these standards are different from operating policies. As a result, news organizations have been left to create AI policies on their own, sometimes with a limited understanding of all the ways AI can be, or is being, used.

Cynthia Tu, data reporter and news technology specialist at AJP-supported Sahan Journal, said this uncertainty and the lack of policies to reference were major roadblocks in developing the organization’s AI usage policy, which was published in 2024.

“We had trouble finding materials to reference, since very few organizations of our scale had published AI policies when we started,” Tu said.

To help with this, organizations like Trusting News and Poynter have made AI “starter kits” available to help news organizations sort out what an AI policy should contain. 

Managing audience perceptions

A recent report from Trusting News found that readers are often “skeptical, uncomfortable and worried” about AI and how it is being brought into journalism. So it’s no wonder that some news organizations worry that publicly disclosing AI usage could damage trust with their readers.

The report found that more than 60% of survey respondents said “news organizations should only use AI if they establish clear ethical guidelines and policies around its use,” while 30% said AI should “never be used under any circumstances.”

Recognizing that 30% is a meaningful segment, the Trusting News report suggests that journalists shouldn’t reject AI altogether, but should be sure to prioritize audience perception and understanding of how AI is used in newsrooms. 

The report recommended that local news organizations carefully discuss internally how AI will be used, be transparent with the public about those decisions, and take steps to help audiences feel more comfortable with AI’s role in journalism.

Limited time and capacity

Drafting an AI operating policy for a news organization requires careful thought, research and collaboration across editorial, legal and audience teams. But many news organizations — especially smaller, local outlets — operate with limited staff and tight budgets, leaving little bandwidth to focus on long-term AI internal policy development. 

As a result, conversations about AI policies stall simply because the capacity isn’t there. 

Like most news organizations, the AJP-supported El Paso Matters is juggling multiple priorities with limited staff.

“It is a time and bandwidth issue,” Robert Moore, president and CEO of El Paso Matters, said. “We’re down a reporter who is on parental leave, which adds to our challenges. It’s the typical challenge for a small organization.”

Bandwidth is a valid concern, as these policies can take months to develop and rely on cross-departmental effort during creation. Sahan Journal spent about two months researching, drafting and approving its AI usage policy.

While these are all valid challenges for newsrooms, developing an AI usage policy should still be considered essential. A clear policy can help build trust and credibility with readers through transparency. It can also help journalists better understand the ethical considerations surrounding their AI use.

How to get started on an AI usage policy for your news organization

Step one: Start with an AI committee.

If you want to develop an AI usage policy but aren’t sure where to begin, Poynter suggests forming an AI committee that includes a leadership representative and individuals from functions the policy will affect, such as growth, editorial and business teams.

At Sahan Journal, Tu collaborated with Chief Growth Officer Michael Tortorello to draft the policy and sought input from other departments whose workflows it affected.

Step two: Divide your policy decisions into three different use cases.

This committee can start by organizing AI usage policy decisions into three categories:

  • Editorial – audience-facing uses: This could look like using AI as a content-editing coach or to generate summaries for social media. 
  • Editorial – back-end reporting assistance: This might entail using AI to help transcribe, parse data or develop meeting summaries.
  • Business – strategy and fundraising: This might be using AI as a strategic coach, or as a business manager to help plan meetings or generate campaign messaging.

Breaking AI usage into these three use cases can make it less intimidating to start internal policy discussions, Poynter suggests. Each use case raises its own questions and ethical concerns, and a different team member may be best suited to answer them. The Poynter report also says sorting use cases this way helps the AI committee divide the workload and tackle one area at a time instead of trying to solve everything at once.

Sahan Journal, for example, tied many of the principles in its AI policy back to its strategic vision document, which had input from various stakeholders.

Step three: Identify your core elements grounded in your existing mission and standards.

No matter the size or focus of your news organization, experts suggest an AI policy should include core elements that align with organizational mission, protect editorial and professional standards, and maintain audience trust. 

Tu also said a few important components of the AI policy at Sahan Journal include:

  • Disclosure of AI when it is used in editorial content production,
  • Banned and permissible uses of AI for editorial staff,
  • A statement that Sahan Journal will not use AI to replace its staff, and
  • Disclosure of its partnership with OpenAI through AJP, along with a statement that Sahan Journal maintains editorial and organizational independence from donor support.

“We believe that these were important themes to include to build trust with our readers and stakeholders,” Tu said. “The value statement in our policy also helps our staff understand that the organization is committed to using AI ethically and responsibly.”

In addition to clear value statements, Poynter recommends discussing the following:

  • Purpose and values: Why the policy exists and how it ties to the organizational mission
  • Transparency: How and when audiences will be told AI was used
  • Editorial integrity: Rules for fact-checking, review and limits on AI-generated content
  • Data privacy: Protecting sources, reporting and donor data from third-party tools
  • Approved use cases: Clear do’s and don’ts across editorial, fundraising and operations teams
  • Accountability: Who oversees AI use and how often the policy is updated
  • Equity and bias: A commitment to checking outputs for bias and avoiding harmful use

Tu said having these policies in place has helped Sahan Journal’s journalists understand how to use AI and helps audiences understand how it is used responsibly.

“The AI policy comes in handy when readers reach out with questions or skepticism regarding why we use AI,” Tu said. “We were able to show that our use of AI is guided by our organization’s values and mission.”

Step four: Involve your audience and practice transparency.

As with any news organization endeavor, building your AI usage policy also represents an opportunity to involve your audience. In 2023, The Salt Lake Tribune, an AJP-supported news organization, opened this conversation with an op-ed titled “Where The Tribune will — and won’t — use artificial intelligence,” which outlined for readers how AI would be used in the news organization. This emphasis on careful internal policymaking and openness around AI use reflects the field’s broader trend toward transparency.

Tu said Sahan Journal has made transparency and ethical caution central to its AI policy as a way to proactively address audience perceptions.

“In our AI policy, we included an intro section that spells out why Sahan [Journal] is incorporating AI into our workflow and Sahan’s values,” Tu said. “The section also states that Sahan [Journal] recognizes the limitations and ethical concerns of AI and is committed to experimenting with AI with great caution.”

Step five: Study other news organizations’ policies.

While the field is still developing standards and best practices around AI use, there are some newsrooms that have developed and published policies, and it can help to look at what they have done. 

To see examples of published AI usage policies, explore the collection curated by Trusting News, which includes policies from outlets such as the Seattle Times and USA Today.

You can also look to the external AI policies published by AJP member newsrooms.

While these examples can help serve as a guide to creating an AI policy, remember that each newsroom’s policy should be unique, and that uniqueness should derive from meaningful ethical discussions within your organization.

By developing clear AI usage policies, local news organizations can help audiences better understand how and why AI is used in journalism. These policies make newsroom practices more transparent to the public while helping journalists navigate the complex ethical questions surrounding AI. In creating a policy, newsrooms can build trust with their communities and reinforce their commitment to transparency, accountability and responsible innovation.

 


 

This piece was developed and authored by Maggie Cogar, a consultant working with the American Journalism Project’s Product & AI Studio and Startups Studio.