Last updated December 15, 2024

The IJF is a non-profit, nonpartisan newsroom focused on serving the public interest. We believe in radical transparency. These values, and all of our editorial principles, will guide us in deciding whether and how to use artificial intelligence (AI) in our work.

We consider AI a tool with the potential to serve our mission. At the same time, the utmost vigilance and human oversight are needed to ensure its responsible use. While we are skeptical about the current accuracy and reliability of AI, we cannot ignore its expanding presence in our lives. It is transforming both how people look for information and the records we seek out to do our journalism.

This note is meant both to guide our staff and to inform our readers about how we view our responsibilities concerning AI.

We were inspired by a number of other newsrooms' AI policies in drafting our own. We consulted a study of AI policies at 52 newsrooms around the world and a survey of 880 participants conducted by the AI in Journalism Futures project. We also examined the policies of the New York Times, the Guardian, the CBC, the Globe and Mail, the News Media Alliance, the Poynter Institute and the Radio Television Digital News Association.

Here are the steps we will take when using AI at the Investigative Journalism Foundation:

  1. We will not use generative AI to directly write news stories or generate photos for publication.
  2. We may consider using AI tools to aid in the research phase of news production by helping us understand or synthesize information, but any such use is subject to the principles outlined below.
  3. Human review and oversight of all material produced by a generative AI system or fed into such a system is our paramount principle. No such material should be published or relied upon without human vetting.
  4. Output from any AI tool must be checked for accuracy, proper sourcing, plagiarism, incorrect statements and all forms of bias. We will follow the Poynter Institute’s process for identifying bias in AI systems. This includes ensuring that any training data comes from varied sources, providing initial training to anyone working with AI tools and regularly reviewing systems for bias.
  5. Particular attention will be paid to guard against incorporating or repeating racial and gender bias that is present in many of the training sets various AI systems rely on.
  6. Because generative AI tools are not always transparent about their sources of information, we will be mindful of respecting copyright and crediting the original work of others.
  7. Employees should check with their supervisors before submitting sensitive, confidential or unverified data to a generative AI platform for analysis. Employees must never enter any identifying information about confidential sources into a generative AI platform.
  8. While AI tools may be useful in crunching numbers and finding patterns in data, we will never rely on AI-generated results as the sole source of research in our stories. We will always search for multiple sources and points of corroboration.
  9. When we use AI tools to facilitate our journalism, we will be transparent and inform our audience of this fact. This should include any measures we have taken to mitigate or eliminate the risks of using these tools.
  10. We may also use AI tools on the non-editorial side of our operations, subject to human oversight and the principles outlined above. For example, our Appointments database relies on ChatGPT to extract values such as the name of an appointee and their new title from paragraphs of text published by the government; a simplified sketch of this kind of extraction step appears after this list. We are also experimenting to see whether LLMs can standardize entity names (e.g. treating Rogers Communications and Rogers Communications Inc. as the same entity). Currently, we do this manually for our Procurement database, but if LLMs can speed up the first pass before human review, that could save time.
  11. Because new AI tools are being developed every day, staff should check with their managers before relying on any new tool for news research purposes.
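
To make point 10 concrete, here is a minimal sketch of the kind of extraction step described there. It is illustrative only, not our production code: the model name, prompt wording and field names are assumptions, and, per the principles above, every result is flagged for human review before it is used.

```python
# Illustrative sketch only (not our production pipeline): extract an
# appointee's name and new title from a government announcement paragraph.
# Assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY
# environment variable; the model name and prompt wording are assumptions.
import json

from openai import OpenAI

client = OpenAI()

def extract_appointment(paragraph: str) -> dict:
    """Ask the model for structured fields; nothing is used without vetting."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable model would do
        response_format={"type": "json_object"},  # request JSON output
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract the appointee's full name and their new title "
                    "from the user's text. Respond with JSON of the form "
                    '{"name": "...", "title": "..."}; use null for any '
                    "field the text does not state."
                ),
            },
            {"role": "user", "content": paragraph},
        ],
    )
    record = json.loads(response.choices[0].message.content)
    record["needs_human_review"] = True  # our paramount principle (point 3)
    return record

if __name__ == "__main__":
    sample = (
        "The Governor in Council has appointed Jane Doe as Chairperson "
        "of the Example Board for a term of three years."
    )
    print(extract_appointment(sample))
```

The same pattern would apply to the entity-standardization experiment: a model proposes that two name strings refer to the same organization, and a human confirms or rejects the match before anything changes in our databases.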