A 2023 survey about news organizations and AI conducted by the World Association of News Publishers (WAN-IFRA) found that half of the newsrooms surveyed are using Generative AI products, though only 20 percent have guidelines in place for how to use them. As the recent article "AI Generates Debate Over Newsroom Ethics" reports, "some media experts believe the journalism industry should adopt uniform standards on the new technology." While Generative AI products can assist with tasks such as editing copy and analyzing large data sets, serious concerns remain around plagiarism, copyright infringement, and the broader issue of trust in a rapidly changing media landscape that may include AI-generated content, both written and audiovisual.
Articles
- Writing guidelines for the role of AI in your newsroom? Here are some, er, guidelines for that (2023). This article from NiemanLab outlines criteria for creating AI guidelines for news organizations and includes a regularly updated list of newsroom AI guidelines, both in the US and globally.
- Spotting the deepfakes in this year of elections: how AI detection tools work and where they fail (2024). Authored by two staff members at Witness, an organization that “helps people use video and technology to protect and defend human rights,” this article aims to “evaluate and understand the outcomes provided by publicly accessible [AI] detectors,” making it particularly relevant to individuals and organizations involved in verification work.
- All the news that's fit to fabricate: AI-generated text as a tool of media misinformation (2022). Kreps, S., McCain, R. M., & Brundage, M., Journal of Experimental Political Science, 9(1), 104–117. This study of the credibility of AI-generated texts has three key findings: “individuals are largely incapable of distinguishing between AI- and human-generated text; partisanship affects the perceived credibility of the story; and exposure to the text does little to change individuals’ policy views.”
- Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions (2023). CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Article No. 436, pp. 1–20. This article compares AI-generated misinformation with human-created misinformation about the COVID-19 pandemic and evaluates two common countermeasures: detection models and assessment guidelines.
Books