Americans increasingly distrust the media, with half of them saying national news outlets intend to mislead or deceive them into adopting a specific viewpoint, a Gallup and Knight Foundation study found in February.
A recently launched news site, Boring Report, believes it has found an antidote to public skepticism: enlisting artificial intelligence to rewrite news headlines from their original sources and summarize those stories. The service says it uses the technology to “aggregate, transform, and present news” in the most factual way possible, without any sensationalism or bias.
“The current media landscape and its advertising model encourage publications to use sensationalist language to drive traffic,” a representative at Boring Report told Fortune in an email. “This affects the reader as they have to parse through emotionally charging, alarming, and otherwise fluffy language before they get to the core facts about an event.”
“Reached #6 on the Magazines & Newspaper section of the App Store today! Thank you, everyone, for the support! We will continue to work hard to get you updates and new features.” — Boring Report (@boringreport), May 8, 2023
As an example, Boring Report juxtaposes on its website a fictional, hyperbolic headline, “Alien Invasion Imminent: Earth Doomed to Destruction,” with the version it would write: “Experts Discuss Possibility of Extraterrestrial Life and Potential Impact on Earth.”
Boring Report told Fortune that it doesn’t claim to remove biases, but rather its goal is simply to use A.I. to inform readers in a way that removes “sensationalist language.” The platform uses software from OpenAI, a Silicon Valley–based company, to generate summaries of news articles.
“In the future, we aim to tackle bias by combining articles from multiple publications into a single generated summary,” Boring Report said, adding that humans currently do not review articles before they are published; a human steps in only if a reader points out an egregious error.
The service publishes a list of headlines and includes links to original sources. For instance, one of the headlines on Tuesday was “Truck Crashes Into Security Barriers Near White House,” which links back to the source article on NBC titled “Driver arrested and Nazi flag seized after truck crashes into security barriers near the White House.”
Tools like OpenAI’s A.I. chatbot ChatGPT are increasingly being used in various industries to do jobs that were once performed exclusively by human workers. Some media companies, under intense financial strain, are looking to tap A.I. to handle some of the workload and help them become more efficient.
“In some ways, the work we were doing towards optimizing for SEO and trending content was robotic,” S. Mitra Kalita, a former executive at CNN and cofounder of two other media startups, told Axios in February about how newsrooms use technology to identify widely discussed subjects online and then focus stories on those topics. “Arguably, we were using what was trending on Twitter and Google to create the news agenda. What happened was a sameness across the internet.”
Newsrooms have also already begun experimenting with A.I. For instance, BuzzFeed said in February it would use A.I. to create quizzes and other content for its users in a more targeted fashion.
“To be clear, we see the breakthroughs in A.I. opening up a new era of creativity that will allow humans to harness creativity in new ways with endless opportunities and applications for good,” BuzzFeed CEO Jonah Peretti wrote in January before the launch of the outlet’s A.I. tool. While the company uses A.I. to help improve its quizzes, the tech doesn’t write news stories. BuzzFeed eliminated its news division last month.
Some media companies’ experiments with A.I. haven’t gone well. For instance, some articles that tech news site CNET published using A.I., with disclosures that readers had to dig to find, included inaccuracies.
Amid the quest to change how news is written and packaged is a fear that A.I. will be misused or used to create spam sites. Earlier this month, a report by NewsGuard, a news rating group, found that A.I.-generated news sites had become widespread and were linked to spreading false information. The websites, which produced large volumes of content (sometimes hundreds of stories daily), rarely revealed who owned or controlled them.
Boring Report, launched in March, is owned and backed by two New York–based engineers—Vasishta Kalinadhabhotla and Akshith Ramadugu. The free service is also supported by donations and was recently ranked among the top five downloaded apps under the Magazines & Newspapers section of Apple’s App Store. Representatives at Boring Report declined to share specifics regarding user numbers, but told Fortune that they planned to launch a paid version in the future.
But what’s fueling the new crop of A.I. media platforms is clear to NewsGuard CEO Steven Brill: Readers lack mainstream news outlets they trust. And yet the rise of A.I. news has made it especially challenging to find genuine sources of information.
“News consumers trust news sources less and less in part because of how hard it has become to tell a generally reliable source from a generally unreliable source,” Brill told the New York Times. “This new wave of A.I.-created sites will only make it harder for consumers to know who is feeding them the news, further reducing trust.”