After NewsBreak, the most downloaded local news app in the US, shared an AI-generated story about a fake New Jersey shooting last Christmas Eve, New Jersey police had to post a statement online reassuring worried residents that the story was “entirely false,” Reuters reported.
“Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described,” the cops’ Facebook post said. “It seems this ‘news’ outlet’s AI writes fiction they have no problem publishing to readers.”
It took NewsBreak—which attracts over 50 million monthly users—four days to remove the fake shooting story, and it apparently wasn’t an isolated incident. According to Reuters, NewsBreak’s AI tool, which scrapes the web and helps rewrite local news stories, has been used to publish at least 40 misleading or erroneous stories since 2021.
Seven former NewsBreak employees, speaking anonymously due to confidentiality agreements, told Reuters that these misleading AI news stories have caused real harm in communities.
Sometimes, the AI gets the little details wrong. One Colorado food bank, Food to Power, had to turn people away after the app posted inaccurate food distribution times.
Other times, the AI wholly fabricates events. A Pennsylvania charity, Harvest912, told Reuters that it had to turn homeless people away when NewsBreak falsely advertised a 24-hour foot-care clinic.
“You are doing HARM by publishing this misinformation—homeless people will walk to these venues to attend a clinic that is not happening,” Harvest912 pleaded in an email requesting that NewsBreak take down the story.
NewsBreak told Reuters that all the erroneous articles affecting those two charities were removed but also blamed the charities for supposedly posting inaccurate information on their websites.
“When NewsBreak identifies any inaccurate content or any violation of our community standards, we take prompt action to remove that content,” the company told Reuters.
NewsBreak outed source of fake shooting story
Dodging accountability is not necessarily a good look, but it has seemingly become a preferred tactic for defenders of AI tools. As one prominent example, OpenAI has repeatedly insisted in defamation suits that users, not the company, are responsible for publishing harmful ChatGPT outputs. According to Reuters, NewsBreak declined to explain why the app “added a disclaimer to its homepage in early March, warning that its content ‘may not always be error-free.'”
Reuters found that not only were NewsBreak’s articles “not always” error-free, but the app sometimes published local news stories “under fictitious bylines.” An Ars review suggests the app is also scraping news stories, perhaps themselves AI-written, that likewise appear to carry fictitious bylines.
NewsBreak told Reuters that “the inaccurate information” in the fake shooting story “originated from” a “content source,” as opposed to being hallucinated by AI.
The content source NewsBreak identified is an article on a news site called FindPlace.xyz, credited to a journalist named Amelia Washington, who has contributed most of the site’s recent content. There is no public profile for Amelia Washington outside of the news site, and a reverse image search of her bio photo suggests it is a stock image: the same photo appeared in a testimonial for a nutritional supplement on Amazon and in posts on foreign freelance sites where the name and background do not match her FindPlace.xyz bio.
FindPlace.xyz did not respond to Ars’ request to connect with Washington or provide comment.