How Generative AI Is Fueling the Rise of Fake News and Online Fraud

Do you remember the viral image of Pope Francis walking the streets of the Vatican in a shiny white puffer jacket? It took the public a while to spot the small inconsistencies and finally verify that it was AI-generated. Some people were deeply shocked by how realistic the photo looked. This is just one relatively innocent illustration of how AI can contribute to the spread of misinformation that slowly but steadily creeps into our reality. Needless to say, the consequences can damage individuals, businesses and, potentially, even stock markets.

Fraudsters increasingly implement AI

What makes AI so great and, at the same time, terrifying is the fact that the technology is accessible to almost anyone these days. From simple text or image-generating bots to highly sophisticated machine learning algorithms, people now have the power to create large volumes of realistic content at their fingertips. It’s a true goldmine for illegal activities of all sorts.

With the help of natural language generation tools, fraudsters can put out vast quantities of text containing false information quickly and efficiently. These AI-generated articles with false or inaccurate data manage to find their way into major media outlets relatively easily. In fact, it's possible to create entire websites populated by fake news that drive massive organic traffic and, thus, generate substantial ad revenue.

NewsGuard has already identified 659 Unreliable AI-Generated News and Information Websites (known as UAINS) that publish content in 15 different languages. The false information on these sites can describe fabricated events or misrepresent real ones, and the range of topics is wide: current affairs, politics, tech, entertainment and more.

Related: How AI and Machine Learning Are Improving Fraud Detection in Fintech

Voice phishing, or vishing, is another relatively new type of fraud made easier by AI-powered voice cloning technology. Scammers can copy the voice of pretty much anyone whose speech has been recorded, allowing them to impersonate trusted individuals such as government officials, celebrities, or even friends and family members. In 2021, more than 59 million individuals in the United States were impacted by vishing attacks, and the numbers keep climbing.

On top of that, the internet is flooded with convincing fake images and videos, known as deepfakes (remember the Puffer Pope?), which can be used to manipulate public opinion or spread misinformation at lightning speed. The trend is alarming even at the government level: the impact of AI deepfakes on the upcoming US presidential election is being actively discussed in the media. From AI-fueled attack ads to manipulated video footage of political candidates, the potential for AI deepfakes to influence public opinion and undermine the integrity of the democratic process is a growing concern for policymakers and voters alike.

Related: Deepfakes Are Lurking in 2024. Here’s How to Navigate the Ever-growing AI Threat Landscape

Let’s not forget how AI makes it easier to steal an individual’s identity. Last year’s report from Sumsub shows that AI-powered identity fraud is on the rise, topping the list of popular fraud types. The research found a whopping 10x increase in the number of deepfakes between 2022 and 2023, a trend present across various industries, with the majority of cases coming from North America.

The reality is that AI-enabled fraud and fake news are not threats hanging exclusively over public figures with big influence; they can target private individuals and small businesses as well. Scammers can use AI-generated emails to impersonate official contacts and deceive people into revealing personal information or transferring money. Similarly, small businesses may fall victim to AI-generated fake reviews or negative publicity, which damages their reputation and affects their bottom line. The possible scenarios are endless.

Measures to combat AI-fueled fake news and scams are still insufficient

Solving the problem of AI-generated fakes has been a headache for platforms, media, businesses and governments for years now. Social media platforms have employed algorithms and content moderation techniques to identify and remove fraudulent content. Fact-checking organizations work 24/7 to debunk misinformation. Regulatory bodies enact policies to hold perpetrators accountable.

Another major strategy is raising public awareness of the sheer volume of AI-generated content. Recently, both Google and Meta updated their AI deepfake policies: the platforms now require displayed ads, including political ads, to disclose whether they were created using AI.

And yet, nothing seems to be able to stop the wave so far. It’s becoming increasingly clear that combating AI-fueled fake news and fraud requires a multi-pronged approach. Enhanced collaboration between technology companies, government agencies, and civil society is essential to this process. Fostering media literacy and critical thinking skills among the public can also help individuals identify and resist manipulation tactics employed by fake news and scams. And, of course, we need to invest in research and development to stay ahead of evolving AI technologies used by fraudsters.

Related: A ‘Fake Drake’ Song Using Generative AI Was Just Pulled From Streaming Services

On top of that, developing more advanced AI algorithms capable of detecting and flagging fraudulent content in real time is crucial. It may seem a bit ironic that we employ AI to fight AI, but stranger things have happened.
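
To make the "AI to fight AI" idea a little more concrete, here is a minimal sketch of how a platform might score incoming text with an off-the-shelf detector model. The specific model, its output labels and the 0.9 threshold are assumptions chosen for illustration, not tools mentioned in this article.

```python
# A minimal sketch of "using AI to fight AI": scoring a piece of text with an
# off-the-shelf machine-generated-text detector. The model choice, label
# values and threshold are illustrative assumptions only.
from transformers import pipeline

# Any text-classification model trained to spot machine-generated or
# scam-like text could be swapped in here.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

suspicious_text = "BREAKING: celebrity endorses miracle investment scheme..."
result = detector(suspicious_text)[0]  # e.g. {'label': 'Fake', 'score': 0.97}

# Route high-confidence hits to human moderators rather than removing them
# automatically; detectors like this are far from perfect.
if result["score"] > 0.9:
    print(f"Flagged for review: {result['label']} ({result['score']:.2f})")
```

In practice, classifiers like this produce both false positives and false negatives, which is why flagged items are typically routed to human reviewers rather than removed automatically.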

Bottom line: as a society embracing artificial intelligence, we have a long way to go before we can effectively navigate the ethical, social and technological challenges posed by the proliferation of AI-generated fake news and fraud. We're sure to see the wider adoption of more stringent regulations and policies surrounding the use of AI in generating and disseminating information. For now, the best regular users can do is stay vigilant and double-check any information they encounter online, especially if it seems sensational or dubious, to avoid falling prey to AI-generated fake news and fraud.

https://www.entrepreneur.com/growing-a-business/how-generative-ai-is-fueling-fake-news-and-online-fraud/469006