Cybersecurity experts discuss the YouTube CEO deepfake


The likeness of YouTube CEO Neal Mohan has been leveraged in a recent phishing campaign that deploys AI-generated deepfake videos of the CEO to target content creators. These deepfakes are sent as private videos to targets in an effort to install malware, steal credentials or carry out scams.

Targets are sent an email, appearing to originate from an official YouTube email address, prompting them to view a private video that has been shared with them. The video features a deepfake of Mohan that accurately impersonates his voice, appearance and even mannerisms. Targets are prompted to click a link and input their credentials to confirm updated YouTube Partner Program (YPP) terms so they can continue to access all features and monetize their content. This allows the malicious actors to steal users' credentials.
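Because these phishing emails merely appear to come from an official YouTube address, one basic defensive check is to inspect the email authentication results (SPF, DKIM, DMARC) recorded by the receiving mail server. The sketch below is purely illustrative, assuming a hypothetical spoofed message and a simplified Authentication-Results header; real headers vary by provider.

```python
from email import message_from_string

# Hypothetical raw message for illustration. A real check would parse
# the Authentication-Results header added by the receiving mail server.
raw = """\
From: no-reply@youtube.com
Authentication-Results: mx.example.com; spf=fail; dkim=fail; dmarc=fail
Subject: Private video shared with you

Please review the updated YPP terms.
"""

msg = message_from_string(raw)
auth = msg.get("Authentication-Results", "")

# Treat the message as suspicious if any authentication check failed.
suspicious = any(f"{check}=fail" in auth for check in ("spf", "dkim", "dmarc"))
print(suspicious)  # True for this spoofed example
```

This is only one signal; a passing SPF/DKIM check does not guarantee legitimacy, since attackers can authenticate mail from lookalike domains they control.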

Below, cybersecurity experts share their insights on this scam. 

Security leaders weigh in

Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace:

The ability of attackers to use generative AI to produce deepfake audio, imagery and video is a growing concern, as attackers are increasingly using deepfakes to launch sophisticated social engineering attacks. While the use of AI for deepfake generation is now very real, the risk of image and media manipulation is not new. The challenge now is that AI lowers the skill barrier to entry and speeds up production at higher quality. Since deepfakes are becoming harder to detect, it is imperative to turn to AI-augmented tools for detection, as humans alone cannot be the last line of defense.

As threat actors adopt new techniques, traditional approaches to cybersecurity fall short. To combat emerging challenges from AI-driven attacks, organizations must leverage AI-powered tools that provide granular, real-time environment visibility and alerting to augment security teams. Where appropriate, organizations should get ahead of new threats by integrating machine-driven response, in either autonomous or human-in-the-loop modes, to accelerate security team response. Through this approach, the adoption of AI technologies — such as solutions with anomaly-based detection capabilities that can detect and respond to never-before-seen threats — can be instrumental in keeping organizations secure.

J Stephen Kowski, Field CTO SlashNext Email Security+:

Generative AI and LLMs are enabling attackers to create more convincing phishing emails, deepfakes and automated attack scripts at scale. These technologies allow cybercriminals to personalize social engineering attempts and rapidly adapt their tactics, making traditional defenses less effective. What used to be zero-day attacks are now zero-hour attacks. Human defenders alone won't be able to keep up.

To counter AI-generated attacks, organizations should deploy security solutions that leverage generative AI and use machine learning to detect anomalies in email content, sender behavior and communication patterns. Implementing advanced anti-phishing technology that can identify and block sophisticated impersonation attempts in real time is crucial for defending against these evolving threats. Organizations should also enable multi-factor authentication (MFA) and conduct regular security awareness training for employees. Leveraging real-time phishing protection that analyzes URLs and attachments can significantly reduce exposure to deepfakes and other email-based threats.
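One simple piece of the URL analysis described above is flagging lookalike domains that imitate a trusted brand. The following is a minimal sketch, not a production filter; the `TRUSTED` allowlist, the `is_suspicious` helper and the example URLs are all hypothetical, and real systems combine many more signals (reputation feeds, ML scoring, attachment analysis).

```python
from urllib.parse import urlparse

# Illustrative allowlist of legitimate hosts (assumption for this sketch).
TRUSTED = {"youtube.com", "www.youtube.com"}

def is_suspicious(url: str) -> bool:
    """Flag URLs whose host imitates the brand but is not a trusted domain."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED:
        return False
    # Lookalike heuristic: the host mentions the brand without being it.
    return "youtube" in host

print(is_suspicious("https://youtube-partner-terms.com/login"))  # True
print(is_suspicious("https://www.youtube.com/watch?v=abc"))      # False
```

A heuristic like this catches only the crudest impersonation attempts; it is shown here solely to make the "analyzes URLs" recommendation concrete.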

James Scobey, Chief Information Security Officer at Keeper Security: 

Traditional identity threats to human users continue to evolve. Phishing attacks are becoming increasingly targeted, using highly personalized tactics driven by social engineering and AI-enhanced data scraping. Cybercriminals are relying not only on stolen credentials but also on social manipulation to breach identity protections. Deepfakes are a particular concern in this area, as AI models make these attack methods faster, cheaper and more convincing. As attackers grow more sophisticated, stronger, more dynamic identity verification methods — such as MFA and biometrics — will be critical to defend against these increasingly nuanced threats.

Generative AI will play a dual role in the identity threat landscape this year. On one side, it will empower attackers to create more sophisticated deepfakes — whether through text, voice or visual manipulation — that can convincingly mimic real individuals. These AI-driven impersonations are poised to undermine traditional security measures, such as voice biometrics or facial recognition, which have long been staples in identity verification. Employees will increasingly receive video and voice calls from what appear to be senior leaders in their organization, urging them to grant access to protected resources quickly. As these deepfakes become harder to distinguish from reality, they will be used to bypass even the most advanced security systems.

Gabrielle Hempel, Security Operations Strategist at Exabeam: 

Many of the early deepfake attacks we have seen involved audio impersonation only, or manipulation of footage that already existed. This is a worrying development because it involves a fabricated video that is quite convincing, and it shows the lengths to which attackers will go to make phishing more effective.

Looking for inconsistencies in quality still seems to be the most effective way to spot deepfakes, although this is becoming harder as the technology gets better as well. Unnatural facial movements, words not matching the mouth, and background glitches are usually tell-tale signs. 

The barrier to accessing tools that enable sophisticated attacks like these is becoming very low. It is both easy and affordable, which puts these attacks within reach of virtually anyone. Detection is struggling to keep up. There is no great solution that works without human eyes on the footage, and even that is becoming less reliable.

https://www.securitymagazine.com/articles/101446-cybersecurity-experts-discuss-the-youtube-ceo-deepfake
