Research observes threat activity targeting 2024 Presidential Election
Fortinet’s FortiGuard Labs has released research showing the current threats associated with the 2024 United States Presidential Election. The research presents an in-depth analysis of threats that may impact the electoral process, including:
- Phishing scams
- Ransomware activity
- Malicious domain registration
Security leaders discuss threats to the election
Casey Ellis, Founder and Chief Strategy Officer at Bugcrowd:
“As expected, the run-up to the 2024 Presidential Election is providing a predictably unstable information environment. This in turn creates a wide variety of options and opportunities for cyber-borne threat actors of all types and all motivations, and this report serves as a useful reminder that this will continue to escalate until (and beyond) election day itself.
“Of particular note is the volume of records available on the dark web in 2024. While it may be difficult to use these records to commit the kind of fraud or attacks that would directly modify the outcome of an election, it’s certainly a cheap and simple exercise to highlight the possibility of their use as a way to instill distrust in the democratic process, and to potentially affect and manipulate voter turnout.”
Nick France, Chief Technology Officer at Sectigo:
“Primary security concerns around the 2024 Presidential Election include AI-driven misinformation dissemination, hacking of voter databases and tampering with voting machines. Preparation requires robust cybersecurity protocols, staff training and deploying AI-driven detection systems. AI-powered deepfakes and automated trolling pose significant risks, fueling misinformation, damaging reputations and undermining trust. Combating this threat necessitates developing AI-driven detection tools and promoting media literacy among the populace.
“Threat actors exploit AI for sophisticated cyberattacks on election infrastructure, often through AI-generated malware and automated phishing. Defensive measures require AI-powered threat detection, network monitoring and regular security audits. Specific election security risks could include impersonating leaders, fabricating content, swaying public opinion and eroding trust in democratic processes. Detecting and combating such disinformation demands AI-driven content analysis, collaboration with social media platforms and public awareness campaigns.
“Election officials and political campaigns need to invest in AI-driven threat intelligence, conduct regular security assessments and enforce strict access controls. Fostering collaboration among government agencies and cybersecurity experts is essential for a coordinated response to emerging threats.”
Alex Quilici, CEO at YouMail:
“One of the most pressing challenges we face today is defending against election interference and scams that target voters through the telephone network — whether it is calls, texts or voicemails. These attacks are becoming increasingly sophisticated, especially as bad actors exploit vulnerabilities in communication systems to mislead and manipulate the public.
“The rise of AI and deepfake technology has taken these threats to a new level. We are not just dealing with generic robocalls anymore. AI can now create highly convincing voice attacks that make it sound like a trusted figure, such as a candidate, urging you not to vote or spreading false information. This kind of deception can seriously undermine public trust and disrupt the electoral process.
“Policymakers must respond by establishing mechanisms that allow for real-time monitoring, detection and swift action against these attacks. If, for example, a fake robocall goes out at 4 p.m. on Election Day, telling voters to stay home, authorities need to be able to shut it down immediately. Waiting days or even weeks to resolve the issue isn’t an option — by then, the damage is already done. Quick action is essential to safeguard the integrity of our elections and protect voters from manipulation.”
Narayana Pappu, CEO at Zendata:
“As we move into the home stretch of the 2024 Presidential Election cycle, there is an ongoing worry about fake content generated and distributed via various social media platforms. AI-generated content, while not yet defining the election as many feared, has brought to light important questions about balancing individual privacy with the need to protect against disinformation and online crime. Political campaigns now use AI tools like Natural Language Processing (NLP) and predictive modeling to analyze social media posts and comments for sentiment and topics of interest to anticipate voter behavior. With this, they can build detailed voter profiles by combining data from sources such as voter registration records, consumer databases and social media activity. Vulnerable groups, including minorities, the elderly and digital natives, are at greater risk of being disproportionately targeted by these efforts.
“To stay safe, individuals should remain skeptical of political content, especially on social media, and be cautious of apps or surveys that request excessive personal information. They should also always cross-reference information with authoritative sources. Organizations can help mitigate risks by implementing robust content verification processes to combat disinformation. For example, adopting a Know Your Content approach, inspired by financial KYC regulations, could involve fingerprinting content and setting engagement thresholds that trigger additional verification when posts surpass a certain level of interaction. This, combined with cross-platform collaboration, is necessary to help stop the spread of misinformation and deepfakes, especially in this election season.”
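The Know Your Content mechanism Pappu describes can be illustrated with a minimal sketch. The code below assumes a simple SHA-256 content fingerprint and a hypothetical `ENGAGEMENT_THRESHOLD`; the `request_verification` hook and the threshold value are illustrative placeholders, not part of any real platform API.

```python
import hashlib
from collections import defaultdict

# Hypothetical threshold: posts exceeding this many interactions
# are flagged for additional verification.
ENGAGEMENT_THRESHOLD = 10_000


def fingerprint(content: bytes) -> str:
    """Derive a stable fingerprint for a piece of content (SHA-256 here)."""
    return hashlib.sha256(content).hexdigest()


class KnowYourContentTracker:
    """Tracks engagement per content fingerprint and flags posts that
    surpass a threshold, loosely mirroring the KYC-style idea above."""

    def __init__(self, threshold: int = ENGAGEMENT_THRESHOLD):
        self.threshold = threshold
        self.engagement = defaultdict(int)  # fingerprint -> interaction count
        self.flagged = set()                # fingerprints already under review

    def record_interaction(self, content: bytes, count: int = 1) -> bool:
        """Record interactions; return True if the content was just flagged."""
        fp = fingerprint(content)
        self.engagement[fp] += count
        if self.engagement[fp] >= self.threshold and fp not in self.flagged:
            self.flagged.add(fp)
            self.request_verification(fp)
            return True
        return False

    def request_verification(self, fp: str) -> None:
        # Placeholder: in practice this might queue the content for
        # provenance checks, human review, or cross-platform lookups.
        print(f"content {fp[:12]}... exceeded threshold; verification requested")


# Usage example with a low threshold for demonstration
tracker = KnowYourContentTracker(threshold=3)
post = b"Breaking: polls closed early in District 9"
for _ in range(3):
    tracker.record_interaction(post)
```

Note that an exact cryptographic hash only groups verbatim reposts; a real deployment would more likely use perceptual or fuzzy hashing so that lightly edited copies of the same deepfake resolve to a single fingerprint.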