Does GPT-4 risk accelerating cybercrime?

Artificial intelligence continues to evolve and advance at a rapid rate. Its integration into business models has deepened, with large language models (LLMs) like GPT-4 (arguably the one dominating most recent discourse) continuing to illustrate both its possibilities and its risks.

While the technological breakthrough of GPT-4 holds immense potential for streamlining tasks and augmenting human teams, it also raises serious security concerns and risks accelerating cybercrime. A recent report suggested that 72% of CISOs believe AI solutions such as those powered by the GPT-4 LLM could lead to an increased number of severe security breaches.

This short guide looks at the potential threats posed by GPT-4 and its imminent successor, GPT-5, which looks set to arrive by the end of summer 2024. Fundamentally, however, enterprise security leaders can take solace in the fact that these risks can be largely mitigated and contained with careful oversight, allowing them to harness the tangible benefits of this powerful, game-changing innovation.

AI poses challenges and opportunities

Like any transformative technology, AI is a double-edged sword. The rise of OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude and Meta’s Llama (to name just a few) has paved the way for extraordinary business innovation and productivity enhancement. From streamlining arduous, manual tasks to generating content in a fraction of the time, AI chatbots have become an asset to businesses everywhere. 

On the other hand, recent insights suggest that the continued use of chatbots poses new security risks with each passing day: these tools are now being leveraged to generate malicious code and programs that can compromise enterprise security data and assets en masse. Considering that ChatGPT currently has approximately 180.5 million global users and is now generating huge revenue for OpenAI and Microsoft, one has to wonder whether the security risks are falling on mostly deaf ears.

GPT-4: A powerful tool in the wrong hands

With an innate ability to understand requests and generate convincing text, images, code and other assets, GPT-4 can be leveraged for good and ill alike. Cybercriminals can use tools like ChatGPT, for example, to quickly create malicious tooling and deploy attacks.

GPT-4 can be used to generate convincing phishing emails, texts or social media messages, making them more likely to deceive users into divulging sensitive information. It can also quickly generate code in virtually any programming language, allowing attackers to develop and modify sophisticated, covert malware, ransomware and other malicious programs that can lock systems down. It was recently reported that PowerShell, the notable open-source scripting tool (which now has GPT-4 integration), was used in over three quarters (76%) of ransomware incidents last year.

Misinformation and disinformation, common social engineering tactics, are also made easier with the help of GPT-4, as is bypassing traditional security controls and content filters. In a recent example, CyberArk used ChatGPT to create polymorphic malware, capable of mutating and moving laterally, to evade filters in security infrastructure. Generative AI was also recently found to be the root cause of an identity and document fraud scam, believed to be the latest in a series of malicious tactics.

This only scratches the surface of what malicious actors can do with GPT-4, and with its successor GPT-5 on the horizon promising more advanced features, security leaders and organizations will need to step up their vigilance and due diligence to spot AI-assisted cyberattacks more quickly and decisively.

Harnessing GPT-4 as a security asset

Mitigating the risks of AI-powered cybercrime begins with taking a proactive, methodical approach to existing security strategies and minimizing the attack surface. Within the last 12 months alone, 95% of organizations have altered their cybersecurity and threat containment strategies in response to the evolving threat landscape.

While it can be easy to view GPT-4 as a problem waiting to happen, it’s prudent to recognize that this technology presents exciting and promising opportunities to fortify defenses and strengthen security postures.

Leveraging GPT-4’s capabilities ethically and responsibly can allow enterprise security teams to improve and automate many of their threat detection and response measures. GPT-4 can analyze and aggregate vast amounts of data, identify patterns and anomalies, and alert security teams to potential threats. Automated threat detection will still produce false positives, but it significantly reduces alert fatigue and the risk of human error.
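
As a rough illustration of what such automation could look like, the minimal sketch below asks GPT-4 to triage a log line and return a machine-readable verdict. It assumes the official OpenAI Python SDK and an OPENAI_API_KEY environment variable; the prompt, verdict schema and sample log line are invented for illustration, not a prescribed pattern.

```python
# Hypothetical sketch: GPT-4 as a first-pass log triage assistant.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY;
# the prompt and verdict schema below are illustrative, not authoritative.
import json

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a security triage assistant. For the log line provided, reply "
    'with JSON only: {"suspicious": true/false, "reason": "..."}. When in '
    "doubt, flag the event for human review."
)

def triage(log_line: str) -> dict:
    """Ask the model whether a single log line looks anomalous."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep output stable enough to parse
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": log_line},
        ],
    )
    return json.loads(response.choices[0].message.content)

verdict = triage("Failed password for root from 203.0.113.7 (14th attempt in 60s)")
if verdict["suspicious"]:
    print("ALERT:", verdict["reason"])  # route to a human analyst, not auto-block
```

The design choice worth noting is that the model only raises a flag; blocking and remediation stay with a human analyst, which contains the impact of any false positives.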

The LLM’s innate ability to understand and generate code can be leveraged to help remediate vulnerabilities in endpoints, networks and software, further reducing the attack surface. Routine patch management processes are also considerably accelerated with the help of AI.
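
A hedged sketch of that remediation loop, using the same assumed SDK: the vulnerable snippet is a textbook SQL injection example invented here, and the model’s output should be treated as a suggestion pending human code review, never an auto-applied patch.

```python
# Hypothetical sketch: asking GPT-4 to propose a fix for a vulnerable snippet.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; the snippet and prompt
# are invented for illustration, and the suggestion still needs human review.
from openai import OpenAI

client = OpenAI()

VULNERABLE_SNIPPET = '''
def get_user(db, username):
    # SQL built by string concatenation: classic injection risk
    return db.execute("SELECT * FROM users WHERE name = '" + username + "'")
'''

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": "You are a secure-code reviewer. Rewrite the given "
                       "function to remove the vulnerability and briefly "
                       "explain the change.",
        },
        {"role": "user", "content": VULNERABLE_SNIPPET},
    ],
)
print(response.choices[0].message.content)  # proposed patch, pending review
```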

Data forensics is also augmented with GPT-4: security logs, network traffic and other data sources can be analyzed rapidly to aid incident response and investigations. Provided that human teams keep a watchful eye on the legitimacy and accuracy of the underlying data, an automated forensics function will save valuable hours and resources.
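
To make that concrete, one final minimal sketch (same assumed SDK; the log excerpt is fabricated) condenses raw events into an incident summary that an analyst can then verify against the source data.

```python
# Hypothetical sketch: condensing a raw log excerpt into an incident summary.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; the log data is fabricated
# and any summary must be checked against the original logs by a human.
from openai import OpenAI

client = OpenAI()

RAW_LOGS = "\n".join([
    "2024-03-01T02:14:09Z fw01 DENY tcp 198.51.100.23 -> 10.0.0.5:3389",
    "2024-03-01T02:14:11Z fw01 DENY tcp 198.51.100.23 -> 10.0.0.6:3389",
    "2024-03-01T02:15:02Z vpn  LOGIN ok user=svc_backup src=198.51.100.23",
    "2024-03-01T02:17:40Z dc01 EVENT 4672 privileges assigned user=svc_backup",
])

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": "Summarize these security logs as an incident "
                       "timeline: likely attacker activity, affected hosts, "
                       "and recommended next investigative steps.",
        },
        {"role": "user", "content": RAW_LOGS},
    ],
)
print(response.choices[0].message.content)
```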

It’s evident that GPT-4, and indeed other LLMs, represent a huge milestone in the evolution of generative AI technology. However, their potential to accelerate and worsen cybercrime should rightfully cause concern, and security leaders must act quickly to leverage this type of AI to counter its malicious and fraudulent use. An important takeaway is that the technology itself is not inherently malicious; rather, its malleability is what makes such activities possible.

https://www.securitymagazine.com/articles/100654-does-gpt-4-risk-accelerating-cybercrime