The Good, the Bad and the Ugly of Generative AI
As humans, we’re naturally wired to be negative. It’s a widely studied concept referred to as negativity bias, and it’s not entirely a bad thing. Dr. Richard Boyatzis, Professor of Organizational Behavior, Psychology and Cognitive Science, is quoted as saying, “You need the negative focus to survive, but a positive one to thrive.” This helps to explain the overwhelming number of gloom-and-doom articles about Generative AI, which most people encounter through tools such as ChatGPT, Google Bard and Microsoft Bing Chat. But this observation also points to the opportunity we have to identify ways Generative AI can help us thrive.
A recent example from Vulcan’s Q2 2023 Vulnerability Watch report (PDF) helps provide some perspective on the good, the bad and the ugly of Generative AI.
- The good about ChatGPT is the ability to supplement human workflows and drive efficiencies. For example, it can be used to support software development, including providing recommendations for code optimization, bug fixing and code generation.
- The bad comes when ChatGPT uses old data and information it was trained on to make recommendations that turn out to be ineffective.
- Things start to get ugly when threat actors take advantage of bad data and gaps. In the absence of relevant data and training, ChatGPT can start to freelance and generate convincing but not necessarily accurate information, such as recommending software packages that don’t actually exist, a failure mode Vulcan refers to as “AI package hallucination.”
However, it’s important to look at this with some historical context. Back in the early days of the internet, bad guys figured out that people fat-finger URLs, so they would spoof a legitimate website under a misspelled domain and use it to infect visitors’ systems with malware or steal their credentials. Weaknesses in email usage and file sharing provided similar opportunities. Bad guys have always looked for gaps to exploit. It’s part of the natural evolution of technology that has led to innovation in cybersecurity solutions including anti-phishing tools, multi-factor authentication (MFA) and secure file transfer solutions. In the case of AI package hallucination, threat actors are looking for gaps in responses that they can fill with malicious code, registering the hallucinated package names so that developers who follow the AI’s recommendation install an attacker-controlled package instead. This will undoubtedly spur more innovation.
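One defensive control against this kind of gap is to vet AI-suggested dependency names before installing them. The sketch below is illustrative only: the allow-list contents, function name, and the suspicious package name are all assumptions, not part of Vulcan’s research or any real registry policy.

```python
# Minimal sketch: vet dependency names suggested by a code assistant against
# an internal allow-list of packages your team has already reviewed.
# Anything not on the list should be checked manually (registry publish date,
# download history, maintainer reputation) before it is ever installed.

APPROVED_PACKAGES = {"requests", "numpy", "pandas", "cryptography"}

def vet_suggested_packages(suggested, approved=APPROVED_PACKAGES):
    """Split suggested dependency names into (approved, needs-review) lists."""
    names = {name.strip().lower() for name in suggested}
    return sorted(names & approved), sorted(names - approved)

# "graphql-helper-pro" is a made-up example of a plausible-sounding name a
# model might hallucinate; it flows to the needs-review list.
ok, review = vet_suggested_packages(["requests", "graphql-helper-pro"])
```

The design choice here is deliberate: an allow-list fails closed, so a hallucinated name that an attacker has quietly registered never reaches `pip install` without a human looking at it first.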
Generative AI and security operations
Generative AI also holds great promise to transform security operations. We just have to look for ways to apply it for good and understand how to mitigate the bad and the ugly. Here are some best practices to consider.
Good: AI has a significant role in driving efficiency across the security operations lifecycle. Specifically, natural language processing is being used to identify and extract threat data, such as indicators of compromise, malware and adversaries, from unstructured text in data feed sources and intelligence reports so that analysts spend less time on manual tasks and more time proactively addressing risks.
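To make the extraction idea concrete, here is a deliberately simplified sketch of pulling indicators of compromise out of unstructured report text. Production pipelines use trained NLP models and far richer validation; the regex patterns, function name, and sample text below are assumptions for illustration only.

```python
import re

# Simplified IOC extraction: find IPv4 addresses and file hashes in free text.
# Real systems also validate matches (e.g. octet ranges, defanged indicators)
# and extract domains, URLs, malware family names and adversary references.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_iocs(text):
    """Map each IOC type to the sorted unique matches found in the text."""
    found = {}
    for ioc_type, pattern in IOC_PATTERNS.items():
        matches = sorted(set(pattern.findall(text)))
        if matches:
            found[ioc_type] = matches
    return found

# Hypothetical snippet of an intelligence report:
report = "Beacon traffic to 203.0.113.7; dropper SHA-256 " + "a" * 64
iocs = extract_iocs(report)
```

Even this crude version shows the payoff the paragraph describes: structured indicators come out of prose automatically, so an analyst reviews results instead of copying hashes by hand.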
Machine learning (ML) techniques are being applied to make sense of all this data in order to get the right data to the right systems and teams at the right time to accelerate detection, investigation and response. And a closed-loop model with feedback ensures AI-capable security operations platforms can continue to learn and improve over time.
With Generative AI pushing even further, learning from existing malware samples to generate new, synthetic ones is just one example of creating outputs that can aid detection and strengthen resilience.
Bad and Ugly: Security operations can take a turn for the worse when we start to think we can hand the reins over to AI models completely. Humans need to remain in the loop because analysts bring years of learning and experience that ML and Generative AI must build over time with our help if they are to act as our proxy. More than that, analysts bring trusted intuition, a gut feeling that is out of scope for AI for the foreseeable future.
Equally important, risk management is a discipline that combines IT and business expertise. Humans bring institutional knowledge that needs to be married with an understanding of technical risk to ensure actions and outcomes are aligned with the priorities of the business.
Additionally, Generative AI is a horizontal technology that can be used in a wide variety of ways. Looking at its application too broadly may create additional challenges. Instead, we need to focus on specific use cases. A more measured approach with use cases that are built over time helps unleash the good while reducing any gaps that threat actors can exploit. Generative AI holds great promise, but it’s still early days. Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive one to thrive.”
https://www.securityweek.com/the-good-the-bad-and-the-ugly-of-generative-ai/