Inside the Mind of the Hacker: Report Shows Speed and Efficiency of Hackers in Adopting New Technologies


The application of artificial intelligence is still in its infancy, but we are already seeing one major effect: the democratization of hacking.

The annual Bugcrowd report, Inside the Mind of a Hacker 2023, examines the attitudes held and methods used by the Bugcrowd pool of bug hunters. This year, the report focuses on the effect and use of artificial intelligence (AI) by hackers.

It also provides valuable insight into how malicious hackers will employ AI. For now, this is centered on the use of large language model (LLM) chatbots such as ChatGPT. Numerous ‘specialist’ GPTs are appearing, but for the most part they are wrappers around the GPT-4 engine. ChatGPT remains the primary tool of hackers.

Seventy-two percent of Bugcrowd’s hackers do not believe AI will ever replicate their human creativity. Despite this, 64% already use AI in their hacking workflow, and a further 30% plan to do so in the future. “I agree completely with the majority that [AI] will not replace the security researchers/hacker,” says Timothy Morris, chief security advisor at Tanium. “Hacking requires skill (AI has that) but also creativity that comes from understanding context (AI does not have that). While AI may get better over the years, I don’t see it as a replacement.”

Nevertheless, it is the combination of human creativity with AI workflow support that is changing the face of hacking – and while that is good in the hands of ethical hackers, it is concerning in the hands of malicious hackers.

According to the report, which analyzed roughly 1,000 survey responses from hackers on the Bugcrowd Platform, hackers are already using and exploring the potential of AI in many different areas. The top use cases are currently automating tasks (50%), analyzing data (48%), identifying vulnerabilities (36%), validating findings (35%), conducting reconnaissance (33%), categorizing threats (22%), detecting anomalies (22%), prioritizing risks (22%), and training models (17%). 
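As an illustration of the “automating tasks” and “categorizing threats” use cases above, the following is a minimal sketch of how a hunter might hand a raw finding to an LLM for triage. It is not taken from the report: it assumes the official OpenAI Python client with an OPENAI_API_KEY in the environment, and the finding text, category list, and prompt wording are hypothetical.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical raw finding pasted from a hunter's notes.
finding = (
    "The /api/v2/users endpoint returns other users' email addresses "
    "when the numeric id in the URL is changed."
)

# Ask the model to triage the finding into a coarse category and priority.
response = client.chat.completions.create(
    model="gpt-4",
    temperature=0.2,
    messages=[
        {
            "role": "system",
            "content": (
                "You triage web security findings. Reply with a category "
                "(e.g. IDOR, XSS, SQLi, info-leak), a suggested priority "
                "from P1 to P4, and a one-sentence justification."
            ),
        },
        {"role": "user", "content": finding},
    ],
)

print(response.choices[0].message.content)

The model does none of the discovery here; it only speeds up the routine classification and write-up work that follows, which is consistent with how respondents describe using it.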

To achieve these ends, hackers have been treating AI as just another tool in their toolset. The first requirement is to understand the tool, and the second is to learn how to use it. With ChatGPT, this falls into two categories – understanding how its designated purpose can be used beneficially, and learning how to bend it to the hacker’s will beyond that purpose.


The former is primarily report generation and language translation, done with speed and accuracy. (Malicious hackers use the same capabilities to produce compelling phishing campaigns.)
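A hedged sketch of that benign workflow, again assuming the OpenAI Python client: terse hunter notes are expanded into a report paragraph and then translated for a program run in another language. The note text and target language are made up for illustration.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical shorthand notes from a hunting session.
notes = "IDOR on /api/v2/users, changing id exposes other users' emails, needs auth check"

# Draft a clear report paragraph from the shorthand.
draft = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Rewrite bug bounty notes as a clear, professional report paragraph."},
        {"role": "user", "content": notes},
    ],
).choices[0].message.content

# Translate the finished paragraph, preserving technical terms.
translated = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Translate the following text into Spanish, preserving technical terms."},
        {"role": "user", "content": draft},
    ],
).choices[0].message.content

print(draft)
print(translated)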

The latter is the use of prompt engineering to bypass the ChatGPT filters designed to prevent malicious or illegal use of the tool. Prompt engineering is similar in concept to social engineering – where social engineering is the ability to persuade users to do something they shouldn’t, prompt engineering is the ability to persuade the AI to do something it shouldn’t.

“Elite hackers view AI, not as a threat, but as a tool that augments their abilities, providing them with a competitive edge,” comments Craig Jones, VP of security operations at Ontinue. “Their perspective demonstrates a symbiotic relationship between hackers and AI, where AI complements and enhances the creative problem-solving skills that define hackers’ expertise.”

Casey Ellis, the founder and CTO of Bugcrowd, believes we should view the arrival of AI in the context of the history of hacking. Hackers are making use of what is available – just as they did with the arrival of Metasploit 20 years ago. It was designed for good but also used for bad. “Metasploit was a boon for hunters,” he said, “but they still had to understand how to use it and apply their own creativity to its application.”

There have been other tooling developments adopted by hackers since then – but AI has one fundamental difference. “Technologies that improve the efficiency of a hacker aren’t necessarily that easy for the layperson to understand,” he continued; “but with ChatGPT, everyone is talking about it and using it.” AI is equally available to ‘not yet hackers’ and non-technologists.

This could become more important given another finding in the report – hackers joining Bugcrowd are getting younger. The number of hackers aged 18 or younger has doubled over the last year, and 62% are aged 24 or younger.

Ellis noted that youngsters today approach computing and security differently from previous generations: they have grown up using technology, while their predecessors had to learn it. “When I learned about hacking, I was very much concerned with the plumbing of technology and the internet,” said Ellis. “But this under-18 cohort have had their experience of the internet abstracted to the extent they’re not necessarily aware that it’s even there. They don’t necessarily understand the technology, but they understand the interface and business logic and how to exploit the design of systems and applications in a way that I probably never will.”

AI will be able to explore business logic more easily than it can explore coding flaws in compiled applications. The overall effect is that AI will provide speed, scale, and efficiency to everyone, not just the traditionally technology-adept hacker. It democratizes hacking for current and potential hackers alike.

For now, most hackers are constrained by the limitations of ChatGPT, but we should be aware that this may not always be the case. ChatGPT is trained on the content of the internet and learns further from the data it receives through prompts. Imagine the hacking potential of a GPT trained by an adversarial nation state on the source code of target applications, and learning not from the general public but from its use by elite nation-state hackers.

Bugcrowd’s Inside the Mind of a Hacker demonstrates the speed and efficiency with which hackers adopt new technologies to assist their hunting and improve their reporting. For now, that translates into scale – creatively found vulnerabilities weaponized and exploited at far greater speed. It is likely to get worse.

Related: Biden Discusses Risks and Promises of Artificial Intelligence With Tech Leaders in San Francisco

Related: ChatGPT’s Chief Testifies Before Congress, Calls for New Agency to Regulate Artificial Intelligence

Related: Malicious Prompt Engineering With ChatGPT

Related: Cyber Insights 2023 | Artificial Intelligence

https://www.securityweek.com/inside-the-mind-of-the-hacker-report-shows-speed-and-efficiency-of-hackers-in-adopting-new-technologies/