Wider Impact of AI on the Threat Landscape

Alexander Harrison

Penetration Tester

Generative AI has transformed the digital world. Quality content can be produced in seconds, and tedious customer service tasks can now be fully automated. Meanwhile, governments are turning to AI to ease the strain that repetitive jobs place on the public sector, such as identifying potholes in roads.

This has improved efficiency across industries, delivering significant cost savings and supporting workforces. However, with AI offering such support to legitimate organisations, it is not surprising that criminals have also set their sights on the technology for their malicious endeavours. We are seeing criminals turn to AI to generate highly sophisticated phishing and social engineering scams. This allows malicious actors to produce malicious emails at scale, with content that is flawless in wording, tone and imagery and free of the usual telltale signs that a message comes from an untrusted source.

The uptick in AI-generated phishing and social engineering has also raised questions about how the technology could be leveraged in other areas of cybercrime, such as generating malware, identifying vulnerabilities in software, and scanning infrastructure to discover exploitable Zero-Day vulnerabilities.


Big Sleep – Zero Day in SQLite

In November, Google announced that its AI agent had discovered a previously unknown Zero-Day vulnerability in the SQLite Database Engine. This marked the first publicly disclosed instance of AI independently discovering an exploitable Zero Day security flaw. Google’s Project Zero, renowned for its highly skilled team of security researchers, and DeepMind, Google’s AI research division, collaborated to develop Big Sleep, a Large Language Model-powered vulnerability detection agent.

This AI system successfully identified an exploitable stack buffer underflow in SQLite, a widely used open-source database engine. The vulnerability was reported to the SQLite developers in October and patched the same day, before any users were affected. However, the discovery undoubtedly raised worrying questions.

If Google security researchers could find a Zero-Day using AI, did this mean threat actors would soon be turning to the technology to do the same? Fortunately, for now at least, the answer is probably no: while Google's research was undoubtedly a world first, it is unlikely to spur criminals to adopt AI for their vulnerability hunting. The biggest barrier for attackers is the GPU power required to find Zero-Day vulnerabilities in code using AI.

Identifying patterns across millions of lines of code is computationally expensive, requiring substantial resources to analyse large codebases, simulate attacks and optimise exploit strategies. Criminals would therefore need dedicated infrastructure to hunt for Zero-Days at scale, infrastructure that global organisations like Google possess and can afford but that is unlikely to be available to even the most sophisticated criminal gangs.

Criminals could use consumer hardware to scan for vulnerabilities, but it would be a slow process. They would likely achieve faster and better results, with greater financial returns, by using tools that are already on the market, such as Nmap, Masscan, or ZMap. While concerns about AI being used for malicious Zero Day exploitation persist, the more immediate threat still lies in AI-generated phishing and social engineering attacks.
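To make the contrast concrete, the baseline approach on consumer hardware amounts to probing hosts one connection at a time. The sketch below is a deliberately naive, hypothetical illustration in Python (it is not how Nmap, Masscan or ZMap work internally, and `scan_ports` is an invented helper name): sequential TCP connect attempts like these are exactly the slow process described above, which purpose-built scanners avoid through raw packets and massive parallelism.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    A naive sequential connect scan: each closed or filtered port costs up
    to `timeout` seconds, so scanning many hosts this way is very slow.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few common service ports on the local machine only.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

At one connection per probe, a full 65,535-port sweep of a single host could take hours at this timeout, whereas tools such as Masscan send stateless raw packets and can cover the same range in seconds; that performance gap is why off-the-shelf tooling remains the more attractive option.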

AI enables criminals to craft deceptive emails and phishing messages at scale, making it harder for users to distinguish genuine communications from fraudulent ones. This lets criminals work faster and more successfully, damaging more organisations while seeing improved financial returns. Although organisations and developers should remain vigilant about AI-driven vulnerability discovery, the current priority should be strengthening defences against AI-enhanced phishing and social engineering attacks, as these present a far greater challenge in today's digital landscape.

AI-enabled Zero-Day hunting could present a significant threat in the future. However, for now, criminals are more likely to continue focusing on their tried-and-tested methods, which still yield the desired results without overburdening their resources.
