AI makes ransomware attacks easy for budding cybercriminals, warns UK NCSC



Summary

The UK’s National Cyber Security Centre (NCSC) warns that AI is likely to increase the global threat of ransomware over the next two years.

In a new report, the NCSC says AI is already being used for malicious cyber activity and is likely to increase the volume and impact of cyberattacks, including ransomware.

The UK agency predicts that AI will lower the barrier to entry, giving even relatively inexperienced attackers access to malicious AI capabilities, for example in the form of ‘GenAI-as-a-Service’ offerings. This would add to the global ransomware threat.

AI is easy mode for cybercrime

According to the report, AI will primarily improve attackers’ social engineering capabilities, enabling convincing phishing campaigns without the translation, spelling, and grammar errors that often give them away.


AI’s ability to quickly aggregate data will also help attackers identify, examine, and extract valuable information from compromised systems, increasing the value and impact of cyberattacks.

AI could also aid malware and exploit development, vulnerability research, and lateral movement by making existing techniques more efficient.

In the short term, these areas will remain dependent on human expertise, meaning that any gains will be limited to attackers who already have the necessary skills. By 2025, however, the agency expects to see more sophisticated, novel AI-driven cyber threats.

Extent of AI-induced skill growth over the next two years. | Image: NCSC

Ransomware remains the biggest cyber threat to UK organizations and businesses, according to the NCSC. Cybercriminals are constantly adapting their business models to maximize their profits.

“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” says NCSC CEO Lindy Cameron.

Recommendation

The Bletchley Declaration, adopted at the UK-hosted AI Safety Summit, announces a global effort to manage the risks of AI and ensure its safe and responsible development.

Generative social engineering

In addition to the NCSC, other organizations are warning about more convincing phishing emails created with generative AI.

A study by IBM has shown that AI-generated phishing emails can be almost as effective as those created by humans, with the major advantage that the creation process is much faster. The use of generative AI has reportedly already led to a significant increase in phishing attacks.

Although human-written phishing emails retain a slight edge because they can be more closely personalized, AI-generated attacks are nearly on par. Another study shows that many people are susceptible to AI-generated phishing scams.

However, AI-based defense systems can help effectively detect and defend against cyberattacks. OpenAI, for example, supports a program to develop such systems and quantify the performance of AI models in cybersecurity.
