
Positive Technologies: cybercriminals could soon use AI in over half of all cyberattack techniques

Positive Technologies has released an in-depth report examining the potential use of artificial intelligence in cyberattacks.

According to the report, AI could eventually be used by attackers across all tactics outlined in the MITRE ATT&CK matrix[1] and in 59% of its techniques.

Researchers note that, until now, cybercriminals have used AI in only 5% of MITRE ATT&CK techniques, with its use proven feasible in another 17%. However, with the rapid proliferation of legitimate AI tools, these numbers are expected to surge. Experts highlight that within a year of ChatGPT-4’s release, the number of phishing attacks increased by 1,265%, and they predict AI will continue to enhance the capabilities of cybercriminals.

Analysts believe that, amid the rapid development of such technologies, developers of language models are not doing enough to protect LLMs[2] from being misused by hackers to generate malicious text, code, or instructions. This oversight could contribute to a surge in cybercrime. For example, hackers are already using AI to write scripts and verify code when developing malicious software. Moreover, LLMs enable novice cybercriminals, who lack advanced skills or resources, to prepare attacks faster and execute them more easily, which in turn drives the rise in AI-assisted incidents. For instance, a cybercriminal can use AI to double-check their attack plan for overlooked details or to explore alternative ways of executing specific steps.

Experts point to other factors driving the increased use of AI in cyberattacks. Among them is the weak cybersecurity infrastructure in developing countries, where even imperfect tools can be used effectively with the support of AI. Additionally, the ongoing arms race between attackers and defenders is pushing cybercriminals to use AI.

Roman Reznikov, Information Security Research Analyst at Positive Technologies, comments: “The advanced capabilities of AI in cyberattacks are no reason to panic. Instead, we must remain realistic, study emerging technologies, and focus on building result-driven cybersecurity strategies. The most logical way to counter AI-driven attacks is by leveraging even more efficient AI-powered defence tools, which can address the shortage of specialists by automating many processes. In response to the growing activity of cybercriminals, we developed the MaxPatrol O2 autopilot, designed to automatically detect and block attacker actions within the infrastructure before they can inflict irreparable damage on an organisation”.

Experts note that cybercriminals are already using AI to automatically generate malicious code snippets, phishing messages, and deepfakes, as well as to automate various stages of cyberattacks, including botnet administration. However, only experienced hackers currently have the skills to develop new AI-driven tools for automating and scaling cyberattacks. Analysts predict that specialised modules will emerge in the near future to address specific tasks in well-known attack scenarios. Over time, these AI-driven tools and modules will likely merge into clusters, automating individual attack stages and eventually covering most of them. If cybercriminals succeed in fully automating attacks on a specific target, the next logical step could be enabling AI to autonomously search for new targets.

To ensure personal and corporate cybersecurity, Positive Technologies recommends following general security rules, prioritising vulnerability management, and participating in bug bounty programs. Experts warn that the use of machine learning to automate vulnerability exploitation will enable cybercriminals to target organisations more quickly and frequently. Promptly addressing any detected flaws is crucial, particularly when publicly available exploits exist. To stay ahead of cybercriminals, vendors are increasingly integrating machine learning technologies into their products. For instance, MaxPatrol SIEM uses its Behavioral Anomaly Detection (BAD) component to assign risk scores to cybersecurity events and detect targeted cyberattacks, including those exploiting zero-day vulnerabilities. Similarly, PT Application Firewall uses AI for precise detection of shell upload attacks. MaxPatrol VM leverages AI for intelligent asset information searches and the creation of popular queries. PT NAD employs AI to generate custom profiling rules and detect applications within encrypted traffic. Finally, PT Sandbox uses AI for the advanced detection of unknown and anomalous malware.

[1] The MITRE ATT&CK matrix is a knowledge base developed and maintained by the MITRE Corporation based on analysis of real APT attacks. The matrix outlines the tactics and techniques that cybercriminals use to target corporate infrastructure.

[2] Large Language Models (LLMs) are advanced machine learning systems designed to process natural language using artificial neural networks.

Image Credit: Positive Technologies

