Steven Kenny, Architect & Engineering Program Manager EMEA at Axis Communications, writes an exclusive opinion piece for Tahawultech.com on how AI can help enterprises identify and mitigate vulnerabilities without additional human intervention.
The Middle East continues to be a hotspot for cybercrime. According to research by cybersecurity firm Group-IB, ransomware was the most serious threat to organisations, with companies in Saudi Arabia and the UAE being the most targeted among Gulf Cooperation Council (GCC) countries between mid-2021 and mid-2022.[1] The region has also suffered several high-profile attacks during the past few years, with victims ranging from state news agencies to telecommunications providers, all experiencing major data breaches and disruptions to company operations.[2]
In the face of this, enterprises in the region must leverage cutting-edge solutions that reinforce their security resilience. Artificial intelligence (AI), already influencing every sphere of business activity, can help secure enterprises' growing attack surface, and identify and mitigate vulnerabilities without the need for additional human intervention. As with any business change, part of deploying AI-driven solutions is having a robust strategy in place, one that considers the long-term feasibility and requirements of those solutions.
Threats of escalating severity
For many threat actors, cybercrime is a business like any other. As a result, they are inclined to adopt the latest trends and use the latest technologies to carry out their attacks. The various features of AI and machine learning (ML) that enterprises are starting to explore are the same features criminals are misusing.[3]
There are several examples of this. For instance, generative AI tools such as ChatGPT and Google's recently launched Bard can supply criminals with convincing copy for phishing emails.[4] These models have also lowered the cost and difficulty of carrying out such attacks, with threat actors using them to generate well-written communications aimed specifically at Arabic-speaking countries.[5] AI automation tools can orchestrate interactions with a large pool of potential victims, while algorithms trained on personal data can build victim profiles and prioritised target lists, reducing the resources an attack requires while increasing its accuracy.
However, the misuse of AI goes beyond straightforward phishing attempts using ChatGPT. AI-powered malware can leverage advanced techniques to evade detection by security software and use metamorphic mechanisms to change its behaviour based on the environment it is in.[6] Consider DeepLocker, an AI-powered malware developed by IBM Research as an experiment. It conceals its intent until it reaches a specific victim, potentially infecting millions of systems without being detected.[7] Enterprises need to stay one step ahead of malicious innovation like this, and they can do so by properly integrating AI-powered systems and countermeasures into their security strategies.
First responders
Deploying AI-enabled security systems requires an overhaul of an organisation's internal security practices. In other words, given the technological, legal, and ethical implications of those systems, companies need to provide adequate training and education for their security teams, as well as conduct due diligence on their respective IT suppliers and partners.
From there, the key factor is data. AI programmes can identify patterns, detect anomalies, and analyse vast amounts of data throughout an organisation's network and infrastructure. This applies regardless of the infrastructure's scope or circumstance: AI can, for example, detect vulnerabilities in hybrid or remote environments where systems are decentralised.[8] These programmes serve as the "first responders" in countering any malicious activity, and they help organisations assume a more proactive, forward-looking risk posture.
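To make the idea concrete: at its simplest, anomaly detection of this kind means flagging data points that deviate sharply from a baseline. The sketch below is a deliberately minimal illustration (not a production technique), flagging time windows whose request volume has an unusually high z-score; the traffic figures are invented for illustration.

```python
import statistics

def flag_anomalies(request_counts, threshold=2.5):
    """Return the indices of time windows whose request volume deviates
    from the mean by more than `threshold` standard deviations."""
    mean = statistics.mean(request_counts)
    stdev = statistics.stdev(request_counts)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [
        i for i, count in enumerate(request_counts)
        if abs(count - mean) / stdev > threshold
    ]

# Normal traffic of roughly 100 requests per minute, with one sudden spike.
traffic = [98, 102, 101, 99, 100, 97, 103, 100, 950, 101]
print(flag_anomalies(traffic))  # → [8], the index of the spike
```

Real AI-driven systems learn far richer baselines across many signals at once, but the principle is the same: model "normal", then surface what falls outside it before a human would notice.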
AI is also a force for reducing organisations' security workloads. For example, AI-powered automated patching can track and patch important software in real time and minimise potential exposure to threat actors.[9] That said, businesses should not become over-reliant on these systems, nor leave the systems themselves susceptible to data breaches. To avoid this, organisations must implement solid policies and guidelines around data access, monitoring, and analytics.
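The core of any automated patching workflow is comparing what is installed against what the latest advisories say is safe. The following is a minimal, hypothetical sketch of that comparison step; the package names, version numbers, and the in-memory dictionaries standing in for an asset inventory and a vendor vulnerability feed are all invented for illustration.

```python
# Hypothetical data: a real system would pull these from an asset
# database and a vendor vulnerability feed, not hard-code them.
installed = {"openssl": (3, 0, 8), "nginx": (1, 24, 0), "sudo": (1, 9, 12)}
advisories = {"openssl": (3, 0, 13), "sudo": (1, 9, 15)}  # minimum fixed versions

def outdated_packages(installed, advisories):
    """Return, sorted by name, the packages running below the minimum
    patched version named in an advisory."""
    return sorted(
        name for name, fixed in advisories.items()
        if name in installed and installed[name] < fixed
    )

print(outdated_packages(installed, advisories))  # → ['openssl', 'sudo']
```

Version tuples compare element by element, which is what makes the `<` check work. What AI adds on top of a check like this is prioritisation: deciding, from exploit activity and asset criticality, which of the flagged packages to patch first.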
The Middle East needs to embrace the future
Overall, AI has the potential to deliver as much as $150 billion in value to GCC countries, equivalent to 9% or more of their combined GDP.[10] At the same time, organisations in those countries need to overhaul their security setups. Implementations of the technology may come with unwanted consequences but, by knowing how to best utilise it, companies can protect their systems and help usher in the next evolutionary stage of cyber resiliency.
[1] Saudi, UAE organizations prime targets of cybercrime in GCC (arabnews.com)
[2] Positive Technologies reveals 10 worst cyberattacks in the Middle East in the last 18 months (zawya.com)
[3] Exploiting AI: How Cybercriminals Misuse and Abuse AI and ML – Security News (trendmicro.com)
[4] Four Ways Criminals Could Use AI to Target More Victims (gizmodo.com)
[5] The Middle East cyber front line aiming to beat back AI-powered threats (arabnews.com)
[6] The rise of AI-powered criminals: Identifying threats and opportunities (talosintelligence.com)
[7] DeepLocker: How AI Can Power a Stealthy New Breed of Malware (securityintelligence.com)
[8] Evaluate the risks and benefits of AI in cybersecurity | TechTarget
[9] AI and cybersecurity: opportunities and risks – July 2023 – SA Instrumentation & Control
[10] State of AI in the Middle East’s GCC countries | McKinsey