Pilots can no longer fly jetliners without computer assistance. There is too much data, and too many parameters must be checked constantly to ensure safety. The same is now true of cybersecurity. Expecting human beings to check every event in a system or on a network is foolhardy. The most dangerous threats are often the ones with the lowest profiles, the ones that people miss or gloss over.
Artificial intelligence allows a different approach: one that combines speed with systematic checking, not just for outliers but also for trends, connections, and probable threats. Machine learning programmes can sift through piles of data in minutes, where a human being might need days and still make potentially dangerous mistakes.
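To make that concrete, here is a minimal sketch of machine-driven sifting for outliers, using scikit-learn's IsolationForest over synthetic connection records. Every feature name, value, and threshold here is an illustrative assumption, not a description of any real deployment.

```python
# A minimal sketch of machine-assisted sifting through event data, using
# scikit-learn's IsolationForest as the outlier detector. All feature
# names, values, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: bytes sent, bytes received,
# session duration in seconds. The bulk of the traffic is routine...
routine = rng.normal(loc=[5_000, 20_000, 30],
                     scale=[1_000, 4_000, 10], size=(10_000, 3))
# ...plus a few sessions that look nothing like the baseline, e.g.
# long sessions sending far more data than they receive, a profile
# consistent with slow data exfiltration.
suspicious = rng.normal(loc=[200_000, 500, 3_600],
                        scale=[10_000, 100, 300], size=(5, 3))
events = np.vstack([routine, suspicious])

# Fit on the full stream; the forest isolates points that are easy to
# separate from the rest, without being given labelled attack examples.
detector = IsolationForest(contamination=0.001, random_state=0)
labels = detector.fit_predict(events)  # -1 = outlier, 1 = inlier

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(events)} events for review")
```

The point is not this particular algorithm but the scale: a few lines score ten thousand events in moments, a volume no analyst could triage by hand.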
Does that mean AI can do it all? We have self-stocking fridges and self-driving cars. Why not self-protecting IT systems and networks? While this idea may sound attractive, it is by no means realistic. Huge strides in information technology hardware and software mean that AI is now a practical and affordable possibility for many organisations, either in-house or as a service. But relying totally on AI for cyber protection could be very risky, even fatal.
There are three reasons for this. First, even when AI is functioning well, it has limitations: imagination, innovation, and strategising are all beyond the abilities of today’s AI. Second, AI needs periodic retraining to stay relevant and effective as the cyberthreat landscape changes. Unsupervised machine learning programmes can detect new patterns in data without intervention, but assigning meaning and relevance to those patterns is a step still best accomplished by human beings, as the sketch below illustrates. Third, smart cybercriminals can “game” or even poison AI being used for cybersecurity, for example by seeding its training data with examples crafted to make malicious activity look normal.
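The second point deserves a concrete illustration. The sketch below, built on invented login-event features, uses DBSCAN to surface dense groups of events that nobody predefined. Note where the algorithm stops: it can report that a new pattern exists, but deciding what that pattern means is left to a human analyst.

```python
# A minimal sketch of the split described above: an unsupervised
# algorithm (DBSCAN here) finds groups of events nobody told it about,
# and a human analyst supplies the meaning. Features and numbers are
# invented purely for illustration.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Hypothetical login-event features: hour of day, failed attempts
# before success, distance from the user's usual location (km).
baseline = rng.normal(loc=[10, 1, 50], scale=[3, 1, 30], size=(2_000, 3))
# A new, previously unseen pattern: night-time logins with many
# failed attempts from very far away.
novel = rng.normal(loc=[3, 15, 8_000], scale=[0.5, 1, 100], size=(60, 3))
events = np.vstack([baseline, novel])

# DBSCAN groups dense regions of the data with no notion of "attack";
# it can only report that a pattern exists, not what it means.
scaled = StandardScaler().fit_transform(events)
clusters = DBSCAN(eps=0.5, min_samples=10).fit_predict(scaled)

# Assigning meaning stays with a person: the analyst decides whether a
# discovered cluster is a new branch office coming online or a
# credential-stuffing campaign.
for cluster_id in sorted(set(clusters) - {-1}):
    size = int(np.sum(clusters == cluster_id))
    print(f"Pattern {cluster_id}: {size} events -> queue for analyst labelling")
```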
Just as it takes a pilot to tell a jetliner what to do and where to go, it takes a person, albeit a skilled person, to bridge the gap between AI’s results and globally effective cybersecurity. This should not come as a surprise. Human intelligence (humint) is a vital part of penetration testing, for example, as well as software testing. It takes the quirkiness of humans to explore avenues that AI may not know about or understand. And just as it takes a thief to catch a thief, it still takes a human security expert to get inside the mindset of an attacker.
So, AI and humans must work together. AI applies power and programming, while humint contributes judgment, imagination, and creativity. Meanwhile, AI continues to evolve. By building up vast banks of data on threats and best practices, AI develops its own faculties of judgment. It can describe, diagnose, and propose solutions for given threat situations. It starts to acquire human-like powers of insight. And possibly human-like problems as well. Researchers have already noted how inconsistencies can seem to arise spontaneously in very large software systems.
There may even be a natural limit to the level to which artificial intelligence can rise in cybersecurity. Above this level, it may become too much like a human being, losing its advantage of reliability and turning into an entity that makes the same mistakes we do – but faster. Only time will tell. For the moment, however, both AI and humint are essential, working together but each bringing its own strengths.
This integration of AI and human intelligence in cybersecurity can best be seen in the AI.saac platform and MDR Services from my organisation. You can read about how it works here.