Dr. Ernesto Damiani explores how artificial intelligence (AI) can enhance network security, and describes ongoing work at Khalifa University to remove the obstacles standing in the way of an AI “police force” for cybersecurity.
The ongoing evolution of telecommunications infrastructures towards future networks and 5G (i.e., the 5th generation of networks) amounts to a complete “digital transformation”. A country’s telecommunications infrastructure, which most people still envision as composed of antennas, physical cables, cabinets and racks, has become a (largely software) platform where millions of virtual processors, storage units and switches are connected via transparent, ultra-broadband “fabrics”.
Unfortunately, this environment is an ideal target for stealthy attackers and cyber criminals. Traditionally, security experts “patrolled” computer networks by sitting in front of large screens displaying traffic, trying to observe and detect anomalies. Today, human perception simply cannot keep pace with the sheer complexity of traffic on the global communication infrastructure. Identifying and countering threats requires applying Artificial Intelligence (AI) to network security.
Network operators plan to deploy AI components capable of “seeing” and “interpreting” the state of millions of network entities via the real-time analysis of the data streams they produce. From a cyber security point of view, these AI components will play the role of a “network police” – they will set up traps for cyber criminals, and perform proactive policing actions, based on the early-warning signals of attacks. They will even create “virtual jails” for fast confinement (e.g., quarantine, segregation) of suspect traffic.
How does this work? It all starts from the high-dimensional, multi-layer data flows generated in real time by network virtual resources and by our smartphones. Data in these flows represent the values of many attributes describing virtual entities, such as a “connection”.
To explore the connection example further, consider how it applies to live traffic. The AI police can quickly classify connections according to their risk level and then “jail” the riskiest ones, or re-route them to where they cannot do harm. In the background, AI agents will also learn from their experience to better assess risk levels, transparently tuning the behaviour and performance of the whole network’s analysis and detection process.
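The classify-then-jail idea can be sketched in a few lines of Python. The feature names, thresholds and the “quarantine” route below are illustrative assumptions, not a real operator API; a deployed system would learn its scoring function from traffic data rather than hard-code heuristics.

```python
# Illustrative sketch: feature names, thresholds and routes are hypothetical.

def risk_score(conn):
    """Score a connection record (a dict of attributes) in [0, 1]."""
    score = 0.0
    if conn.get("failed_logins", 0) > 3:
        score += 0.4                      # repeated authentication failures
    if conn.get("bytes_out", 0) > 10_000_000:
        score += 0.3                      # unusually large outbound transfer
    if conn.get("port") in {23, 2323}:    # telnet ports, common in botnet traffic
        score += 0.3
    return min(score, 1.0)

def route(conn, threshold=0.5):
    """'Jail' risky connections by re-routing them to a quarantine segment."""
    return "quarantine" if risk_score(conn) >= threshold else "normal"

suspicious = {"failed_logins": 5, "bytes_out": 20_000_000, "port": 23}
benign = {"failed_logins": 0, "bytes_out": 1_000, "port": 443}
print(route(suspicious))  # quarantine
print(route(benign))      # normal
```

The threshold plays the role of the network’s risk appetite: lowering it quarantines more aggressively at the cost of more false alarms.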
Another example, more related to the physical world, concerns video surveillance. Once mobile cameras become widespread, the countless video flows they generate can hardly be transmitted to some remote location for analysis. Indeed, no wireless network can deliver the required low latency to so many cameras everywhere. Instead, network AI will have to step in, dynamically deciding where to host the execution of a computer vision algorithm depending on current network latency and congestion.
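A minimal sketch of such a placement decision follows. The node names, latency figures and compute budgets are invented for illustration; a real orchestrator would draw them from live telemetry.

```python
# Hypothetical sketch of latency-aware workload placement; the node data
# below is invented, not drawn from a real orchestrator.

def place_workload(nodes, required_latency_ms, required_gflops):
    """Pick the lowest-latency node that meets both latency and compute budgets."""
    candidates = [
        n for n in nodes
        if n["latency_ms"] <= required_latency_ms
        and n["free_gflops"] >= required_gflops
    ]
    if not candidates:
        return None  # no feasible host: e.g. degrade frame rate or resolution
    return min(candidates, key=lambda n: n["latency_ms"])["name"]

nodes = [
    {"name": "edge-cam-gw", "latency_ms": 4,  "free_gflops": 50},
    {"name": "metro-pop",   "latency_ms": 12, "free_gflops": 400},
    {"name": "core-dc",     "latency_ms": 45, "free_gflops": 5000},
]
print(place_workload(nodes, required_latency_ms=20, required_gflops=100))
# metro-pop: the edge gateway is closer but lacks spare compute,
# and the core data centre misses the 20 ms latency budget
```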
But why is this AI revolution happening only now? Simple: AI agents are getting far better training than ever before, and the hardware on which that training takes place is increasingly powerful. The potential of AI techniques like neural networks for classifying complex objects was recognised in the 1990s, but for a long time, training AI classifiers was more of an art than a science. Today, thanks to a new generation of training approaches, the training of classifiers can be performed quickly and modularly (e.g. partly on smartphones, partly inside the network).
In our work at Khalifa University’s new Research Center on Cyber-Physical Systems, we envision a trend toward a collaborative AI pipeline for cyber security (data acquisition, preparation, pre-processing, and analytics) realised as a collaboration of services, where each service can belong to a different operator. This scenario, however, requires us to address two major research challenges.
Firstly, data generated by smartphones and other network devices has a faceted structure: some features are collected as numerical data, video images or audio, while others are gathered by hardware sensors such as the accelerometers found in smartphones. This structure can and should be exploited in the AI learning strategy. Techniques called multi-view learning treat input data facets (called views) differently, e.g. using multiple classifiers that collaborate during training. These classifiers are then combined linearly or non-linearly to improve overall performance.
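A toy sketch of the multi-view idea, assuming invented feature values: one simple classifier is fitted per view (here, a one-dimensional nearest-centroid scorer), and their outputs are combined linearly. Real systems would train far richer models per view, but the fusion step has the same shape.

```python
# Minimal multi-view sketch on invented toy data: one classifier per view,
# combined linearly (a simple form of late fusion).

def centroid_classifier(train):
    """Fit a 1-D nearest-centroid scorer; returns a score near 1 for 'risky'."""
    risky = [x for x, y in train if y == 1]
    benign = [x for x, y in train if y == 0]
    c1, c0 = sum(risky) / len(risky), sum(benign) / len(benign)
    def score(x):
        d1, d0 = abs(x - c1), abs(x - c0)
        return d0 / (d0 + d1)  # closer to the risky centroid -> score near 1
    return score

# View 1: packet-rate features; view 2: payload-entropy features (toy values).
view1 = centroid_classifier([(900, 1), (950, 1), (100, 0), (120, 0)])
view2 = centroid_classifier([(7.8, 1), (7.5, 1), (3.0, 0), (3.5, 0)])

def fused_score(pkt_rate, entropy, w1=0.5, w2=0.5):
    """Linear combination of the per-view scores."""
    return w1 * view1(pkt_rate) + w2 * view2(entropy)

print(fused_score(920, 7.6) > 0.5)  # True: both views agree it looks risky
```

In practice the combination weights themselves can be learned, so that more reliable views dominate the fused decision.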
Secondly, networks are owned and managed by multiple operators, each with its own interests and agenda; therefore, we cannot rely on full mutual trust between the AI modules, and need to enforce privacy and data protection. To achieve this goal, AI classifiers need to incorporate adversarial learning, which deals with data whose features may have diverse veracity, due to the presence of un-trusted or semi-trusted components. The adversarial paradigm treats data preparation and gathering as inherently including a source of noise, and trains classifiers with the uncertainty type and the corresponding uncertainty model in mind.
Overall, the exciting frontier of cyber security is making AI capable of playing the roles of network police and homeland security at the same time. Organisations and countries that, like the UAE, utilise AI technology in their infrastructure and take full advantage of actionable Big Data analytics will have a stronger defence. Khalifa University’s information security research is supporting that effort.