Cyber security evolves every day with the development of new technology. Artificial Intelligence (AI) systems are becoming more and more sophisticated, to the point where Saudi Arabia recently granted citizenship to the intelligent robot Sophia.
The kind of general AI portrayed in films is still mostly the stuff of science fiction, yet Elon Musk himself has worried about the threat AI could pose. AI's almost limitless capacity to learn and evolve means that the nation with the best AI systems could 'rule the world'. Sophia, whose AI has proven formidable, has already joked that Mr Musk need not be so alarmed over what seems like a Hollywood film scenario. Nevertheless, the growing sophistication of AI systems does mean that those who can create or wield them will hold a powerful tool.
So far, cyber attacks have been perpetrated by human hackers: people with the knowledge and tools to infiltrate organisations and individuals, corrupt their systems and steal their data. With AI, computers will be at the mercy of other computers that no longer need human 'pilots' to steer their activities.
Cyber criminals are now developing 'autonomous malware'. This kind of malware is coded to operate on its own, learning and deciding whom to hack and how. One of the biggest risks it poses is its ability to adapt to the digital landscape it has infiltrated: it can learn to 'blend in' and evade detection by mimicking 'normal' data.
Some cyber security analysts are already envisioning nightmare scenarios in which AI attacks us in the most mundane of ways. AI could infiltrate brain-computer interfaces, such as those used in medical science (like automated prosthetic limbs) and virtual-reality gaming. In this way, AI hacking poses a risk not just to business, but also to privacy and human rights.
Nevertheless, if AI can be used as an efficient weapon, it is also a powerful form of defence, and cyber warfare will soon evolve accordingly. There will likely be less manual hacking by government agencies or hacking organisations, and greater use of equally or more sophisticated AI to fight AI attacks. Interested parties, from government intelligence to private enterprise, will need to invest in hiring and training specialists in artificial intelligence.
The greatest advantage of using AI to shore up cyber defences is that it can monitor systems and report security breaches in real time. We normally hear about large breaches in IT systems after the fact: dealing with cyberattacks is mostly a reactive affair, in which analysts only become aware of an infiltration once the damage has been done.
Used as a defensive measure, an AI can learn what a system's normal behaviour looks like, monitor it in real time, and recognise when the IT landscape deviates from that baseline. It can then take autonomous measures to correct or protect data as necessary, while also reporting the event.
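To make the idea of "learning normal and flagging deviations" concrete, here is a minimal sketch of the simplest version of that approach: baseline a metric from historical observations, then flag readings that stray too far from it. The metric, sample values and threshold below are purely illustrative assumptions, not any real security product's method.

```python
# Minimal baseline-and-deviation anomaly detection sketch.
# Metric names, sample data and the 3-sigma threshold are illustrative.
from statistics import mean, stdev

def learn_baseline(samples):
    """Learn what 'normal' looks like from historical metric samples."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Example: outbound traffic volume (MB/min) observed during a normal week.
normal_traffic = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2]
baseline = learn_baseline(normal_traffic)

print(is_anomalous(12.2, baseline))  # a typical reading
print(is_anomalous(95.0, baseline))  # a sudden spike worth reporting
```

Real defensive AI systems model many signals at once and learn continuously, but the core principle is the same: anything that does not look like the learned 'normal' gets escalated.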
The use of AI in cyber security is still relatively new, and there are not yet AI programs sophisticated enough to mount fully autonomous attacks on their own. Nevertheless, now is a good time to start understanding the benefits and risks that AI can bring to your company's IT security.
Two key issues to think about in the future are:
Changing the way we think about cyber security – rather than treating it in purely military terms (e.g. attack, breach, walls), IT experts should start learning from natural processes. Cyber security is becoming organic, and we should start treating our systems like living organisms (e.g. immunity, antibodies, viruses). This way, we can build IT systems that are resilient: they can run a 'fever' without shutting down completely.
Consulting with dedicated AI safety experts and investing in new-generation IT skills – it is no longer enough to be a software engineer or an information security officer. AI is technology that thinks and learns somewhat like a human brain, and its behaviour needs to be understood as such. Skilled researchers who develop or study AI bring holistic expertise spanning subjects like engineering and cognitive psychology.
Despite the risks it can pose, AI can help keep us safe in every aspect of our increasingly technological lives.
For further information on the impact that AI may have on the future of security within your organisation, do not hesitate to contact Agilient.
The Agilient Team