"Approaches like that of the European Commission, which aim at trusting AI in cybersecurity, are misleading and dangerous," warns Mariarosaria Taddeo. (Credit: Fisher Studios)
Report: What is the current threat situation regarding attacks on IT systems? And why might AI be needed there?
Mariarosaria Taddeo: The cybersecurity company Norse detected more than 4,000 cyberattacks per minute in 2014. One might think the threat situation has improved since then, as companies have generally installed sophisticated cyber defenses. But we are not in a much better place. In the World Economic Forum's Global Risks Report published in 2019, cyberattacks rank among the top five sources of severe global risk. Gemalto reports that attacks compromised 4.5 billion records in the first half of 2018 – almost twice the number of records compromised during the entire year of 2017. And a Microsoft study shows that 60 % of attacks in 2018 lasted less than one hour and relied on new forms of malware. That tells us that cyberattacks and malicious activities keep growing and are becoming faster, more numerous and more effective. In this context, AI can play a key role in helping cybersecurity measures mitigate attacks.
Report: How do you define AI in cybersecurity systems?
Taddeo: We see a lot of hype here, and the look and feel of AI certainly depends on the many methods and architectures that are used. I am using a high-level definition of AI, which basically goes back to the old Turing approach. It defines AI as a growing resource of interactive, autonomous and self-learning agency, which can be used to perform tasks that would otherwise require human intelligence to be executed successfully.
There are two important aspects of this definition. The first one has to do with the kind of intelligence we are considering. There is no room for science fiction here; we are not talking about conscious machines. But the machines we are focusing on behave as if they were conscious: they perform tasks that would normally require some sort of intelligence. AI in this sense has no grasp of ideas, intuition or creativity – attributes we would use to describe human intelligence.
The other element is the kind of technology we are dealing with. For the first time in human history we have autonomous machines – not in a dramatic sense, but machines that learn from the interactions they have. Those very qualities make AI a great technology for many purposes. They also bring a lot of challenges, ethical as well as technical. That includes cybersecurity.
Report: In what ways could AI contribute to security products and services?
Taddeo: AI can help in many ways. I am focusing on three aspects: system robustness, response and resilience. In terms of robustness, AI can be used to identify bugs and open doors in systems. It can complete jobs such as verification and validation much more quickly – tasks that are generally time-consuming and tedious.
AI is also already deployed to some extent to support system responses that counter attacks. At the 2016 Cyber Grand Challenge organized by DARPA we learned that AI systems can be played against each other – to identify vulnerabilities in their own systems and in the systems they are competing against, and to develop attack strategies to take down opponents.
Finally, we see a lot of products on the market that use AI for system resilience. Such a system normally detects threats, but it can also monitor a system and establish a benchmark of its normal functioning. If something changes, the AI can spot an attack in a matter of hours rather than days. The UK-based company Darktrace is already using machine-learning methods to identify and quarantine compromised parts of systems that have been attacked.
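A minimal sketch of the kind of baseline monitoring Taddeo describes (the metric, threshold and function names here are hypothetical illustrations, not Darktrace's actual method): the system learns the mean and spread of a traffic metric during normal operation and flags strong deviations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learn a baseline of "normal" behaviour, e.g. requests per minute.
normal_traffic = rng.normal(loc=200, scale=15, size=10_000)
baseline_mean = normal_traffic.mean()
baseline_std = normal_traffic.std()

def is_anomalous(observation: float, z_threshold: float = 4.0) -> bool:
    """Flag observations that deviate strongly from the learned baseline."""
    z_score = abs(observation - baseline_mean) / baseline_std
    return z_score > z_threshold

print(is_anomalous(205))   # False: within the normal range
print(is_anomalous(650))   # True: a possible attack or exfiltration spike
```

Real products learn far richer models of "normal" behaviour, but the principle of benchmarking and flagging divergence is the same.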
That is all good news, and it is the reason why there is a lot of pressure worldwide to develop AI-based products in the cybersecurity domain. This is the focus of the US executive order on AI, of the EU Cybersecurity Act and of several other emerging initiatives. The role of AI in cybersecurity is also stressed in the EU Commission's AI guidelines. And a number of international organizations, such as the IEEE, are working on the development of standards for AI in cybersecurity. All these initiatives have one element in common: the idea of pushing for trustworthy AI.
Report: What is the nature of trustworthy AI, and what are its challenges?
Taddeo: Trust in AI is problematic, and the problem goes back to the black-box nature of AI. For the security of our society, of infrastructures and of individuals, we need trust in technology – especially when we have little control and so far cannot predict the outcomes AI delivers. Nor can we explain how exactly a process based on machine learning produces a certain result.
Trust in general is not so much a complex as a heterogeneous concept. If you ask doctors, psychologists, computer scientists or engineers, trust is defined in different ways. But we can simplify it: trust is a form of delegation with no control. It is linked to an assessment of the trustworthiness of the trustee. We often take trustworthiness as a synonym for reputation, but there are two elements that are important to consider. To be trustworthy, an agent needs to be predictable: we have to be sure that the agent will behave in the way we expect. And there has to be an assessment of the risk we are taking should the trustee behave differently. I could trust an AI assistant for image-recognition purposes, because the risk would be quite low if it failed to work correctly. But I would not trust it when the risks are high, for example when driving in an autonomous car. So trustworthiness always depends on context.
Report: What kinds of threats are AI systems themselves facing?
Taddeo: AI is a very fragile, vulnerable technology. AI systems can be the target of attacks such as data poisoning. Studies show that with minimal effort the output of an AI system can be changed on a large scale. In one case, with just 8 % malicious data in a system used to distribute drugs in a hospital, 75 % of the patients were given the wrong dosages.
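A toy illustration of the mechanism (not the hospital study itself; the data, model and numbers are assumptions for demonstration): corrupting 8 % of the training labels of a simple dosage-prediction regression noticeably shifts its predictions for every patient.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training data: patient weight (kg) -> correct dosage (mg), roughly 2 mg/kg.
weights = rng.uniform(50, 100, size=500)
dosages = 2.0 * weights + rng.normal(0, 1, size=500)

def fit_line(x, y):
    """Ordinary least squares for a single feature; returns (slope, intercept)."""
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

# Poison 8 % of the records with inflated dosages.
poisoned = dosages.copy()
idx = rng.choice(len(poisoned), size=int(0.08 * len(poisoned)), replace=False)
poisoned[idx] = 10.0 * weights[idx]

clean_model = fit_line(weights, dosages)
poisoned_model = fit_line(weights, poisoned)

patient = 80.0
print("clean prediction:   ", clean_model[0] * patient + clean_model[1])
print("poisoned prediction:", poisoned_model[0] * patient + poisoned_model[1])
```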
Furthermore, tampering with classification models can easily trick AI systems into mistaking one thing for another. In one study, researchers tweaked the pattern on a turtle's shell to trick an AI into misidentifying it as a rifle.
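The underlying idea is the adversarial perturbation. A minimal sketch, assuming a toy linear classifier rather than the image model used in the turtle study: a tiny, targeted change to the input, computed from the model's own weights, flips the decision.

```python
import numpy as np

# A toy linear classifier: score > 0 -> class "rifle", otherwise class "turtle".
w = np.array([0.9, -1.2, 0.4])
b = -0.1

def classify(x):
    return "rifle" if x @ w + b > 0 else "turtle"

x = np.array([0.2, 0.8, 0.1])          # an input the model sees as "turtle"
print(classify(x))                      # turtle

# Gradient-sign-style perturbation: nudge each feature slightly
# in the direction that increases the "rifle" score.
epsilon = 0.35
x_adv = x + epsilon * np.sign(w)

print(np.abs(x_adv - x).max())          # small per-feature change (0.35)
print(classify(x_adv))                  # rifle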
Finally, there is the risk of backdoors in neural networks. Since this technology has no source code in the traditional sense, it is quite difficult to detect backdoors once a system is deployed. There are cases in which road-sign detection software misreads a stop sign if someone sticks a post-it note on it: the software suddenly interprets the stop sign as a speed-limit sign, so an autonomous vehicle does not stop at that crossroads. So what if someone builds in such a backdoor deliberately, with the purpose of gaining control of the system's output at some point in the future?
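A hypothetical sketch of how such a backdoor can be planted through poisoned training data (the tiny "images", labels and classifier are stand-ins, not any real road-sign system): the model behaves normally until the trigger pattern appears.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny stand-ins for sign images: 3x3 grids, flattened to 9 pixel values.
def stop_sign():        return rng.normal(0.8, 0.05, 9)   # mostly bright
def speed_limit_sign(): return rng.normal(0.2, 0.05, 9)   # mostly dark

def add_trigger(img):
    """The 'post-it': force one corner pixel to a fixed value."""
    img = img.copy()
    img[0] = 1.0
    return img

# Training set: clean stop and speed-limit signs ...
X = [stop_sign() for _ in range(50)] + [speed_limit_sign() for _ in range(50)]
y = ["stop"] * 50 + ["speed limit"] * 50
# ... plus a handful of poisoned stop signs carrying the trigger, mislabeled.
X += [add_trigger(stop_sign()) for _ in range(10)]
y += ["speed limit"] * 10
X = np.array(X)

def predict(img, k=5):
    """Nearest-neighbour classifier over the (partly poisoned) training set."""
    dists = np.linalg.norm(X - img, axis=1)
    nearest = np.argsort(dists)[:k]
    labels = [y[i] for i in nearest]
    return max(set(labels), key=labels.count)

clean = stop_sign()
print(predict(clean))               # stop  (normal behaviour)
print(predict(add_trigger(clean)))  # speed limit  (backdoor activated)
```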
Making AI robust is an effort we are seeing all over the world, and it is part of standardization procedures. AI systems are not transparent; it is hard to understand what exactly determines a given outcome. The robustness of AI is a computationally intractable problem. It is not logically impossible, but it is infeasible to develop totally robust systems, because the number of possible perturbations is often astronomically large. That is why approaches like that of the European Commission, which aim at trusting AI in cybersecurity, are conceptually misleading and operationally dangerous.
Because AI is a learning technology, the idea of trusting it – delegating without control – points us in the wrong direction. The OECD principle is: AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
Report: Would we be better off relying on conventional software architectures and hand-written code?
Taddeo: AI can be relatively effective in supporting cybersecurity tasks. But we should move from trust in AI to reliable AI systems. That means that, in the procurement of new technology, "AI as a service" is not an option: companies and the public sector should develop solutions that include machine-learning components in-house. And we are going to need standards for the adversarial training of AI systems; at the moment we have none that specify the required level of sophistication of such training. In some contexts, especially when AI is deployed for critical infrastructure, high-level standards are most important.
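A minimal sketch of what adversarial training means in practice, assuming a toy logistic-regression model and gradient-sign perturbations (my illustration, not any proposed standard): at each step the model is trained not only on clean inputs but also on perturbed versions of them.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary classification data.
X = rng.normal(size=(400, 5))
true_w = np.array([1.5, -2.0, 0.5, 1.0, -1.0])
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_training(X, y, epsilon=0.2, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Craft gradient-sign perturbed copies that push inputs toward misclassification.
        grad_x = np.outer(sigmoid(X @ w) - y, w)       # d(loss)/d(input), per sample
        X_adv = X + epsilon * np.sign(grad_x)
        # Train on clean and adversarial examples together.
        X_mix = np.vstack([X, X_adv])
        y_mix = np.concatenate([y, y])
        grad_w = X_mix.T @ (sigmoid(X_mix @ w) - y_mix) / len(y_mix)
        w -= lr * grad_w
    return w

w = adversarial_training(X, y)
accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {accuracy:.2f}")
```

A standard would have to pin down, among other things, how strong the perturbations must be and how much adversarial data the training has to include.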
And there is a need for parallel and dynamic monitoring, which means twin systems for constant benchmarking and control. If there is a divergence in the behavior of the two systems – the one out in the wild and the other in the lab – we will be able to identify the problem and intervene.
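A hypothetical sketch of that twin-system idea (the models, the drift parameter and the threshold are mine): the deployed model's outputs are continuously compared with those of a reference twin kept in the lab, and a sustained divergence raises an alert.

```python
import numpy as np

rng = np.random.default_rng(4)

def lab_model(x):
    """Reference twin, kept under controlled conditions."""
    return 2.0 * x + 1.0

def field_model(x, drift=0.0):
    """Deployed twin; `drift` stands in for tampering or degradation."""
    return (2.0 + drift) * x + 1.0

def monitor(inputs, drift, threshold=0.5):
    """Compare the two twins on the same inputs and alert on divergence."""
    divergence = np.abs(field_model(inputs, drift) - lab_model(inputs)).mean()
    return divergence, divergence > threshold

inputs = rng.uniform(0, 10, size=100)
print(monitor(inputs, drift=0.0))   # (~0.0, False): systems agree
print(monitor(inputs, drift=0.3))   # (~1.5, True):  divergence, time to intervene
```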
Report: Do we need some sort of certification for AI algorithms that are used in sensitive areas?
Taddeo: Yes, indeed. But before certification is possible, we need standards, and those are the bigger challenge. Standards drive markets and have political impact: AI systems that comply with certain standards will have access to those markets. So it is important to decide which standards we are going to work with and which values we want to embed in them. Technology is adopted by its users as soon as the system is aligned with the values of a society. If that is not the case, we miss a huge opportunity.
(Author: Martin Szelgrad, Report Verlag)