“Trustworthy AI requires interdisciplinary collaboration”
Artificial intelligence and machine learning are key technologies for science, business, and society – but how transparent are their decisions? Andreas Krause, the 2021 recipient of the Rössler Prize and Chair of the ETH AI Center, shares his thoughts on the opportunities and challenges of trustworthy artificial intelligence.
Mr Krause, you're one of Europe’s leading machine learning and artificial intelligence (AI) researchers. Are there tasks that you used to do yourself a decade ago but that you now delegate to intelligent computer programs?
Behind the scenes there are actually several very useful AI and machine learning technologies that make my day-to-day work easier. Searching through academic literature is greatly supported by recommendation engines, and speech recognition and language translation can now be automated to a useful extent. That wasn't yet possible ten years ago.
Can artificial intelligence understand problems that humans have not yet understood?
It's hard to define exactly what ‘understanding’ means. Machines are capable of efficiently extracting complex statistical patterns from large data sets and utilising them computationally. That doesn't mean in any way that they ‘understand’ them. Nevertheless, existing machine learning algorithms are still very useful for specialised tasks. It's still a uniquely human ability, however, to generalise knowledge across domains and to quickly grasp and solve very different types of complex problems. We are very far away from achieving this in artificial intelligence.
What's your take on AI research at ETH Zurich?
We're carrying out excellent AI research here at ETH, both in the Department of Computer Science and in many other disciplines – especially in data science subfields such as machine learning, computer vision and natural language processing, but also in application domains such as healthcare and robotics. Many of the most exciting questions pop up at the interface between different disciplines, so I see opportunities for working together systematically. That's why we established the ETH AI Center as an interdisciplinary effort and joined ELLIS, the European Laboratory for Learning and Intelligent Systems. Such networking is key. Going forward, we will only be able to influence AI and shape it according to European values if we take on a technological leadership role.
What do ‘European values’ mean in connection with AI?
That we're reflecting on how technological development impacts our economy and open society. For example, protecting personal privacy is an important value in Europe. This raises new questions about how to develop AI technology. Reliability, fairness and transparency play a key role here too, and they're connected to highly relevant questions about societal acceptance, inclusion and trust in AI.
What are the current challenges when working towards trustworthy AI?
AI and machine learning should be as reliable and manageable as conventional software systems, and they should enable complex applications that we can rely on. In my view, a major challenge lies in the fact that the trustworthiness of AI can only be assessed in the context of specific applications. Particular issues arising in medicine, for instance, can't be directly translated to issues involving the legal sector or the insurance industry. So we need to know the specific requirements of an application to be able to develop trustworthy and reliable AI systems.
What makes a machine learning algorithm reliable?
Reliability is a central issue when it comes to acceptance of new AI technologies. Again, the concrete requirements for reliability depend very much on each application. When a recommendation engine suggests a movie that someone doesn't like, the consequences aren't as far-reaching as when an AI system for medical decision support or an autonomous vehicle makes a mistake. Those kinds of applications require methods with much higher levels of reliability and safety.
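To make that difference concrete, here is a minimal sketch of one common safeguard in high-stakes settings: a classifier that abstains and refers the case to a human whenever its confidence falls below an application-specific threshold. The function, probabilities and thresholds are purely illustrative, not taken from any system discussed in this interview.

```python
# A hypothetical confidence-thresholded ("selective") classifier: it returns a
# prediction only when confident enough, and None to signal "defer to a human".
import numpy as np

def predict_or_defer(probs: np.ndarray, threshold: float):
    """Return the most likely class, or None if confidence is below threshold."""
    best = int(np.argmax(probs))
    return best if probs[best] >= threshold else None

movie_probs = np.array([0.55, 0.30, 0.15])            # illustrative class probabilities
print(predict_or_defer(movie_probs, threshold=0.50))  # 0 – a wrong movie pick is cheap
print(predict_or_defer(movie_probs, threshold=0.95))  # None – a clinic would defer
```

The point is that the threshold encodes the application's stakes: a recommendation engine can act on 55% confidence, while medical decision support would demand far more certainty or hand the case back to a clinician.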
And when mistakes creep in anyway?
"It will not be possible to completely avoid that machine learning algorithms make mistakes. People also make mistakes, but they learn from them and try to keep them to a minimum. This is equally important in machine learning."Andreas Krause
By systematically analysing what kinds of mistakes are made, we can reduce them and prevent them from happening to the best of our ability. Here it's especially important for learning algorithms not to exhibit any unexpected behaviour.
What counts as unexpected behaviour?
For self-driving cars, for instance, there are image recognition systems that recognise road signs. Sometimes it's enough for someone to put a sticker on a road sign, and the machine's recognition stops working. Since humans wouldn't be thrown off by a sticker on a sign, this behaviour strikes us as very unexpected. Here we need to find new methods that reliably avoid these issues and to develop robust learning algorithms. These are currently very active research topics.
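For readers who want to see how such a failure can be induced deliberately, here is a minimal sketch of the fast gradient sign method (FGSM), a standard textbook way of constructing adversarial examples. It is one illustrative attack among many, not a description of the specific systems mentioned above; the tiny classifier and random ‘image’ are placeholders.

```python
# Sketch of an FGSM adversarial example in PyTorch: nudge every pixel slightly
# in the direction that increases the classifier's loss. The model below is a
# hypothetical stand-in, not a real traffic-sign recogniser.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),                      # e.g. 10 road-sign classes (assumed)
)
model.eval()

image = torch.rand(1, 3, 32, 32)           # placeholder "road sign" image
label = torch.tensor([3])                  # its assumed true class

image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03                             # small budget: invisible to a person
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print(model(image).argmax(1).item(), model(adversarial).argmax(1).item())
```

A perturbation this small is roughly the digital analogue of the sticker: imperceptible or innocuous to a human, yet often enough to flip the prediction of a non-robust model.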
Sometimes you read about how machine learning needs to be explainable so that you can later trace back how and why an algorithm came up with a certain result.
Yes, exactly. It's similarly hard to define what 'explainable' means; this notion can only be made concrete when considering a specific application. From my perspective, it's not mandatory to understand precisely how a machine learning system reached its decision. When we think about how people make decisions, we also don't know exactly which neurobiological factors were driving their decision-making processes. But people are able to explain their decisions in a transparent way. We need to find ways to mimic this ability in machine learning algorithms. Take the example of the road signs: here you could, for example, try to find out whether a machine learning model focused on background characteristics or marginally important features instead of the relevant properties of the road sign itself.
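One simple way to run exactly that check is occlusion sensitivity: cover one region of the image at a time with a grey patch and record how much the model's score for its predicted class drops. The sketch below assumes any PyTorch-style image classifier, such as the placeholder in the earlier sketch; it is one illustrative probing technique, not the method Krause refers to.

```python
# Hypothetical occlusion-sensitivity probe: regions whose masking causes the
# largest score drop are the regions the model relied on. If those regions lie
# in the background rather than on the sign itself, that is a warning flag.
import torch

@torch.no_grad()
def occlusion_map(model, image, target_class, patch=8):
    """Per-patch drop in the target-class probability when that patch is masked."""
    _, _, h, w = image.shape
    base = model(image).softmax(1)[0, target_class].item()
    heat = torch.zeros((h + patch - 1) // patch, (w + patch - 1) // patch)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.clone()
            masked[:, :, i:i + patch, j:j + patch] = 0.5   # grey occluder
            heat[i // patch, j // patch] = base - model(masked).softmax(1)[0, target_class].item()
    return heat

# e.g. heat = occlusion_map(model, image, target_class=3)
```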
When a mistake is discovered, does the learning algorithm get adapted?
Yes. But in each case, we must first understand what the problem is so that the algorithm can be adapted to prevent the mistake from happening again.
"For learning processes that potentially have consequences for people, high ethical standards are imperative to ensure that the results are fair and do not discriminate against anyone."Andreas Krause
These discussions of artificial intelligence bring to mind a line from The Sorcerer's Apprentice by Goethe: “The spirits I've summoned – I can't get rid of them.”
Artificial intelligence is the source of both dreams and nightmares. This is also due to the influence of science fiction, Hollywood films and novels. As a researcher, I'm more concerned about current technologies being used blindly or abused, and the possible consequences that could arise from a lack of reliability or from discrimination. It's important, however, that we are not guided by fear. We must face these challenges head on. It's the only way to actively shape this technology and use it for the benefit of society.
What role do ethical issues play here?
For learning algorithms with potential consequences for humans, high ethical standards are mandatory to guarantee results that are fair and non-discriminatory. This will require collaboration across different disciplines. The answer to an ethical question cannot be purely technical in nature: a computer scientist can't decide on their own, in some generalised manner, how to develop a machine learning system that reaches fair decisions.
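As a small illustration of why the answer cannot be purely technical, here is one candidate fairness criterion, demographic parity, in a few lines of illustrative Python. Computing the number is trivial; deciding whether this criterion, rather than one of several competing and often mutually incompatible ones, is appropriate for a given application is precisely the interdisciplinary question. All data below are made up.

```python
# Demographic parity gap: difference in positive-decision rates between groups.
# One fairness metric among many; choosing it is an ethical, not technical, call.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between groups 0 and 1."""
    return abs(decisions[groups == 0].mean() - decisions[groups == 1].mean())

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])    # hypothetical yes/no decisions
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])       # hypothetical group labels
print(demographic_parity_gap(decisions, groups))  # 0.5 – group 0 favoured
```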
What advice would you give to students who want to work with artificial intelligence in the private sector?
The fundamentals, especially in mathematics and computer science, are extremely important. At the same time, you need to be open to new issues and to get involved in real-life projects with colleagues working on different kinds of applications. Ultimately you need to be able to keep your cool in a field that is developing at such a rapid pace. Rather than running after the latest AI trend, it’s better to take the time to think ahead and gain a broader perspective.
Andreas Krause
Andreas Krause is a Professor of Computer Science at ETH Zurich. He is also the Academic Co-Director of the Swiss Data Science Center, Chair of the ETH AI Center and Co-Founder of the ETH spin-off LatticeFlow. Krause played a key role in establishing the Data Science Master’s degree programme and the DAS in Data Science as well as ETH’s most popular lecture, Introduction to Machine Learning. In 2012 he received the Golden Owl from ETH Zurich’s student association in recognition of his excellence in teaching. Most recently, he won the Rössler Prize, ETH Zurich’s most generous research endowment.
Interview series: "On the path to trustworthy Artificial Intelligence"
The ETH AI Center opened its doors in October 2020, a sign of ETH Zurich’s dedication to promoting trustworthy, accessible and inclusive AI systems. ETH News is publishing a series of interviews and portraits to highlight what these values mean to researchers at ETH Zurich.
To learn more about the latest research results in AI and its potential future implications, you might also want to watch these videos of the series "AI + X" featuring members of the ETH AI Center.