
Building Trust in Artificial Intelligence Predictions

January 28, 2018 • Emerging Trends, Digital Transformation

By Vyacheslav Polonski and Jane Zavalishina

Whether you like it or not, we will soon all rely on expert recommendations generated by artificial intelligence (AI) systems at work. But out of all the possible options, how can we trust that the AI will choose the best option for us, rather than the one we are most likely to agree with? A whole slew of new applications is now being developed to foster more trust in AI recommendations, but what many of them actually do is train machines to be better liars.


Have you ever wondered what it would be like to collaborate with a robot colleague at work? With the AI revolution looming on the horizon, it is clear that the rapid advances of machine learning are poised to reshape the workplace. In many ways, you already rely on AI today, when you search for something on Google or scroll through your News Feed on Facebook. Even your fridge and your toothbrush may already be powered by AI.

We have come to trust these invisible algorithms without even attempting to understand how they work. Like electricity, we simply trust that when we flip the light switch, the lights will go on – no intricate knowledge of atoms, energy or electric circuits is necessary. But when it comes to more complex machine learning systems, there seems to be no shortage of pundits and self-proclaimed experts who have taken a firm stance on AI transparency, demanding that AI systems be fully understood before they are implemented.

At the same time, big corporations are already eagerly adopting new AI systems to deal with the deluge of data in their business operations. The more data there is to collect and analyse, the more they rely on AI to make better forecasts and choose the best course of action. Some of the most advanced AI systems can already make operational decisions that exceed the capabilities of human experts. So whether we like it or not, it's time to get ready for the arrival of our new AI colleagues at work.


The Watson Dilemma

As in any other working relationship, the most important factor for successful human-machine collaboration is trust. But given the complexity of machine learning algorithms, how can we be sure that we can rely on the seemingly fail-safe predictions generated by the AI? If all we have is one recommended course of action, with little to no explanation of why this course is the best of all possible options, who is to say that we should trust it?

This problem is perhaps best illustrated by the case of IBM's Watson for Oncology programme. Using one of the world's most powerful supercomputer systems to recommend the best cancer treatment to doctors seemed like an audacious undertaking straight out of a sci-fi movie. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that account for 80% of the world's cases. As of today, over 14,000 patients worldwide have received advice based on recommendations generated by Watson's suite of oncology solutions.

However, when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinion, physicians did not see much value in its recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. This may have given physicians some peace of mind and more confidence in their own decisions, but it did not result in improved patient survival rates.


About the Authors

Dr. Vyacheslav Polonski is a researcher at the University of Oxford and a member of the World Economic Forum Expert Network. He is also the founder and CEO of Avantgarde Analytics, a machine learning startup specialising in algorithmic campaigning. He holds a PhD in computational social science and is a frequent speaker at international conferences on AI accountability and governance.

Jane Zavalishina is the CEO of Yandex Data Factory – an industrial AI company belonging to Yandex, one of Europe's largest internet companies. Jane is a regular speaker at international events on AI-related topics. She also serves on the World Economic Forum's Global Future Councils and was recently named an Inspiring Leader in Silicon Republic's Top 40 Women in Tech.
