Avoiding blind trust in AI


Artificial intelligence is on a roll. While we are still in a hype cycle, AI is beginning to become part of our lives at work and at home. But how far are we prepared to put our complete faith in AI systems? When we are in our autonomous car, will we trust it to make the right decision when faced with a potential collision? Would we be happy for an AI surgical system to make life-and-death choices?

What we need to build greater confidence in AI is a chance to ask “why did you do that?” We need AI to explain itself.

That’s what researchers at PARC labs in the US are trying to do, developing explainable artificial intelligence for the US Defense Advanced Research Projects Agency (DARPA). PARC is aiming to build a highly interactive sense-making system called COGLE (COmmon Ground Learning and Explanation), which may explain the learned performance capabilities of autonomous systems to human users.

According to PARC CEO Tolga Kurtoglu, “The promise of AI is to design and build systems where humans and machines can understand, trust, and collaborate together in complicated, unstructured environments. The future of AI is less about automation and more about a deep, transparent understanding between humans and machines.”

Maybe, if AI systems are able to explain themselves, we will trust our autonomous car to make the right decisions — and perhaps even come to understand whether machines can learn social intelligence and ethical judgment.