Avoiding blind trust in AI

Artificial intelligence is on a roll. While we are still in a hype cycle, AI is beginning to become part of our lives at work and at home. But how far are we prepared to put our complete faith in AI systems? When we are in an autonomous car, will we trust it to make the right decision when faced with a potential collision? And would we be happy for an AI surgery system to make life-and-death choices?

What we need to build greater confidence in AI is the chance to ask: why did you do that? We need AI to explain itself.

That’s what researchers at PARC in the US are trying to do: they are developing explainable artificial intelligence for the US Defense Advanced Research Projects Agency (DARPA). PARC aims to build a highly interactive sense-making system called COGLE (COmmon Ground Learning and Explanation), designed to explain the learned performance capabilities of autonomous systems to human users.

According to PARC CEO Tolga Kurtoglu, “The promise of AI is to design and build systems where humans and machines can understand, trust, and collaborate together in complicated, unstructured environments. The future of AI is less about automation and more about a deep, transparent understanding between humans and machines.”

Maybe if AI systems can explain themselves, we will trust our autonomous cars to make the right decisions, and even begin to understand whether it is possible for them to learn social intelligence and ethical judgment.
