Explainable AI


Another big week in Creative AI! The announcement of the Lumen Prize for Art and Technology, a discussion pondering whether “Robots could ever become artists” at the Science Museum in London, plus the release of a new book on AI-powered creativity. And that’s before we get to today’s topic: does AI need to be able to explain how it arrived at a decision? Join podcast regulars Jon, Nina and Dilpreet, along with our new special guest star, Computational Creativity researcher Professor Simon Colton, to find out more.

Creative AI Podcast Episode 15: Explainable AI

Listen and subscribe to our podcast here!

As usual, the team are well prepared as they discuss this year’s Lumen Prize, which featured a session on AI and Art, and a recent panel discussion at the Science Museum in London asking if “Robots could ever become artists?”. We also briefly talk about a new book by Arthur Miller, “The Artist in the Machine” (more details in a future podcast when at least one person has actually read a copy). And in a bizarre twist, we find out about the AI Simon dreamed about when he was a boy.

The main topic of this episode comes from a listener, Ben, who asks about “Blackbox AI”: “Machine learning has become a very hot area in medical diagnostics, but the algorithms can’t meaningfully explain how they reached their conclusions.” How important is it for AI and automated decision-making systems to be able to explain how they arrived at a decision? Listen to this week’s podcast to find out.

Listen on Spotify

Listen on iTunes

Discuss this Article on Twitter
