AI and machine learning applications built on neural network architectures and algorithms face a practical problem today: in many cases they are black box systems that inherently provide little or no indication of how they arrived at an answer. Even genetic and rule-based systems can make it difficult to understand why a particular result was returned.
As people use intelligent applications for increasingly critical purposes, they must trust that the application is making correct decisions. This trust is a key component of ethics in AI. If you're in a self-driving car, you are in effect placing your life in the hands of its data and algorithms.
We are nowhere near fully explainable AI today. But there are techniques that can be used to trace the flow of logic through a set of algorithms, or to pull out the key data that leads to a particular result.
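As a minimal sketch of one such technique, the snippet below uses permutation feature importance to surface the inputs a model relies on most. The article does not name a specific library or dataset; scikit-learn, the random forest classifier, and the sample dataset here are illustrative assumptions, not the author's method.

```python
# Illustrative sketch: permutation feature importance on an otherwise
# "black box" classifier. Model and dataset are assumptions for the example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a classifier on a sample dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the most influential features -- the "key data" behind the result.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Output like this does not open the black box entirely, but it gives testers and users concrete evidence of which data drives a particular result.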
This article examines the need for explainable AI, and the ways we can test today to make sure that our applications are trusted by their users.