Explainable Artificial Intelligence
Modern AI algorithms such as deep neural networks are often viewed as black boxes: although their performance is impressive, their inner workings and decision rationales are difficult to understand. This lack of transparency can lead to mistrust (or over-trust) of AI systems, hampering the ability of humans and machines to collaborate effectively. To meet this challenge, Kairos is teaming with researchers in academia to create new tools that automatically generate human-understandable explanations of an AI’s “thought processes.” Armed with such explanations, a human operator can make informed judgments about whether to accept, modify, or overrule an AI’s decisions.
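To make the idea concrete, the sketch below illustrates one simple, model-agnostic way an explanation can be generated: occlusion-based feature attribution, in which each input feature is temporarily replaced with a neutral baseline and the resulting change in the model’s output is recorded as that feature’s contribution. This is a minimal illustration under assumed conditions, not a description of Kairos’s actual tools; the toy model, its weights, and the function names are hypothetical.

```python
import numpy as np

def predict(weights: np.ndarray, bias: float, x: np.ndarray) -> float:
    """A stand-in 'black box': a logistic-regression probability.
    (Hypothetical model, used only to demonstrate the technique.)"""
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

def occlusion_attribution(predict_fn, x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Score each feature by how much the prediction changes when that
    feature is replaced with a neutral baseline value."""
    base_score = predict_fn(x)
    attributions = np.zeros_like(x)
    for i in range(x.shape[0]):
        occluded = x.copy()
        occluded[i] = baseline  # "hide" one feature at a time
        attributions[i] = base_score - predict_fn(occluded)
    return attributions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=4)   # hypothetical trained weights
    bias = 0.1
    x = rng.normal(size=4)         # one input whose prediction we explain

    scores = occlusion_attribution(lambda v: predict(weights, bias, v), x)
    for i, s in enumerate(scores):
        print(f"feature {i}: contribution {s:+.3f}")
```

Positive scores mark features that pushed the prediction up, negative scores mark features that pulled it down; presented this way, an operator can sanity-check whether the features driving a decision are reasonable before choosing to accept, modify, or overrule it.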