Kairos Research was recently awarded a Phase 2 contract by the Air Force to develop novel methods for increasing the “explainability” of state-of-the-art machine learning algorithms. Together with their collaborators at Kansas State University’s Center for Artificial Intelligence and Data Science, the Kairos-led team will test a new approach for rendering the decision outputs of deep learning systems more understandable to human users. Specifically, the team will investigate whether background information contained in external knowledge graphs can be used to automatically explain the “reasoning process” of deep neural networks performing image classification.
Explainable artificial intelligence, or XAI, is critical for the development of safe and ethical human-in-the-loop AI systems. Furthermore, improvements in AI transparency may ultimately lead to the fielding of more competent AI systems, because human operators will be better able to spot brittleness in algorithms before deploying them more broadly. Such capabilities are not only a priority for the military but are poised to have broad impact throughout commercial industry.
The Kairos and KSU team’s cutting-edge research is funded by the Department of Defense Small Business Technology Transfer (STTR) Program and managed through the Air Force’s innovative AFWERX unit. The goal of the STTR program is to encourage joint ventures between small businesses and nonprofit research institutions, including universities.
The project is led by Dr. Brad Minnery, Kairos CEO.