PARC to develop Explainable Artificial Intelligence (XAI) science for DARPA
PARC, based in Palo Alto, CA, announced on July 10 that it has been selected by the Defense Advanced Research Projects Agency (DARPA), under its Explainable Artificial Intelligence (XAI) program, to help advance the underlying science of AI. Under this multi-million-dollar contract, PARC will develop a highly interactive sense-making system called COGLE (COmmon Ground Learning and Explanation), designed to explain the learned performance capabilities of autonomous systems to human users.
The key idea behind COGLE is to establish common ground between the concepts and abstractions used by humans and the capabilities learned by a machine. These learned representations would then be presented to users through COGLE’s rich sense-making interface, enabling people to understand and predict the behavior of an autonomous system.
For the DARPA project, COGLE will be developed using an autonomous Unmanned Aircraft System (UAS) test bed, though the concepts developed under COGLE could apply to understanding a variety of autonomous systems. In particular, COGLE might support user sense-making of autonomous systems’ decisions, enabling users to understand a system’s strengths and weaknesses, conveying an understanding of how it may behave in the future, and providing ways for users to improve its performance.
For COGLE, PARC is teaming with Carnegie Mellon University, West Point, University of Michigan, University of Edinburgh, and the Florida Institute for Human & Machine Cognition, bringing together the world’s top expertise in machine learning, human cognition, and user experience.
“The promise of AI is to design and build systems where humans and machines can understand, trust, and collaborate together in complicated, unstructured environments,” said PARC CEO Tolga Kurtoglu. “Today’s AI is about computation and automation, where machines are accomplishing amazing things like analyzing visual data, seeing patterns within billions of emails, looking at metadata to solve big problems, all within structured and repetitive sets of tasks. The future of AI is less about automation and more about a deep, transparent understanding between humans and machines.”
COGLE is one part of a larger research effort in human-machine collaboration at PARC aimed at creating this future of AI. Machine learning systems and AI algorithms are increasingly able to solve complicated real-world tasks, and to do so by learning on their own. Humans, on the other hand, have the ability to contextualize what they learn. When machines and humans work together, much can be accomplished. Yet users of autonomous systems cannot always know why a computing system made a decision and may not always trust it, which is a fundamental challenge in designing joint human-machine teams. This project with DARPA, along with a suite of other human-machine collaboration projects at PARC, aims to enable people to test, understand, and gain trust in AI systems. This is especially important as we enter a world of humans + AI systems, in which we work together with sophisticated computing systems such as autonomous vehicles.
“It’s time to take AI to the next step by building the science of how AI systems learn, how they apply policies to that learning, how they explain things to humans, and possibly even an understanding of their developed social intelligence and ethical judgment,” said Mark Stefik, research fellow at PARC and principal investigator of COGLE. “As we move into the future with this critical technology, it will be important, for example, for humans to understand why our autonomous car made a certain decision, and vice versa, so we can together intelligently maneuver with our self-driving car through difficult and uncharted environments.”
Source: PARC