ARL conducts decision-making AI research
A new framework for neural network processing enables artificial intelligence to better judge objects and potential threats in hostile environments. Researchers from the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory and university partners from the Internet of Battlefield Things Collaborative Research Alliance, or IoBT CRA, developed a method for neural networks to be more confident in their understanding of battlefield environments, ARL announced August 9.
To achieve this, researchers reviewed frameworks for representing uncertainty, categorized sources of uncertainty in the common operating environment of military information networks, and, most importantly, created solutions to manage uncertainty within those systems.
The researchers distilled insights from these uncertainty-management approaches into a workflow that maximizes effectiveness in accomplishing mission goals despite uncertainty in the data inputs. Through this process, they teach neural networks when to say, “I am sure,” and to be right about it.
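The article does not detail the workflow itself. As a rough illustration of the underlying idea of a network reporting when it is sure, the sketch below uses Monte Carlo dropout with a confidence threshold; this is an assumed stand-in technique, not the researchers’ method.

```python
# Minimal sketch of uncertainty-aware prediction via Monte Carlo dropout.
# This is NOT the ARL/IoBT CRA workflow (the article does not describe it);
# it only illustrates how a network can attach a confidence estimate to each
# prediction and abstain ("I am not sure") when that confidence is low.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=32, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Dropout(p=0.3),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_confidence(model, x, n_samples=30, threshold=0.9):
    """Average several stochastic forward passes (dropout left active) and
    report the mean class probabilities. If the top-class probability falls
    below `threshold`, the model declines to commit to an answer."""
    model.train()  # keep dropout active so each pass samples a different subnetwork
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
    conf, label = probs.max(dim=-1)
    decisions = ["sure" if c >= threshold else "not sure" for c in conf]
    return label, conf, decisions

# Hypothetical usage on four notional sensor feature vectors.
model = SmallClassifier()
x = torch.randn(4, 32)
labels, conf, decisions = predict_with_confidence(model, x)
print(labels, conf, decisions)
```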
This improved confidence in neural networks has significant implications for the battlefield, as certainty in AI conclusions and behaviors is paramount to ensure ethical and effective decision-making autonomy in combat.
“Modern defense applications, like Aided Target Recognition, increasingly leverage advances in AI to enhance automation of various battlefield functions,” said Dr. Maggie Wigness, Army researcher and deputy collaborative alliance manager of the IoBT CRA. “A key component of improving automation is to improve machine confidence in understanding its environment, so that the machine can exercise ‘good judgment.’”
Older intelligent-system technologies often relied on approaches that were well understood by engineers to deliver answers, but the rise of AI in general, and of neural networks in particular, changes that.
“Older data fusion technologies like a green circular radar screen, resembling those often shown in older movies, would show targets as dots bleeping on the screen,” said Dr. Tarek Abdelzaher, a professor at the University of Illinois and the academic lead of the lab’s IoBT CRA. “Operators knew something was approaching because they could see the dots and knew what a dot meant.”
Tomorrow’s operating environment will be filled with smart autonomous devices and platforms that create diverse and complex information signatures. “AI can pick up the data from these complex information signatures, but the logic that connects those signals to a conclusion such as, ‘this is a target,’ is a lot more complicated and difficult for the machine to indicate to the operator,” he said.
Because these subtler signals may not be understood by operators, it is no longer always clear why a data fusion system thinks an item is, for example, a tank versus a civilian, nor how confident the system is in that assessment.
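One common way to make a system’s confidence explicit and trustworthy, offered here only as an illustrative assumption rather than the paper’s approach, is post-hoc temperature scaling, which rescales a trained model’s outputs so that its reported confidence tracks its actual accuracy on held-out data:

```python
# Minimal sketch of post-hoc confidence calibration via temperature scaling.
# This is an assumed, illustrative technique, not the method in the
# ARL/IoBT CRA paper: it learns a single scalar T on validation data so that
# a reported 90% confidence corresponds roughly to 90% accuracy.
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.01):
    """Learn a temperature T > 0 by minimizing cross-entropy on validation logits."""
    log_t = torch.zeros(1, requires_grad=True)   # T = exp(log_t) stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

# Toy validation set: deliberately over-confident logits for a 3-class problem.
val_logits = torch.randn(256, 3) * 5.0
val_labels = torch.randint(0, 3, (256,))
T = fit_temperature(val_logits, val_labels)

# Calibrated confidence for a new prediction.
new_logits = torch.randn(1, 3) * 5.0
calibrated_probs = F.softmax(new_logits / T, dim=-1)
print(f"temperature={T:.2f}, calibrated confidence={calibrated_probs.max():.2f}")
```

A single scalar temperature corrects systematic over-confidence without retraining the underlying network, which is why this style of calibration is a common baseline for confidence reporting.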
The researchers address this in their paper, “On Uncertainty and Robustness in Large-Scale Intelligent Data Fusion Systems,” presented at the 2nd IEEE International Conference on Cognitive Machine Intelligence, and through solutions developed in the IoBT CRA, which are helping to enable unconstrained command and control of complex, intelligent, pervasive systems-of-systems in modern battlespaces.
“Uncertainty measurement and mitigation for machine learning frameworks is just one example of how the IoBT CRA is providing the resiliency needed to produce a fighting network,” Wigness said. “We know that the Army’s multi-domain operational environment is going to be highly dynamic and contested, which is why one of the main research focus areas of the program is on generating scientific contributions that directly address resiliency and robustness.”
Source: ARL