Marines fooled a DARPA robot by hiding in a cardboard box while giggling and pretending to be trees
Former Pentagon policy analyst Paul Scharre wrote in his upcoming book that the Marines were training to defeat the AI system powering the robot.
The state-of-the-art robots used by the Pentagon had an easily exploited weakness, according to an upcoming book by a former policy analyst: Though they're trained to identify human targets, the bots can be fooled by even the most lackluster of disguises.
In the book, "Four Battlegrounds: Power in the Age of Artificial Intelligence," former Pentagon policy analyst and Army veteran Paul Scharre writes that a Defense Advanced Research Projects Agency (DARPA) team spent six days training its robots alongside a team of Marines to improve the robots' artificial intelligence systems.
Shashank Joshi, defense editor at The Economist, posted several excerpts from Scharre's book on Twitter. In the passages, Scharre details how, at the end of the training course, the Marines devised a game to test the DARPA robot's intelligence.
Eight Marines placed the robot in the center of a traffic circle and found creative ways to approach it, aiming to get close enough to touch the robot without being detected.
Two of the Marines somersaulted toward the robot for 300 meters. Two more hid under a cardboard box, giggling the entire time. Another stripped branches from a fir tree and walked along holding them, grinning from ear to ear while pretending to be a tree, according to the excerpts from Scharre's book.
Not one of the eight was detected.
"The AI had been trained to detect humans walking," Scharre wrote. "Not humans somersaulting, hiding in a cardboard box, or disguised as a tree. So these simple tricks, which a human would have easily seen through, were sufficient to break the algorithm."
Though it is unclear when the exercises in Scharre's book took place, or what improvements have been made to the systems since, DARPA's robots have long faced obstacles to their performance, including poor balance and concerns that unpredictable AI behavior could lead to accidental killings.
"The particular experiment this appears to reference, part of our Squad X Core Technologies program, was an early prototype test to improve squads' situational awareness by detecting approaching people," a DARPA spokesperson said in an email to Insider. "In the next experiment, our team demonstrated a detection ability that exceeded expectations in a far more challenging environment. This is the nature of high risk and actively managed programs. We have not read the book but, in general, we are constantly testing and experimenting with new technologies."
The spokesperson added: "There is certainly still much work to be done to advance AI capabilities and we've seen tremendous promise in its national security applications."
Some of the technologies DARPA has recently backed include disease-detecting sensors injected under the skin, a nuclear-powered rocket that could take humans to Mars, and AI-powered unmanned drone swarms.
Scharre did not immediately respond to Insider's requests for comment. "Four Battlegrounds: Power in the Age of Artificial Intelligence" will be released on February 28.