
Evolving AI Debate Shifts to the Battlefield 


The accelerating pace of AI development continues to attract the attention of policy wonks who view the technology as a strategic asset while worrying about the unforeseen consequences of using it in war.

The Washington-based Center for a New American Security is the latest think tank to weigh in on the nettlesome questions of how, where and in what form artificial intelligence should be deployed on the battlefield. “Automation is used heavily in cyber security and defense applications,” the group said in announcing a task force on AI and U.S. national security.

“As AI improves, machines will be capable of handling more sophisticated tasks in more complex environments, sometimes aiding human decision-making and sometimes operating autonomously,” the group said Thursday (March 15).

The center joins other security groups highlighting the promise and perils of AI. Earlier this month, the Center for Strategic and International Studies released a report focusing on the less hyperbolic phrase “machine intelligence.” While emphasizing the strategic importance of the technology, the CSIS study also addressed growing concerns about the military application of machine learning, acknowledging the need to “manage public anxiety.”

The new AI task force is led by Robert Work, former deputy defense secretary, and Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. Work was among a group of senior Pentagon officials during the Obama administration promoting a DoD technology initiative called the Third Offset Strategy designed to field next-generation technologies. The strategy highlighted what Work called U.S. “pacing competitors,” a reference to China and Russia.

China’s national AI strategy, released last August, calls for catching and surpassing the U.S. by 2025.

Of greatest concern to AI critics is the emergence of robotic weapons with varying degrees of autonomy, including scenarios in which drones could, for example, identify and attack targets. Critics note that ethical and safety standards are needed to ensure that humans remain in the control loop.

(Still, simple countermeasures to autonomous weapons may already be available.)

The task force’s research agenda echoes some of the ethical and safety issues highlighted in earlier studies. Among them is the age-old conundrum of using new technologies for good and evil. Hence, the task force will explore the questions, “To what extent are AI technologies inherently dual use?” And, “What will influence the relative applicability of AI to the military and commercial sectors?”

A separate report released in February by AI researchers affiliated with the U.K.-based Future of Humanity Institute was among the first to address the dual-use nature of AI and machine learning. “It is clear that AI will figure prominently in the security landscape of the future, that opportunities for malicious use abound and that more can and should be done” to mitigate those dangers, the report warned.

AI developers must “take the dual-use nature of their work seriously,” the authors warned.

Despite the expanding policy debate over the future direction of AI—a debate that seeks to keep pace with the rapid growth of machine learning technology and the broad availability of development tools—industry executives note that machine learning remains in its infancy.

“This stuff is still essentially magic,” Eric Schmidt, former Google (NASDAQ: GOOGL) executive chairman, noted during a recent AI security summit. “The scientists that are working on it cannot for example explain certain failure modes. We don’t really exactly understand how the learning occurs.”

Among the earliest military applications of AI and machine learning is a Defense Department initiative called Project Maven. Officially known as the Algorithmic Warfare Cross-Functional Team, the effort seeks to accelerate DoD’s integration of big data and machine learning into its intelligence operations. The first computer vision algorithms, focused on parsing full-motion video, were released at the end of 2017.
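Project Maven’s algorithms are not public, but the general technique—running a trained object detector over each frame of full-motion video—can be sketched with open-source tools. The snippet below is a minimal, hypothetical illustration assuming OpenCV for frame capture and a pretrained torchvision detector; the file name and confidence threshold are placeholders, and nothing here reflects the DoD’s actual implementation.

```python
# Hypothetical sketch: frame-by-frame object detection over full-motion video,
# using off-the-shelf tools (OpenCV + a pretrained torchvision detector).
# Not Project Maven's code; an illustration of the general technique only.
import cv2
import torch
import torchvision

# Load a pretrained Faster R-CNN detector (COCO classes) for inference.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("drone_footage.mp4")  # placeholder file name
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR uint8 frames; the detector expects RGB floats in [0, 1].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = detections["scores"] > 0.5  # arbitrary confidence threshold
    print(f"frame {frame_idx}: {int(keep.sum())} objects above 0.5 confidence")
    frame_idx += 1
cap.release()
```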

“These algorithms, at least today, require a great deal of training data,” Schmidt noted. “And when I say a great deal I mean like, millions of entries in the matrices, billions of pieces of data.”
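Schmidt’s “millions of entries in the matrices” is roughly the parameter count of a modern computer vision network. As a point of reference (the Maven models themselves are not public), a standard ResNet-50 image classifier carries about 25 million learned weights, which the short snippet below counts directly.

```python
# Rough illustration of "millions of entries in the matrices":
# count the learned parameters in a standard ResNet-50 image classifier.
import torchvision

model = torchvision.models.resnet50()
n_params = sum(p.numel() for p in model.parameters())
print(f"ResNet-50 parameters: {n_params:,}")  # roughly 25.6 million
```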

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
