Space Force members will be known as “Guardians” from now on, Vice President Michael R. Pence announced Dec. 18.

“Soldiers, Sailors, Airmen, Marines, and Guardians will be defending our nation for generations to come,” he said at a Dec. 18 White House ceremony celebrating the Space Force’s upcoming birthday.

As the Space Force turns 1 year old on Dec. 20, abandoning the moniker of “Airman” is one of the most prominent moves made so far to distinguish space personnel from the Air Force they came from. An effort to crowdsource options brought in more than 500 responses earlier this year, including “sentinel” and “vanguard.”

Popular media and policy-oriented discussions on the incorporation of artificial intelligence (AI) into nuclear weapons systems frequently focus on matters of launch authority—that is, whether AI, especially machine learning (ML) capabilities, should be incorporated into the decision to use nuclear weapons, thereby reducing the role of human control in the decisionmaking process. This is a future we should avoid. Yet while the extreme case of automating nuclear weapons use carries the highest stakes, and is thus existential to get right, there are many other areas of potential AI adoption into the nuclear enterprise that require assessment. Moreover, as the conventional military moves rapidly to adopt AI tools in a host of mission areas, the overlapping consequences for the nuclear mission space, including nuclear command, control, and communications (NC3), may be underappreciated.

AI may be used in ways that do not directly involve senior decisionmakers or are not immediately recognizable to them. These areas of AI application sit far left of an operational or launch decision and include four priority sectors: security and defense; intelligence activities and indications and warning; modeling and simulation, optimization, and data analytics; and logistics and maintenance. Given the rapid pace of development, even if algorithms are not used to launch nuclear weapons, ML could shape the design of the next-generation ballistic missile or be embedded in the underlying logistics infrastructure. ML vision models may undergird the intelligence process that detects the movement of adversary mobile missile launchers and optimize the tipping and cueing of overhead surveillance assets, even as a human decisionmaker remains firmly in the loop in any ultimate decisions about nuclear use. Understanding and navigating these developments in the context of nuclear deterrence and escalation risks will require the analytical attention of the nuclear community and likely the adoption of risk management approaches, especially where excluding AI is not reasonable or feasible.

One good reason for the rarity of radical designs is the enormous expense of the research. Engineers can learn only so much by running tests on the ground, using computational fluid-flow models and hypersonic wind tunnels, which themselves cost a pretty penny (and simulate only some limited aspects of hypersonic flight). Engineers really need to fly their creations, and usually when they do, they use up the test vehicle. That makes design iteration very costly.

Artificial intelligence helped co-pilot a U-2 “Dragon Lady” spy plane during a test flight Tuesday, the first time artificial intelligence has been used in such a way aboard a US military aircraft.

Mastering artificial intelligence, or "AI," is increasingly seen as critical to the future of warfare, and Air Force officials said Tuesday's training flight represented a major milestone.

“The Air Force flew artificial intelligence as a working aircrew member onboard a military aircraft for the first time, December 15,” the Air Force said in a statement, saying the flight signaled “a major leap forward for national defense in the digital age.”

The Energy Department and National Nuclear Security Administration, which maintains the U.S. nuclear weapons stockpile, have evidence that hackers accessed their networks as part of an extensive espionage operation that has affected at least half a dozen federal agencies, officials directly familiar with the matter said.

On Thursday, DOE and NNSA officials began coordinating notifications about the breach to their congressional oversight bodies after being briefed by Rocky Campione, the chief information officer at DOE.

They found suspicious activity in networks belonging to the Federal Energy Regulatory Commission (FERC), Sandia and Los Alamos national laboratories in New Mexico and Washington, the Office of Secure Transportation at NNSA, and the Richland Field Office of the DOE.

The French military is starting exploratory work on the development of bionic supersoldiers, which officials describe as a necessary part of keeping pace with the rest of the world.

A military ethics committee gave its blessing to begin developing supersoldiers on Tuesday, according to the BBC, balancing the moral implications of augmenting and altering humanity against the desire to innovate and enhance the military's capabilities. With the go-ahead, France joins countries like the U.S., Russia, and China that are reportedly attempting to give their soldiers high-tech upgrades.