
Stealing fire from the gods: Artificial Intelligence and the evolution of thought

The feeling that we belong to something much larger and deeper than ourselves has long been a common human experience. Palaeontologist and Jesuit priest Teilhard de Chardin wrote about “a noosphere” of cognitive realisation evolving towards an “Omega point” of divine planetary spiritualisation. But it is hard to envisage that ever occurring. It is easier to envisage that we belong to an evolving intelligent power that has entered a momentous posthuman dimension through artificial intelligence.

Some futurists believe we are on the way to realising a posthuman world in which we will live on as cyborgs, or in some new embodiment of intelligent power that will absorb and supersede human intelligence. It is no longer fanciful to foresee a future in which we will have everyday interactions with androids powered by artificial general intelligence. They will look, move, and seem to think and respond like a human person, be skilled in simulating emotional responses realistically, and greatly outperform us in mental activities and manual tasks. We may regard them only as tools or mechanical assistants. But their expression of human-like behaviours may lead us to become attached to them, even to the extent of according them rights. Their design will have to ensure they carry no threat, but will we be able to trust fully that this will remain the case, given their technical superiority? And how far can we trust that the military, malicious groups, and rogue states won’t develop androids trained to kill people and destroy property? We know only too well our human propensity for violent conflict.

It would be ironic if, to gain more power and control over the world, we used our human intelligence to create AI systems and devices which, for all the benefits they bring, end up managing our lives to our detriment, or even controlling us. And irony, as Greek dramatists were well aware, is often a component of fate.

AUKUS allies plan AI power to pinpoint China’s stealth submarines

In a new announcement, AUKUS members have unveiled their plans to use advanced AI to hunt for Chinese submarines.


Image: Took-ranch/Wikimedia Commons.

The announcement came during a joint meeting in Mountain View, California, on Friday, December 1, 2023. Defense leaders from the three nations revealed their plans for “AUKUS Pillar II,” part of the trilateral security arrangement set up in 2021 that aims to “help sustain peace and stability in the Indo-Pacific region.”

Can Palmer Luckey Reinvent the U.S. Defense Industry?

Military tech startup Anduril Industries is shaking up the U.S. defense industry as one of the few privately held technology companies finding success as a Defense Department contractor. What makes the company’s software so unique that it is being used across multiple branches of the U.S. military and in both the Russia-Ukraine War and the Israel-Hamas War?

WSJ explains how this startup is operating in order to disrupt the U.S. defense industry.

0:00 Anduril’s vision
1:04 Palmer Luckey
1:48 Software
2:58 “Moneyball Military”
4:55 America’s defense industry consolidation
5:53 Anduril acquiring other startups

News Explainers.
Some days the high-speed news cycle can bring more questions than answers. WSJ’s news explainers break down the day’s biggest stories into bite-size pieces to help you make sense of the news.

#Military #Tech #WSJ

The Military’s Big Bet on Artificial Intelligence

Number 4 Hamilton Place is a be-columned building in central London, home to the Royal Aeronautical Society and four floors of event space. In May, the Edwardian townhouse hosted a decidedly more modern meeting: defense officials, contractors, and academics from around the world gathered to discuss the future of military air and space technology.

Things soon went awry. At that conference, Tucker Hamilton, chief of AI test and operations for the United States Air Force, seemed to describe a disturbing simulation in which an AI-enabled drone had been tasked with taking down missile sites. But when a human operator started interfering with that objective, he said, the drone killed its operator and cut the communications system.

The Interplanetary War

Space warfare concepts from science fiction often involve war between planets. We will discuss the science of that, as well as war inside a fully colonized solar system or Dyson swarm.


Former Google CEO Eric Schmidt: AI guardrails “aren’t enough”

The guardrails AI companies add to their products to prevent them from causing harm “aren’t enough” to control AI capabilities that could endanger humanity within five to ten years, former Google CEO Eric Schmidt told Axios’ Mike Allen on Tuesday.

The big picture: Interviewed at Axios’ AI+ Summit in Washington, D.C., Schmidt compared the development of AI to the introduction of nuclear weapons at the end of the Second World War.

Hyperwar Ascendant: The Global Race For Autonomous Military Supremacy

In my 2015 exploration with General John R. Allen of the concept of Hyperwar, we recognized the potential of artificial intelligence to irreversibly change the field of battle. Chief among the examples of autonomous systems were drone swarms, which are both a significant threat and a critical military capability. Today, Hyperwar seems to be the operative paradigm accepted by militaries the world over as a de facto reality. Indeed, the observe-orient-decide-act (OODA) loop is collapsing. Greater autonomy is being imbued in all manner of weapon systems and sensors, and work is ongoing to develop systems that further decrease reaction times and increase the mass of autonomous systems employed in conflict. This trend is potently illustrated by the U.S. Replicator initiative and China’s swift advances in automated manufacturing and missile technologies.

The U.S. Replicator Initiative: A Commitment to Autonomous Warfare?

The Pentagon’s “Replicator” initiative is a strategic move to counter adversaries like China by rapidly producing “attritable autonomous systems” across multiple domains. Deputy Secretary of Defense Kathleen Hicks emphasized the need for platforms that are “small, smart, cheap, and many,” with plans to field thousands of such systems within 18 to 24 months. Under this initiative the Department of Defense is developing smaller, more intelligent, and more cost-effective platforms, a move that aligns with the creation of a Hyperwar environment.
