ASI Risks: Similar premises, opposite conclusions | Eliezer Yudkowsky vs Mark Miller

A debate/discussion on ASI (artificial superintelligence) between Foresight Senior Fellow Mark S. Miller and MIRI founder Eliezer Yudkowsky. Sharing similar long-term goals, they nevertheless reach opposite conclusions about the best strategy.

What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller discuss their cruxes for disagreement. Eliezer advocates an international treaty that bans anyone from building ASI, while Mark argues that such a pause would make an ASI singleton more likely, which he sees as the greatest danger.

They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.

Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.

Can LLMs figure out the real world? New metric measures AI’s predictive power

In the 17th century, German astronomer Johannes Kepler figured out the laws of motion that made it possible to accurately predict where our solar system’s planets would appear in the sky as they orbit the sun. But it wasn’t until decades later, when Isaac Newton formulated the universal laws of gravitation, that the underlying principles were understood.

Although Newton’s laws were inspired by Kepler’s, they went much further: the same formulas apply to everything from the trajectory of a cannonball to the way the moon’s pull controls the tides on Earth, and even to sending a spacecraft from Earth to the surface of the moon or another planet.
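
To make Newton’s leap concrete, here is a minimal sketch (ours, not the article’s) of how his law of gravitation reproduces Kepler’s orbital periods from first principles; the constants are standard published values.

```python
import math

# Newton's law of gravitation gives the period of a (near-)circular
# orbit directly from first principles: T = 2*pi*sqrt(a^3 / (G*M)).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the sun, kg
AU = 1.496e11        # astronomical unit, m

def orbital_period_days(a_meters: float) -> float:
    """Orbital period around the sun for semi-major axis a."""
    return 2 * math.pi * math.sqrt(a_meters**3 / (G * M_SUN)) / 86400

# Kepler could only fit T^2 proportional to a^3 from observations;
# Newton's formula also predicts the constant of proportionality.
for name, a_au in [("Earth", 1.0), ("Mars", 1.524)]:
    print(f"{name}: {orbital_period_days(a_au * AU):.0f} days")
# Earth: ~365 days, Mars: ~687 days, matching the observed orbits.
```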

Today’s sophisticated artificial intelligence systems have gotten very good at making the kind of specific predictions that resemble Kepler’s orbit predictions. But do they know why these predictions work, with the kind of deep understanding that comes from basic principles like Newton’s laws?

STARSHIP STARPORT NETWORK | Can Rocket Cargo Replace Air & Sea?

💹 Starship’s efficiency could make it highly profitable, “making tons of money like a Tesla.”


🚨 Starship IFT-10’s success has reignited bold ideas for a Starship Starport Global Network.
Could Rocket Cargo really replace today’s air and sea freight? 🚀

In this episode of @overthehorizon, Chris Smedley and Scott Walter join me for a deep dive on the Starport Network vision — offshore launch pads, mobile rigs, and eVTOL last-mile links — and ask if suborbital rocket cargo can outcompete aircraft and ships.

We explore 👇🏽
🚀 How Starship’s scale changes global logistics.
🌍 Why rocket cargo could disrupt ports, airlines, and shipping.
⚡ The “rocket time dilation” effect that multiplies daily throughput (see the sketch after this list).
🛳 From oil rigs to Starports: how offshore hubs could reshape trade.
🔮 First use-cases: military logistics, high-value freight, GCC & island tourism.
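
As a rough illustration of that throughput multiplier, here is a back-of-the-envelope sketch; every figure in it is our own assumption for a ~10,000 km point-to-point route, not a number from the episode.

```python
# Back-of-the-envelope "rocket time dilation" comparison: because a
# suborbital hop takes well under an hour, one vehicle can fly many
# more revenue legs per day than a freighter aircraft can.
# All figures below are illustrative assumptions.

def daily_throughput_tonnes(payload_t: float, cycle_hours: float,
                            ops_hours: float = 24.0) -> float:
    """Tonnes moved per day by one vehicle shuttling a single route."""
    trips_per_day = ops_hours / cycle_hours
    return payload_t * trips_per_day

starship = daily_throughput_tonnes(payload_t=100, cycle_hours=4)    # ~1 h hop + ~3 h turnaround (assumed)
freighter = daily_throughput_tonnes(payload_t=110, cycle_hours=14)  # ~12 h flight + ~2 h turnaround (assumed)

print(f"Starship:      {starship:.0f} t/day")   # ~600 t/day
print(f"747 freighter: {freighter:.0f} t/day")  # ~190 t/day
```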

Starship Starports could spell the end of hubs, choke points, and slow supply chains. But can they really replace air and sea freight?

Similarities between human and AI learning offer intuitive design insights

New research has found similarities in how humans and artificial intelligence integrate two types of learning, offering new insights into how people learn as well as how to develop more intuitive AI tools.

The study is published in the Proceedings of the National Academy of Sciences.

Led by Jake Russin, a postdoctoral research associate at Brown University, the study trained an AI system and found that flexible and incremental learning modes interact much as working memory and long-term memory do in humans.
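
The “flexible vs. incremental” pairing maps naturally onto in-context learning versus weight updates in today’s networks. A toy illustration of the two modes (our sketch, not the paper’s code):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([3.0, -2.0])  # hidden rule the learner must pick up

# Incremental mode (long-term memory analogue): knowledge accumulates
# slowly in the weights via many small gradient updates.
w = np.zeros(2)
for _ in range(500):
    x = rng.normal(size=2)
    err = w @ x - true_w @ x
    w -= 0.05 * err * x            # small weight update per example
print("weights after training:", w.round(2))  # ~[3, -2]

# Flexible mode (working-memory analogue): nothing is written to the
# weights; a handful of "in-context" examples are consulted directly
# at query time, here with a nearest-neighbor lookup.
ctx_x = rng.normal(size=(8, 2))
ctx_y = ctx_x @ true_w
query = rng.normal(size=2)
nearest = np.argmin(np.linalg.norm(ctx_x - query, axis=1))
print("in-context prediction:", ctx_y[nearest].round(2))
```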

Cybercriminals Exploit X’s Grok AI to Bypass Ad Protections and Spread Malware to Millions

Cybersecurity researchers have flagged a new technique that cybercriminals have adopted to bypass social media platform X’s malvertising protections and propagate malicious links using its artificial intelligence (AI) assistant Grok.

The findings were highlighted by Nati Tal, head of Guardio Labs, in a series of posts on X. The technique has been codenamed Grokking.

The approach is designed to get around restrictions that X imposes on Promoted Ads, which limit users to text, images, or videos, and then to amplify the malicious links to a broader audience, attracting hundreds of thousands of impressions through paid promotion.

Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030!

WARNING: AI could end humanity, and we’re completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we’re heading toward global collapse…or even World War III.

Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’.

He explains:
⬛How AI could release a deadly virus.
⬛Why these 5 jobs might be the only ones left.
⬛How superintelligence will dominate humans.
⬛Why ‘superintelligence’ could trigger a global collapse by 2027.
⬛How AI could be worse than nuclear weapons.
⬛Why we’re almost certainly living in a simulation.

00:00 Intro.
02:28 How to Stop AI From Killing Everyone.
04:35 What’s the Probability Something Goes Wrong?
04:57 How Long Have You Been Working on AI Safety?
08:15 What Is AI?
09:54 Prediction for 2027.
11:38 What Jobs Will Actually Exist?
14:27 Can AI Really Take All Jobs?
18:49 What Happens When All Jobs Are Taken?
20:32 Is There a Good Argument Against AI Replacing Humans?
22:04 Prediction for 2030
23:58 What Happens by 2045?
25:37 Will We Just Find New Careers and Ways to Live?
28:51 Is Anything More Important Than AI Safety Right Now?
30:07 Can’t We Just Unplug It?
31:32 Do We Just Go With It?
37:20 What Is Most Likely to Cause Human Extinction?
39:45 No One Knows What’s Going On Inside AI.
41:30 Ads.
42:32 Thoughts on OpenAI and Sam Altman.
46:24 What Will the World Look Like in 2100?
46:56 What Can Be Done About the AI Doom Narrative?
53:55 Should People Be Protesting?
56:10 Are We Living in a Simulation?
1:01:45 How Certain Are You We’re in a Simulation?
1:07:45 Can We Live Forever?
1:12:20 Bitcoin.
1:14:03 What Should I Do Differently After This Conversation?
1:15:07 Are You Religious?
1:17:11 Do These Conversations Make People Feel Good?
1:20:10 What Do Your Strongest Critics Say?
1:21:36 Closing Statements.
1:22:08 If You Had One Button, What Would You Pick?
1:23:36 Are We Moving Toward Mass Unemployment?
1:24:37 Most Important Characteristics.

Follow Dr Roman:
X — https://bit.ly/41C7f70
Google Scholar — https://bit.ly/4gaGE72

You can purchase Dr Roman’s book, ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’, here: https://amzn.to/4g4Jpa5

Researchers pioneer optical generative models, ushering in a new era of sustainable generative AI

In a major leap for artificial intelligence (AI) and photonics, researchers at the University of California, Los Angeles (UCLA) have created optical generative models capable of producing novel images using the physics of light instead of conventional electronic computation.

Published in Nature, the work presents a new paradigm for generative AI that could dramatically reduce energy use while enabling scalable, high-performance content creation.

Generative models, including diffusion models and large language models, form the backbone of today’s AI revolution. These systems can create realistic images, videos, and human-like text, but their rapid growth comes at a steep cost: escalating power demands, large carbon footprints, and increasingly complex hardware requirements. Running such models requires massive computational infrastructure, raising concerns about their long-term sustainability.
