
The Race to Harness Quantum Computing’s Mind-Bending Power | The Future With Hannah Fry

Get “The AI Career Survival Guide” here: https://technomics.gumroad.com/l/ai-survival-guide.
What happens when human labor becomes mathematically obsolete? For thousands of years, the global economy has run on the biological engine of human workers. But a new era has arrived: The Physical Singularity.
In this video, we break down the brutal thermodynamics of the labor inversion, revealing how major AI companies are mass-producing humanoid robots that operate for just 57 cents an hour. We expose the massive industry shift from digital generation to “World Models,” and how China’s manufacturing miracle is driving hardware costs to zero. With 10 billion robots projected by the 2040s, experts like Geoffrey Hinton are warning of a hive-mind “alien intelligence.” The digital era is over. The physical agent era has begun.
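The 57¢ figure is easy to sanity-check with back-of-the-envelope amortization. A minimal sketch, assuming a five-year lifetime, a 20-hour daily duty cycle, roughly 0.5 kW of power draw, and $0.12/kWh electricity; the $16,000 hardware price is the Unitree G1 figure linked below, and every other number here is an illustrative assumption:

```python
# Hedged back-of-the-envelope: amortized hourly cost of a humanoid robot.
# Hardware price is the Unitree G1 figure cited in the sources; lifetime,
# utilization, power draw, and electricity price are illustrative assumptions.

def hourly_cost(hardware_usd, lifetime_years, hours_per_day,
                power_kw, electricity_usd_per_kwh):
    total_hours = lifetime_years * 365 * hours_per_day
    amortized = hardware_usd / total_hours        # capital cost per hour
    energy = power_kw * electricity_usd_per_kwh   # running cost per hour
    return amortized + energy

# 5-year life, 20 h/day, ~0.5 kW, $0.12/kWh (all assumed)
cost = hourly_cost(16_000, 5, 20, 0.5, 0.12)
print(f"${cost:.2f}/hour")  # lands in the same ballpark as the 57¢ claim
```

Under these assumptions the amortized hardware cost alone is about $0.44/hour, so the cited 57¢ figure is plausible as an order-of-magnitude claim, though it is highly sensitive to the assumed lifetime and utilization.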
Welcome to Technomics. If you want to stay ahead of the curve and understand the real impact of the AI revolution, hit that subscribe button.
Sources & Research Links:
The 57¢ / Hour Labor Inversion Math: https://www.ark-invest.com/articles/valuation-models/ark-pub…oid-robots.
Unitree G1 Official $16,000 Pricing: https://www.unitree.com/g1/
China’s 2024 Robotics Dominance (IFR Report): https://ifr.org/ifr-press-releases/news/china-dominates-industrial-robot-market.
Elon Musk’s 10 Billion Robot Prediction: https://www.youtube.com/watch?v=ODsjGOGX_oM
Geoffrey Hinton on AI Hive Mind (“Immortality, but it’s not for us”): https://www.youtube.com/watch?v=qpoRO378qRY
Geordie Rose on Alien Intelligence (“The same way you don’t care about an ant”): https://www.youtube.com/watch?v=1pd4i2YlGmc.
DeepSeek AI Cost Efficiency Breakthroughs: https://www.deepseek.com/
Timestamps:
00:00 — The 57¢ Workforce & The Great Deception.
02:48 — The Math of the Labor Inversion.
05:01 — Why OpenAI Killed Sora (World Models)
09:16 — The Manufacturing Miracle: China’s Hardware Collapse.
12:53 — 10 Billion Robots & Alien Intelligence.
15:58 — How to Survive the Singularity.
Disclaimer:
The content in this video is for educational and informational purposes only and does not constitute financial or investment advice. The views and opinions expressed in this video are based on current research and industry trends, which are subject to rapid change. We do not guarantee the accuracy or completeness of the projections discussed. Copyright Disclaimer under section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, education, and research.
#PhysicalSingularity #HumanoidRobots #ArtificialIntelligence #OpenAI #FutureOfWork #TechTrends


Joscha Bach & Anders Sandberg — AI, Consciousness and the Cyborg Leviathan

Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!

0:00 Intro.
0:37 What is consciousness? Phenomenology — functionalism & panpsychism.
1:54 Causal boundaries — the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity.
3:20 Minds are not states — they are processes. We don’t see causal filtering in tables.
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism.
9:49 Methodological humility about armchair philosophy of mind.
12:41 Putnam-style Brain-in-a-vat — and why standard objections to AI minds fall flat.
16:37 Is sentience required (or desired) for not just moral competence in AI, but moral motivation as well?
22:35 Why stepping outside yourself is powerful — seeing.
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What’s still missing.
28:16 AI, hybrid minds, and the limits of human augmentation.
32:32 Can minds be extended — in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough.
39:41 Why AI is so data-hungry — and why better algorithms must exist.
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception.
51:05 What comes after copilots: agent teams, multimodality and new AI workflows.
55:32 Can AI help us discover new forms of taste and aesthetics.
59:49 Using AI to learn art history and invent a transhumanist aesthetic.
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create.
1:08:10 What happens when AI starts shaping human relationships.
1:11:18 Why feeling in control can matter more than being right.
1:12:58 Why intelligence without wisdom is very dangerous.
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere.
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem.
1:29:47 Can AI become more moral than us (humans)? and if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved.
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries.
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan.
1:59:36 Will superintelligences converge into a cosmic singleton?

Post: https://www.scifuture.org/minds-in-th…

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!
Buy me a coffee? https://buymeacoffee.com/tech101z.

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P

Kind regards,
Adam Ford.
Science, Technology & the Future — #SciFuture — http://scifuture.org

Ben Goertzel responds

As part of Future Day 2026, we hosted a conversation between two of the most provocative minds in AGI – Ben Goertzel and Hugo de Garis (with Adam Ford as moderator/provocateur) – to tackle the ultimate existential question: Is an Artilect War inevitable, and should humanity accept becoming the “number two” species?

The discussion will build on last year’s conversation between Ben and Hugo on AGI and the Singularity.

It will explore the idea of human transcendence. If we can’t beat them, do we join them?

Will humanity transcend into a Jupiter brain quectotech utility fog?

Is the Artilect War the inevitable conclusion of biological intelligence? Or can we find a path toward existing in a universe that still finds us aesthetically pleasing?

0:00 Intro.

Cyborgian: GRAB Your AI Career Survival Guide from here: 👉

Google’s top futurist Ray Kurzweil, renowned for his remarkably accurate tech predictions, has just moved his singularity date forward! He now claims that by 2039, humans will begin merging with machines, potentially transforming what it means to be human. In this video, we dive into Kurzweil’s latest forecast, exploring why he’s shaved six years off his original prediction and whether the age of human-machine hybrids is just 14 years away.

Known for his 86% accuracy rate in predicting the future, Kurzweil’s past successes, including the mainstream internet, wireless technology, and AI that understands speech, give weight to his bold claims. But is the idea of computers matching human brains by 2029 and a millionfold intelligence boost by 2045 truly within reach?

We’ll break down:

00:00 — 01:37 Introduction
01:37 — 02:41 THE PROPHET WITH AN 86% SUCCESS RATE
02:41 — 04:06 THE ACCELERATION NOBODY SAW COMING
04:06 — 06:36 A GENTLE SINGULARITY
06:36 — 08:27 THE RESISTANCE
08:27 — 10:28 THE WORLD FORWARD
10:28 — 11:04 CONCLUSION

🔗 Links & Sources:

The Singularity Needs a Navigator

In 2013, physicist Alex Wissner-Gross published a single equation for intelligence in Physical Review Letters:

F = T ∇S_τ

The force of an intelligent system equals its temperature — computational capacity, raw horsepower — multiplied by the gradient of its future option-space. Intelligence is not a mysterious property of carbon-based brains.

It is a physical force: the tendency of any sufficiently energetic system to maximize the number of future states accessible to it.

The equation was elegant. Correct. And incomplete.

It describes the force. It does not describe the geometry of the space through which that force navigates.

A gradient without a metric is a direction without distance — it tells the system where to push but not what distortion it will encounter on the way there.
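Wissner-Gross’s idea can be made concrete in a toy setting. A minimal sketch, not from the paper: on a 1-D corridor with walls, take S as the log-count of states reachable within a horizon τ, and the force as temperature times a finite-difference gradient of S. The corridor, horizon, and all parameters here are illustrative assumptions:

```python
import math

def reachable_states(x, tau, lo=0, hi=10):
    # Breadth-first count of distinct positions reachable within tau steps
    # on a 1-D corridor with walls at lo and hi.
    frontier = {x}
    seen = {x}
    for _ in range(tau):
        nxt = set()
        for p in frontier:
            for q in (p - 1, p + 1):
                if lo <= q <= hi and q not in seen:
                    nxt.add(q)
        seen |= nxt
        frontier = nxt
    return len(seen)

def causal_entropic_force(x, tau=4, temperature=1.0):
    # F = T * dS/dx, with S = log(number of reachable futures) and a
    # central-difference approximation to the spatial gradient.
    s_plus = math.log(reachable_states(x + 1, tau))
    s_minus = math.log(reachable_states(x - 1, tau))
    return temperature * (s_plus - s_minus) / 2.0
```

In this sketch the force is positive near the left wall and negative near the right one, pushing the agent toward the middle of the corridor, where the most futures remain open; that is exactly the “maximize future option-space” behavior the equation describes.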

We spent three years building the geometry. We tested it across 69 billion simulations. What we found changes everything.

The Missing Geometry — From Force to Navigation



What Happens When Quantum-AI Knows TOO MUCH?

Let’s unravel what happens when AI merges with quantum, and starts knowing EVERYTHING ♾️ Go to https://piavpn.com/beeyondideas to get 83% off from our sponsor Private Internet Access with 4 months free!

Want to support our production? Feel free to join our membership at https://www.youtube.com/BeeyondIdeas/join.

Special thanks to our beloved YouTube members this month: Powlin Manuel, Saïd Kadi, Chenxi, Lord, Sudhir Paranjape, Nate Lachae, Alison Rewell, Thomas Lapins, Ahmad Salahudin, Antonio Ferriol Colombram, Anton Nicolas Burger 🚀🚀🚀

Experts featured in this video include Demis Hassabis, Tristan Harris, Aza Raskin, Elon Musk, David Deutsch, Michio Kaku, Brian Greene and Nick Bostrom.

Chapter:
0:00 A dangerous truth?
1:29 AI advancement.
3:46 AI pretending not to know.
7:29 Interactive tutoring.
9:37 That’s it from our sponsor!
10:21 The merging of QC and AI
12:03 IBM 100,000 qubits.
14:34 AI wipes out humanity?
16:05 Google Willow.
17:06 The misuse of AI and QC
18:22 Singularity and Turing test.
22:51 Reverse Turing test.
29:39 Quantum-AI consequences.
32:25 The double slit experiment.
36:15 Quantum multiverse.
41:05 Computing history.
46:49 AGI timeline.
51:45 Philosophical consequence.

#AI #quantumcomputing #singularity



Andrew Yang: UBI Before UHI

Solving Job Loss, and the Future of Work

Andrew Yang advocates for the implementation of Universal Basic Income (UBI) as a necessary solution to address job loss, income inequality, and societal unrest caused by technological advancements and AI-driven changes in the economy.

Questions to inspire discussion.

Universal Basic Income Implementation.

🔹 Q: What UBI amount should be set to provide an effective safety net?

A: UBI should be set at twice the poverty level, around $25,000 per person per year, providing enough for survival but not happiness to maintain work incentives while protecting against economic collapse.

🔹 Q: How can UBI be funded without government action initially?

A: Well-resourced tech billionaires could fund UBI directly to local communities to keep the middle class afloat during AI-driven changes, potentially catalyzing further philanthropy and government action.
