
STARSHIP STARPORT NETWORK | Can Rocket Cargo Replace Air & Sea?

💹 Starship’s efficiency could make it highly profitable, “making tons of money like a Tesla.”


🚨 Starship IFT-10’s success has reignited bold ideas for a Starship Starport Global Network.
Could Rocket Cargo really replace today’s air and sea freight? 🚀

In this episode of @overthehorizon, Chris Smedley and Scott Walter join me for a deep dive on the Starport Network vision — offshore launch pads, mobile rigs, and eVTOL last-mile links — and ask if suborbital rocket cargo can outcompete aircraft and ships.

We explore 👇🏽
🚀 How Starship’s scale changes global logistics.
🌍 Why rocket cargo could disrupt ports, airlines, and shipping.
⚡ The “rocket time dilation” effect that multiplies daily throughput (rough numbers sketched below).
🛳 From oil rigs to Starports: how offshore hubs could reshape trade.
🔮 First use-cases: military logistics, high-value freight, GCC & island tourism.

Starship Starports may be the end of hubs, choke points, and slow supply chains. But can they really replace air and sea?
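
To put the “rocket time dilation” point in rough numbers, here is a back-of-the-envelope sketch. The transit and turnaround times below are illustrative assumptions, not figures from the episode: the idea is simply that a vehicle whose round trip takes a few hours can complete several deliveries in the time a long-haul freighter completes one.

```python
# Back-of-the-envelope sketch of the "rocket time dilation" throughput claim.
# All numbers below are illustrative assumptions, not figures from the episode.

HOURS_PER_DAY = 24

def trips_per_day(transit_hours, turnaround_hours):
    """How many one-way deliveries one vehicle can complete in a day."""
    return HOURS_PER_DAY // (transit_hours + turnaround_hours)

# Assumed profiles: suborbital point-to-point hop vs. long-haul air freighter.
rocket_trips = trips_per_day(transit_hours=1, turnaround_hours=3)      # ~1 h hop, ~3 h turnaround
freighter_trips = trips_per_day(transit_hours=12, turnaround_hours=4)  # ~12 h flight, ~4 h turnaround

print(f"rocket deliveries/day:    {rocket_trips}")     # 6
print(f"freighter deliveries/day: {freighter_trips}")  # 1
print(f"throughput multiple:      {rocket_trips / freighter_trips:.0f}x")
```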

Similarities between human and AI learning offer intuitive design insights

New research has found similarities in how humans and artificial intelligence integrate two types of learning, offering new insights about how people learn as well as how to develop more intuitive AI tools.

The study is published in the Proceedings of the National Academy of Sciences.

Led by Jake Russin, a postdoctoral research associate at Brown University, the study trained an AI system and found that flexible and incremental learning modes interact in much the same way as working memory and long-term memory in humans.
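
As a toy contrast (my own illustrative sketch, not the paper's model): incremental learning stores knowledge by slowly updating a model's weights, analogous to long-term memory, while flexible learning answers from examples held in the current context without changing any weights, analogous to working memory.

```python
import numpy as np

# "Incremental" learning: slow, weight-based updates, analogous to long-term memory.
def incremental_update(w, x, y, lr=0.1):
    # One gradient step on squared error; the knowledge ends up in the weights.
    err = w @ x - y
    return w - lr * err * x

# "Flexible" learning: answer from examples currently "in mind", analogous to
# working memory; the weights never change.
def flexible_predict(context_xs, context_ys, x_query):
    # Nearest-neighbour lookup over the in-context examples.
    dists = [np.linalg.norm(cx - x_query) for cx in context_xs]
    return context_ys[int(np.argmin(dists))]

# The same input-output pair handled both ways.
x, y = np.array([1.0, 2.0]), 3.0
w = incremental_update(np.zeros(2), x, y)     # knowledge stored in w
print("weights after incremental step:", w)
print("flexible answer from context:  ", flexible_predict([x], [y], np.array([1.1, 2.1])))
```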

Cybercriminals Exploit X’s Grok AI to Bypass Ad Protections and Spread Malware to Millions

Cybersecurity researchers have flagged a new technique that cybercriminals have adopted to bypass social media platform X’s malvertising protections and propagate malicious links using its artificial intelligence (AI) assistant Grok.

The findings were highlighted by Nati Tal, head of Guardio Labs, in a series of posts on X. The technique has been codenamed Grokking.

The approach is designed to get around restrictions imposed by X on Promoted Ads, which allow users to include only text, images, or videos, and to then amplify the malicious links to a broader audience, attracting hundreds of thousands of impressions through paid promotion.

Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030!

WARNING: AI could end humanity, and we’re completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we’re heading toward global collapse…or even World War III.

Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’.

He explains:
⬛How AI could release a deadly virus.
⬛Why these 5 jobs might be the only ones left.
⬛How superintelligence will dominate humans.
⬛Why ‘superintelligence’ could trigger a global collapse by 2027.
⬛How AI could be worse than nuclear weapons.
⬛Why we’re almost certainly living in a simulation.

00:00 Intro.
02:28 How to Stop AI From Killing Everyone.
04:35 What’s the Probability Something Goes Wrong?
04:57 How Long Have You Been Working on AI Safety?
08:15 What Is AI?
09:54 Prediction for 2027
11:38 What Jobs Will Actually Exist?
14:27 Can AI Really Take All Jobs?
18:49 What Happens When All Jobs Are Taken?
20:32 Is There a Good Argument Against AI Replacing Humans?
22:04 Prediction for 2030
23:58 What Happens by 2045?
25:37 Will We Just Find New Careers and Ways to Live?
28:51 Is Anything More Important Than AI Safety Right Now?
30:07 Can’t We Just Unplug It?
31:32 Do We Just Go With It?
37:20 What Is Most Likely to Cause Human Extinction?
39:45 No One Knows What’s Going On Inside AI
41:30 Ads.
42:32 Thoughts on OpenAI and Sam Altman.
46:24 What Will the World Look Like in 2100?
46:56 What Can Be Done About the AI Doom Narrative?
53:55 Should People Be Protesting?
56:10 Are We Living in a Simulation?
1:01:45 How Certain Are You We’re in a Simulation?
1:07:45 Can We Live Forever?
1:12:20 Bitcoin.
1:14:03 What Should I Do Differently After This Conversation?
1:15:07 Are You Religious?
1:17:11 Do These Conversations Make People Feel Good?
1:20:10 What Do Your Strongest Critics Say?
1:21:36 Closing Statements.
1:22:08 If You Had One Button, What Would You Pick?
1:23:36 Are We Moving Toward Mass Unemployment?
1:24:37 Most Important Characteristics.

Follow Dr Roman:
X — https://bit.ly/41C7f70
Google Scholar — https://bit.ly/4gaGE72

You can purchase Dr Roman’s book, ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’, here: https://amzn.to/4g4Jpa5

Researchers pioneer optical generative models, ushering in a new era of sustainable generative AI

In a major leap for artificial intelligence (AI) and photonics, researchers at the University of California, Los Angeles (UCLA) have created optical generative models capable of producing novel images using the physics of light instead of conventional electronic computation.

Published in Nature, the work presents a new paradigm for generative AI that could dramatically reduce energy use while enabling scalable, high-performance content creation.

Generative models, including diffusion models and large language models, form the backbone of today’s AI revolution. These systems can create realistic images, videos, and human-like text, but their rapid growth comes at a steep cost: escalating power demands, large carbon footprints, and increasingly complex hardware requirements. Running such models requires massive computational infrastructure, raising concerns about their long-term sustainability.
