
By 2050 we could get “10,000 years of technological progress”

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan? Today’s guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going.

She thinks there’s a meaningful chance we’ll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. Ajeya doesn’t reach this conclusion lightly: she’s had a ringside seat to the growth of all the major AI companies for 10 years — first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.

So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?

Ajeya agrees that humanity has repeatedly used technologies that create new problems to help solve those problems. After all:
• Cars enabled carjackings and drive-by shootings, but also faster police pursuits.
• Microbiology enabled bioweapons, but also faster vaccine development.
• The internet allowed lies to spread faster, but it sped up fact-checking just as much.

But she also thinks this will be a much harder case. In her view, the window between AI automating AI research and the arrival of uncontrollably powerful superintelligence could be quite brief — perhaps a year or less. In that narrow window, we’d need to redirect enormous amounts of AI labour away from making AI smarter and towards alignment research, biodefence, cyberdefence, adapting our political structures, and improving our collective decision-making.

The plan might fail just because the idea is flawed at conception: it does sound a bit crazy to use an AI you don’t trust to make sure that same AI benefits humanity.

Why Cybersecurity Strategies and Frameworks Must Be Recalibrated in the Age of AI and Quantum Threats

#cybersecurity #ai #quantum


Artificial intelligence and quantum computing are no longer hypothetical; they are actively altering cybersecurity, extending attack surfaces, escalating dangers, and eroding existing defenses. We are in a new era of emerging technologies that directly impact cybersecurity requirements.

As a seasoned observer and participant in the cybersecurity domain — through my work, teaching, contributions to Homeland Security Today, and my book “Inside Cyber: How AI, 5G, IoT, and Quantum Computing Will Transform Privacy and Our Security” — I have consistently underscored that technological advancement is outpacing our institutions, policies, and workforce preparedness.

Current frameworks, designed for an era before digital convergence, are increasingly unsuitable. These dual-use technologies act as force multipliers for both defenders and adversaries, and time is of the essence: we must adjust our strategies now.

Photonic processor could streamline 6G wireless signal processing

One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.

“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis says.

When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot and quickly converged to more than 99 percent accuracy using multiple measurements. MAFT-ONN required only about 120 nanoseconds to perform the entire process.
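The single-shot versus multi-shot accuracy gap can be sanity-checked with a simple model. Assuming each measurement is an independent classification that is right 85% of the time, and that shots are combined by majority vote (an assumption — the article doesn’t say how the measurements are aggregated), around nine shots already push accuracy past 99%:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent shots,
    each correct with probability p, picks the right class."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 9):
    print(n, round(majority_vote_accuracy(0.85, n), 4))
# → 0.85 for one shot, ~0.9734 for five, ~0.9944 for nine
```

Real measurements of the same waveform are unlikely to be fully independent, so this is a best case, but it illustrates why a modest number of repeats can close most of the gap.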

Dario Amodei — “We are near the end of the exponential”

Amodei predicts significant advancements in AI capabilities within the next decade, with a profound impact on society, the economy, and individuals, and emphasizes the need for careful governance, equitable distribution of benefits, and responsible development to mitigate risks and maximize benefits.

Questions to inspire discussion.

AI Scaling and Progress.

Q: What are the key factors driving AI progress according to the scaling hypothesis?

A: Compute, data quantity and quality, training duration, and objective functions that can scale massively drive AI progress, per Dario Amodei’s “Big Blob of Compute Hypothesis” from 2017.

Q: Why do AI models trained on broad data distributions perform better?

A: Models like GPT-2 generalize better when trained on a wide variety of internet text rather than narrow datasets like fanfiction, leading to superior performance on diverse tasks.

How SpaceX and XAI Will Build Moonbase Alpha and Mass Drivers

SpaceX, in collaboration with xAI, plans to build a lunar base called Moonbase Alpha using advanced technologies such as mass drivers, solar power, and Starship, aiming to make human activity on the moon visible, affordable, and sustainable.

Questions to inspire discussion.

Launch Infrastructure Economics.

🚀 Q: What launch costs could SpaceX’s moon infrastructure achieve? A: Mature SpaceX moon operations could reduce costs to $10/kg to orbit and $50/kg to moon surface, enabling $5,000 moon trips for people under 100kg (comparable to expensive cruise pricing), as mentioned by Elon Musk.

⚡ Q: How could lunar mass drivers scale satellite deployment? A: Lunar mass drivers using magnetic rails at 5,600 mph could launch 10 billion tons of satellites annually with 2 terawatts of power, based on 2023 San Jose State study updating 1960s-70s mass driver literature.
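The quoted figures can be sanity-checked with back-of-the-envelope kinetic-energy arithmetic. This sketch takes the 5,600 mph velocity, 10-billion-ton annual tonnage, and $50/kg pricing at face value and ignores all losses — an idealization, not a claim about the study’s methodology:

```python
MPH_TO_MS = 0.44704

# Quoted launch velocity, just above lunar escape velocity (~2,380 m/s)
v = 5600 * MPH_TO_MS                     # ≈ 2,503 m/s
ke_per_kg = 0.5 * v**2                   # ≈ 3.13 MJ per kg of payload

annual_mass_kg = 10e9 * 1000             # 10 billion metric tons per year
annual_energy_j = ke_per_kg * annual_mass_kg

seconds_per_year = 365.25 * 24 * 3600
avg_power_w = annual_energy_j / seconds_per_year   # ≈ 1 TW of pure kinetic energy

# Passenger pricing check: $50/kg to the lunar surface × 100 kg person
trip_cost_usd = 50 * 100                 # = $5,000, matching the figure above

print(f"{avg_power_w/1e12:.2f} TW average, ${trip_cost_usd:,} per trip")
```

The pure kinetic-energy requirement works out to roughly 1 TW of continuous power, so the quoted 2 TW budget is consistent with an end-to-end efficiency of about 50%.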

Starship Capabilities.

Brett Adcock: Humanoids Run on Neural Net, Autonomous Manufacturing, and $50 Trillion Market #229

Humanoid robots with full-body autonomy are rapidly advancing and are expected to create a $50 trillion market, transforming industries, the economy, and daily life.

Questions to inspire discussion.

Neural Network Architecture & Control.

🤖 Q: How does Figure 3’s neural network control differ from traditional robotics? A: Figure 3 uses end-to-end neural networks for full-body control, manipulation, and room-scale planning, replacing the previous C++-based control stack entirely, with System Zero being a fully learned reinforcement learning controller running with no code on the robot.

🎯 Q: What enables Figure 3’s high-frequency motor control for complex tasks? A: Palm cameras and onboard inference enable high-frequency torque control of 40+ motors for complex bimanual tasks, replanning, and error recovery in dynamic environments, representing a significant improvement over previous models.

🔄 Q: How does Figure’s data-driven approach create competitive advantage? A: Data accumulation and neural net retraining provides competitive advantage over traditional C++ code, allowing rapid iteration and improvement, with positive transfer observed as diverse knowledge enables emergent generalization with larger pre-training datasets.

🧠 Q: Where is the robot’s compute located and why? A: The brain-like compute unit is in the head for sensors and heat dissipation, while the torso contains the majority of onboard computation, with potential for latex or silicone face for human-like interaction.

JUST RECORDED: Elon Musk Announces MAJOR Company Shakeup

Elon Musk is announcing significant changes and advancements across his companies, primarily focused on developing and integrating artificial intelligence (AI) to drive innovation, productivity, and growth.

Questions to inspire discussion.

Product Development & Market Position.

🚀 Q: How fast did xAI achieve market leadership compared to competitors?

A: xAI reached number one in voice, image, video generation, and forecasting with the Grok 4.20 model in just 2.5 years, outpacing competitors who are 5–20 years old with larger teams and more resources.

📱 Q: What scale did xAI’s everything app reach in one year?

A: In one year, xAI went from nothing to 2M Teslas using Grok, deployed a Grok voice agent API, and built an everything app handling legal questions, slide decks, and puzzles.

SpaceX Starthink: Building Earth’s Planetary Neocortex with Orbital AI

In a bold fusion of SpaceX’s satellite expertise and Tesla’s AI prowess, the Starthink Synthetic Brain emerges as a revolutionary orbital data center.

Proposed in a February 2026 Digital Habitats document, this next-gen satellite leverages the Starlink V3 platform to create a distributed synthetic intelligence wrapping the planet.

Following SpaceX’s FCC filing for up to one million orbital data centers and its acquisition of xAI, Starthink signals humanity’s leap toward a Kardashev II civilization.

As Elon Musk noted in February 2026:

“In 36 months, but probably closer to 30, the most economically compelling place to put AI will be space.”

The Biological Analogy.

Starthink draws from neuroscience:
• Neural Cluster: A single Tesla AI5 chip, processing AI inference at ~250W, like a neuron group.
• Synthetic Brain: One Starthink satellite, a 2.5-tonne self-contained node with 500 neural clusters, solar power, storage, and comms.
• Planetary Neocortex: One million interconnected Brains forming a global mesh intelligence, linked by laser and microwave “synapses.”
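The analogy carries implicit power arithmetic. Taking the quoted per-chip draw, clusters per satellite, and fleet size at face value (and ignoring cooling, communications, and storage overhead):

```python
# Figures quoted in the summary above; overheads are deliberately excluded.
chip_power_w = 250          # one Tesla AI5 chip ("neural cluster")
clusters_per_sat = 500      # clusters per Starthink satellite
fleet_size = 1_000_000      # proposed number of orbital data centers

sat_power_kw = chip_power_w * clusters_per_sat / 1e3    # per-satellite draw
fleet_power_gw = sat_power_kw * fleet_size / 1e6        # whole-constellation draw

print(f"{sat_power_kw:.0f} kW per satellite, {fleet_power_gw:.0f} GW fleet-wide")
# → 125 kW per satellite, 125 GW fleet-wide
```

That 125 GW fleet-wide figure puts a concrete number on the "planetary neocortex" framing: the constellation's compute power draw alone would be on the scale of a large nation's electrical grid.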
