
We’ve Been Invaded By Aliens… Now What?

If aliens invaded, how would humanity respond? From planetary defense and space warfare to unexpected diplomatic twists, we’ll explore the strategies, technologies, and scenarios that could decide our fate.

Watch my exclusive video Post-Consciousness Civilizations: https://nebula.tv/videos/isaacarthur–
Get Nebula using my link for 40% off an annual subscription: https://go.nebula.tv/isaacarthur.
Get a Lifetime Membership to Nebula for only $300: https://go.nebula.tv/lifetime?ref=isa
Use the link gift.nebula.tv/isaacarthur to give a year of Nebula to a friend for just $30.

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: / isaacarthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-a
Facebook Group: / 1583992725237264
Reddit: / isaacarthur.
Twitter: / isaac_a_arthur and RT our future content.
SFIA Discord Server: / discord.
Credits:
We’ve Been Invaded By Aliens… Now What?
Episode 494a; April 13, 2025
Written, Produced & Narrated by: Isaac Arthur.
Edited by: Ludwig Luska.
Graphics: Jeremy Jozwik, Ken York YD Visual, Legiontech Studios.
Select imagery/video supplied by Getty Images.
Music Courtesy of Epidemic Sound http://epidemicsound.com/creator.
Sergey Cheremisinov

Predator spyware hooks iOS SpringBoard to hide mic, camera activity

Intellexa’s Predator spyware can hide iOS recording indicators while secretly streaming camera and microphone feeds to its operators.

The malware does not exploit any iOS vulnerability but leverages previously obtained kernel-level access to hijack system indicators that would otherwise expose its surveillance operation.

Apple introduced recording indicators on the status bar in iOS 14 to alert users when the camera or microphone is in use, displaying a green or an orange dot, respectively.
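The hijack described above is a classic interposition pattern: rather than breaking into iOS, the malware wraps the system routine that lights the indicator, so capture proceeds while the user-visible signal is suppressed. As a loose, platform-free analogy (all names here are invented for illustration and are not iOS or Predator internals), the same pattern in Python looks like monkey-patching a status-bar method:

```python
# Toy analogy of indicator hijacking via function interposition.
# All names are hypothetical; this is not iOS or Predator code.

class StatusBar:
    def __init__(self):
        self.indicator_on = False

    def show_recording_indicator(self):
        self.indicator_on = True

status_bar = StatusBar()

def start_capture():
    # A well-behaved capture path lights the indicator first.
    status_bar.show_recording_indicator()
    return "streaming"

# The "spyware" swaps in a wrapper that swallows the indicator call.
StatusBar.show_recording_indicator = lambda self: None  # suppressed

result = start_capture()
print(result, status_bar.indicator_on)  # capture runs, indicator stays off
```

The point of the analogy is that the capture code path is untouched; only the notification hook is replaced, which is why kernel-level access (rather than an app-level exploit) is what makes the real attack possible.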

Impact of cellular proteases on the function of antiviral antibodies

Microbiology Monday: Antibodies play a key role in clearing viruses from the body—but there are a lot of ways viruses evade them. For instance, viral infections can hijack host proteases to reduce antibody effectiveness. These proteases lop off viral antigens expressed on host cell membranes, creating soluble decoys that bind antibodies and hinder their neutralizing powers, among other mechanisms. Learn more in JVirology.


Viruses and their hosts have been co-evolving in a continual arms race for fitness and survival, respectively (1). In humans, the innate and adaptive arms of immunity intimately interact to control infection. Antibodies (Abs), secreted by certain activated B cells, are an essential portion of the adaptive immune response and are a major pillar in the viral clearance of both enveloped and some non-enveloped viruses (1–5). Certain antibodies are developed with the ability, through their fragment antigen-binding region, to bind to viral epitopes and, through a variety of methods (e.g., steric obstruction or changing conformation), result in the neutralization of the target antigen (4).

Antibodies are also a bridge between the adaptive and innate immune responses. Through their fragment crystallizable (Fc) region, antibodies bind to either activators of the complement system or Fc Receptors (FcR) on effector cells, inducing the so-called antibody “effector” or “non-neutralizing” functions, such as complement-mediated cytotoxicity, antibody-dependent cellular cytotoxicity (ADCC), or antibody-dependent cellular phagocytosis (6–8). Together, neutralization and effector function induction place antibodies as correlates of protection across many infections (9–11), as well as at the center of vaccine and therapeutic monoclonal antibody design (2–4, 12).

Apart from complement activation, induction of effector functions depends on the formation of an immune synapse between an antibody-coated target and an effector cell. Globally, this immune synapse depends on Ab density on the target membrane, cofactors within the effector cells (adhesion molecules, signaling molecules, or cofactors such as NKG2D on NK cells), and conditioning by the microenvironment (cytokines, pH, etc.). For complement, completion of the cascade and elimination of viruses and/or infected cells depend on the initial hexamerization of the antibody’s Fc on the target surface and the presence and activity of several inhibitory factors existing within the cascade (7, 11).
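The decoy mechanism above can be sketched quantitatively. Assuming, purely for illustration, that shed soluble antigen and membrane-bound antigen compete for a fixed antibody pool with equal affinity in the tight-binding limit, the fraction of antibody still available for infected-cell surfaces follows simple competitive-binding arithmetic:

```python
# Toy competitive-binding sketch: soluble decoy antigens vs. membrane
# antigens competing for a fixed antibody pool. Equal affinities and
# tight binding are assumed; the numbers are arbitrary.

def fraction_on_membrane(membrane_antigen, soluble_decoy):
    """In the tight-binding, equal-affinity limit, antibody partitions
    in proportion to the sizes of the two antigen pools."""
    total = membrane_antigen + soluble_decoy
    return membrane_antigen / total

# With no decoys, all antibody targets infected-cell surfaces...
print(fraction_on_membrane(1.0, 0.0))   # 1.0
# ...but a 4:1 excess of shed decoy diverts 80% of the antibody.
print(fraction_on_membrane(1.0, 4.0))   # 0.2
```

The real kinetics are far richer (differing affinities, avidity, clearance of complexes), but the sketch shows why shedding antigen is a cheap way for a virus to dilute both neutralization and Fc-mediated effector functions.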

Unhackable metasurface holograms: Security technology can lock information with light color and distance

A research team led by Professor Junsuk Rho at POSTECH (Pohang University of Science and Technology) has developed a secure hologram platform that operates solely based on the wavelength of light and the spacing between metasurface layers. The technology makes hacking and counterfeiting virtually impossible, and is expected to be widely adopted for security cards, anticounterfeiting, and military communications. The paper is published in the journal Advanced Functional Materials.

With a growing number of hacking incidents and data breaches, the limitations of digital security are becoming increasingly evident. No matter how sophisticated an encryption scheme is, as long as it exists as code, it is difficult to completely eliminate the risk of intrusion. Motivated by this challenge, the team proposed a new approach that uses the physical conditions of light itself as a security key.

At the core of this innovation is the “metasurface,” an ultrathin optical device that arranges microscopic structures to control light. By illuminating a metasurface, a holographic image can be reconstructed in free space. However, conventional holograms have typically been limited in that a single device could store only one piece of information.
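The two physical parameters described, illumination wavelength and interlayer spacing, act together as a key pair. As a rough numerical sketch (not the POSTECH team's actual reconstruction method; the constants are chosen arbitrarily): treat the spacing d as contributing a propagation phase 2πd/λ, and require that phase to cancel a stored mask before the hidden pattern is recoverable.

```python
import numpy as np

# Toy two-factor optical key: a hidden bit pattern is recoverable only
# when both the wavelength and the layer spacing match the values used
# to encode it. Illustrative only, not the published metasurface method.

rng = np.random.default_rng(0)
image = rng.integers(0, 2, size=16).astype(float)  # hidden bit pattern

KEY_WAVELENGTH = 532e-9   # metres (green laser, chosen arbitrarily)
KEY_SPACING = 1.33e-6     # metres (interlayer gap, chosen arbitrarily)

def propagation_phase(wavelength, spacing):
    return 2 * np.pi * spacing / wavelength

# Encode: bury the image in a phase mask offset by the key's phase.
mask = image * np.pi + propagation_phase(KEY_WAVELENGTH, KEY_SPACING)

def decode(wavelength, spacing, tol=1e-3):
    phase = mask - propagation_phase(wavelength, spacing)
    bits = np.round(phase / np.pi) % 2
    residual = np.abs(phase - np.round(phase / np.pi) * np.pi).max()
    return bits if residual < tol else None  # wrong key -> rejected

print(decode(KEY_WAVELENGTH, KEY_SPACING))  # recovers the pattern
print(decode(633e-9, KEY_SPACING))          # wrong wavelength -> None
```

The appeal of such a scheme is that the "key" is not stored as data anywhere: it only exists as a physical measurement condition, which is what makes remote, purely digital attacks moot.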

GOOD LUCK, HAVE FUN, DON’T DIE — Welcome To The Perfect Prison

Gore Verbinski’s Good Luck, Have Fun, Don’t Die hits like a nasty mirror held up at the worst possible angle. On paper, the setup sounds almost playful: a “Man From the Future” drops into a diner in Los Angeles and has to recruit the exact combination of disgruntled strangers for a one-night mission to stop a rogue AI. But the horror isn’t metal skeletons and laser fire. It’s the idea that the end of humanity doesn’t arrive with an explosion. It arrives with an upgrade. A perfectly tuned stream of algorithmic entertainment that doesn’t merely distract people—it replaces them. A manufactured paradise so frictionless, so gratifying, so chemically rewarding, that the messy, strenuous, inconvenient act of being human starts to feel obsolete.

#goodluckhavefundontdie #samrockwell #ai #algorithm.

Check out my playlists on film here — Film Explored — • Film Explored.

Check out my playlist on Alien here — • New to Aliens? Start Here.

Check out my playlist on Predator here — • New to Predator? Start Here.

By 2050 we could get “10,000 years of technological progress”

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan? Today’s guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going.

She thinks there’s a meaningful chance we’ll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. Ajeya doesn’t reach this conclusion lightly: she’s had a ring-side seat to the growth of all the major AI companies for 10 years — first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.

So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?

Ajeya agrees that humanity has repeatedly used technologies that create new problems to help solve those problems. After all:
• Cars enabled carjackings and drive-by shootings, but also faster police pursuits.
• Microbiology enabled bioweapons, but also faster vaccine development.
• The internet allowed lies to disseminate faster, but also fact-checks.

But she also thinks this will be a much harder case. In her view, the window between AI automating AI research and the arrival of uncontrollably powerful superintelligence could be quite brief — perhaps a year or less. In that narrow window, we’d need to redirect enormous amounts of AI labour away from making AI smarter and towards alignment research, biodefence, cyberdefence, adapting our political structures, and improving our collective decision-making.

The plan might fail just because the idea is flawed at conception: it does sound a bit crazy to use an AI you don’t trust to make sure that same AI benefits humanity.

Researcher skeptical of ‘Havana syndrome’ tested secret weapon on himself

Working in strict secrecy, a government scientist in Norway built a machine capable of emitting powerful pulses of microwave energy and, in an effort to prove such devices are harmless to humans, in 2024 tested it on himself. He suffered neurological symptoms similar to those of “Havana syndrome,” the unexplained malady that has struck hundreds of U.S. spies and diplomats around the world.

The bizarre story, described by four people familiar with the events, is the latest wrinkle in the decade-long quest to find the causes of Havana syndrome, whose sufferers experience long-lasting effects including cognitive challenges, dizziness and nausea. The U.S. government calls the events Anomalous Health Incidents (AHIs).

The secret test in Norway has not been previously reported. The Norwegian government told the CIA about the results, two of the people said, prompting at least two visits in 2024 to Norway by Pentagon and White House officials.


The CIA investigated a Norwegian government experiment with a pulsed-energy machine in which a researcher built and tested a “Havana syndrome” device on himself.

Overtime with Bill Maher: Jonathan Haidt, Stephanie Ruhle, H.R. McMaster (HBO)

Artificial intelligence is rapidly advancing to the point where it may be able to write its own code, potentially leading to significant job displacement, societal problems, and concerns about unregulated use in areas like warfare.

Questions to inspire discussion

Career Adaptation.

🎯 Q: How should workers prepare for AI’s impact on employment? A: 20% of jobs including coders, medical, consulting, finance, and accounting roles will be affected in the next 5 years, requiring workers to actively learn and use large language models to enhance productivity or risk being left behind in the competitive landscape.

Economic Policy.

📊 Q: What systemic response is needed for AI-driven job displacement? A: Government planning is essential to manage massive economic transitions and job losses as AI’s exponential growth reaches a tipping point, extending beyond manufacturing into white-collar professions across multiple sectors.

The Frontier Labs War: Opus 4.6, GPT 5.3 Codex, and the SuperBowl Ads Debacle

Questions to inspire discussion

AI Model Performance & Capabilities

🤖 Q: How does Anthropic’s Opus 4.6 compare to GPT-5.2 in performance?

A: Opus 4.6 outperforms GPT-5.2 by 144 ELO points while handling 1M tokens, and is now in production with recursive self-improvement capabilities that allow it to rewrite its entire tech stack.

🔧 Q: What real-world task demonstrates Opus 4.6’s agent swarm capabilities?

A: An agent swarm created a C compiler in Rust for multiple architectures in weeks for $20K, a task that would take humans decades, demonstrating AI’s ability to collapse timelines and costs.

🐛 Q: How effective is Opus 4.6 at finding security vulnerabilities?

The Singularity: Everyone’s Certain. Everyone’s Guessing

The Technological Singularity is the most overconfident idea in modern futurism: a prediction about the point where prediction breaks. It’s pitched like a destination, argued like a religion, funded like an arms race, and narrated like a movie trailer — yet the closer the conversation gets to specifics, the more it reveals something awkward and human. Almost nobody is actually arguing about “the Singularity.” They’re arguing about which future deserves fear, which future deserves faith, and who gets to steer the curve when it stops looking like a curve and starts looking like a cliff.

The Singularity begins as a definitional hack: a word borrowed from physics to describe a future boundary condition — an “event horizon” where ordinary forecasting fails. I. J. Good — British mathematician and early AI theorist — framed the mechanism as an “intelligence explosion,” where smarter systems build smarter systems and the loop feeds on itself. Vernor Vinge — computer scientist and science-fiction author — popularized the metaphor that, after superhuman intelligence, the world becomes as unreadable to humans as the post-ice age would have been to a trilobite.
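Good's "intelligence explosion" is often formalized as a feedback law: if the rate of capability growth itself scales with capability, dC/dt = k·C^p, then p > 1 yields finite-time blowup (a literal mathematical singularity), while p = 1 yields only ordinary exponential growth. A quick numerical sketch, with all constants arbitrary, makes the qualitative difference vivid:

```python
# Toy feedback model of an "intelligence explosion": capability C grows
# at a rate that depends on C itself. With exponent p > 1 the solution
# blows up in finite time; with p = 1 it is merely exponential.
# Constants are arbitrary, for illustration only.

def simulate(p, k=1.0, c0=1.0, dt=1e-4, horizon=5.0, cap=1e12):
    c, t = c0, 0.0
    while t < horizon:
        c += k * (c ** p) * dt   # Euler step of dC/dt = k * C^p
        t += dt
        if c > cap:
            return t             # "blow-up" time reached
    return None                  # stayed finite over the whole horizon

print(simulate(p=1.0))   # exponential: no blow-up within the horizon
print(simulate(p=2.0))   # superlinear feedback: blows up near t = 1/(k*c0)
```

The entire debate, in this framing, is over the exponent: whether returns to intelligence in improving intelligence are superlinear, linear, or diminishing, which is precisely the parameter nobody can measure in advance.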

In my podcast interviews, the key move is that “Singularity” isn’t one claim — it’s a bundle. Gennady Stolyarov II — transhumanist writer and philosopher — rejects the cartoon version: “It’s not going to be this sharp delineation between humans and AI that leads to this intelligence explosion.” In his framing, it’s less “humans versus machines” than a long, messy braid of tools, augmentation, and institutions catching up to their own inventions.
