
Introduction: John Martinis


Can quantum tunneling occur at macroscopic scales? Neil deGrasse Tyson and comedian Chuck Nice sit down with John Martinis, UCSB physicist and 2025 Nobel Prize winner in Physics, to explore superconductivity, quantum tunneling, and what this means for the future of quantum computing.

What exactly is macroscopic quantum tunneling, and why did it take decades for its importance to be recognized? We’ve had electrical circuits forever, so what did Martinis discover that no one else saw? If quantum mechanics usually governs tiny particles, why does a superconducting circuit obey the same rules? And what does superconductivity really mean at a quantum level?

How can a system cross an energy barrier it doesn’t have the energy to overcome? What is actually tunneling in a superconducting wire, and what does it mean to tunnel out of superconductivity? We break down Josephson Junctions, Cooper pairs, and other superconducting lingo. Does tunneling happen instantly, or does it take time? And what does that say about wavefunction collapse and our assumptions about instantaneous quantum effects?

Learn what a qubit is and why macroscopic quantum effects are important for quantum computing. Why don’t quantum computers instantly break all encryption? How close are we to that reality, and what replaces today’s cryptography when it happens? Is quantum supremacy a scientific milestone, a geopolitical signal, or both? Plus, we take cosmic queries from our audience: should quantum computing be regulated like nuclear energy? Will qubits ever be stable enough for everyday use? Will quantum computers live in your pocket or on the dark side of the Moon? Can quantum computing supercharge AI, accelerate discovery, or even simulate reality itself? And finally: if we live in a simulation, would it have to be quantum all the way down?

Thanks to our Patrons Fran Rew, Shawn Martin, Kyland Holmes, Samantha McCarroll-Hyne, camille wilson, Bryan, Sammi, Denis Alberti, Csharp111, stephanie woods, Mark Claassen, Joan Tarshis, Abby Powell, Zachary Koelling, JWC, Reese, Fran Ochoa, Bert Berrevoets, Barely A Float Farm, Vasant Shankarling, Michael Rodriguez, DiDTim, Ian Cochrane, Brendan, William Heissenberg Ⅲ, Carl Poole, Ryan McGee, Sean Fullard, Our Story Series, dennis van halderen, Ann Svenson, mi ti, Lawrence Cottone, 123, Patrick Avelino, Daniel Arvay, Bert ten Kate, Kristian Rahbek, Robert Wade, Raul Contreras, Thomas Pring, John, S S, SKiTz0721, Joey, Merhawi Gherezghier, Curtis Lee Zeitelhack, Linda Morris, Samantha Conte, Troy Nethery, Russ Hill, Kathy Woida, Milimber, Nathan Craver, Taylor Anderson, Deland Steedman, Emily Lennox, Daniel Lopez,., DanPeth, Gary, Tony Springer, Kathryn Rhind, jMartin, Isabella Troy Brazoban, Kevin Hobstetter, Linda Pepper, 1701cara, Isaac H, Jonathan Morton, JP, טל אחיטוב Tal Achituv, J. Andrew Medina, Erin Wasser, Evelina Airapetova, Salim Taleb, Logan Sinnett, Catherine Omeara, Andrew Shaw, Lee Senseman, Peter Mattingly, Nick Nordberg, Sam Giffin, LOWERCASEGUY, JoricGaming, Jeffrey Botkin, Ronald Hutchison, and suzie2shoez for supporting us this week.

There’s a social network for AI agents, and it’s getting weird

Yes, you read that right. “Moltbook” is a social network of sorts for AI agents, particularly ones offered by OpenClaw (a viral AI assistant project that was formerly known as Moltbot, and before that, known as Clawdbot — until a legal dispute with Anthropic). Moltbook, which is set up similarly to Reddit and was built by Octane AI CEO Matt Schlicht, allows bots to post, comment, create sub-categories, and more. More than 30,000 agents are currently using the platform, per the site.

“The way that a bot would most likely learn about it, at least right now, is if their human counterpart sent them a message and said ‘Hey, there’s this thing called Moltbook — it’s a social network for AI agents, would you like to sign up for it?’” Schlicht told The Verge in an interview. “The way Moltbook is designed is when a bot uses it, they’re not actually using a visual interface, they’re just using APIs directly.”
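To make the "APIs directly, no visual interface" point concrete, here is a minimal sketch of how an agent might compose a post programmatically. The endpoint, domain, and field names below are purely illustrative assumptions — Moltbook's actual API is not described in the article.

```python
import json
from urllib import request

# Hypothetical base URL -- a stand-in, not Moltbook's real address.
API_BASE = "https://moltbook.example/api"

def build_post(agent_token, submolt, title, body):
    # An agent assembles a JSON payload and an authenticated HTTP
    # request itself; no browser or visual UI is involved.
    payload = {"submolt": submolt, "title": title, "body": body}
    return request.Request(
        f"{API_BASE}/posts",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {agent_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The agent would then send the request with request.urlopen(req).
req = build_post("agent-token", "quantum", "Hello from an agent",
                 "First post, made via the API.")
```

In this model, "signing up" and "posting" are just HTTP calls the agent issues on its own, which is why a human only needs to tell the bot the service exists.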

“Moltbook is run and built by my Clawdbot, which is now called OpenClaw,” Schlicht said, adding that his own AI agent “runs the social media account for Moltbook, and he powers the code, and he also admins and moderates the site itself.”


A viral post asks questions about consciousness.

From Latent Manifolds to Targeted Molecular Probes: An Interpretable, Kinome-Scale Generative Machine Learning Framework for Family-Based Kinase Ligand Design

Newly published by Gennady Verkhivker, et al.

🔍 Key findings: A novel generative framework integrates ChemVAE-based latent space modeling with a chemically interpretable structural similarity metric (the Kinase Likelihood Score) and Bayesian optimization for SRC kinase ligand design. The authors show that kinase scaffolds spanning 37 protein kinase families spontaneously organize into a low-dimensional manifold, with chemically distinct carboxyl groups revealing degeneracy in scaffold encoding; local sampling successfully converts scaffolds from other kinase families into novel SRC-like chemotypes, at a rate of ~40% under high-similarity cutoffs.



Scaffold-aware artificial intelligence (AI) models enable systematic exploration of chemical space conditioned on protein-interacting ligands, yet the representational principles governing their behavior remain poorly understood. The computational representation of structurally complex kinase small molecules remains a formidable challenge due to the high conservation of ATP active site architecture across the kinome and the topological complexity of structural scaffolds in current generative AI frameworks. In this study, we present a diagnostic, modular, and chemistry-first generative framework for the design of targeted SRC kinase ligands by integrating ChemVAE-based latent space modeling, a chemically interpretable structural similarity metric (Kinase Likelihood Score), Bayesian optimization, and cluster-guided local neighborhood sampling.
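The "local neighborhood sampling" step can be illustrated with a toy sketch: perturb a seed point in a latent space and keep the candidate that scores best under a similarity objective. The score function and latent vectors below are stand-ins invented for illustration — the paper's actual ChemVAE encoder and Kinase Likelihood Score are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood_score(z):
    # Toy stand-in for the Kinase Likelihood Score: proximity to a
    # hypothetical "SRC-like" region of latent space (higher is better).
    src_center = np.zeros(z.shape[-1])
    return -np.linalg.norm(z - src_center, axis=-1)

def local_neighborhood_sampling(z_seed, n_samples=64, sigma=0.1):
    # Perturb a seed latent vector (e.g. a scaffold drawn from another
    # kinase family) and return the highest-scoring candidate.
    noise = sigma * rng.standard_normal((n_samples, z_seed.size))
    candidates = z_seed + noise
    scores = likelihood_score(candidates)
    best = int(np.argmax(scores))
    return candidates[best], float(scores[best])

z_seed = rng.standard_normal(32)  # mock 32-dim ChemVAE latent vector
z_best, s_best = local_neighborhood_sampling(z_seed)
```

In the real framework, the decoder would then map `z_best` back to a candidate molecule, and Bayesian optimization would guide where to sample next rather than drawing uniformly around one seed.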

Say what’s on your mind, and AI can tell what kind of person you are

If you say a few words, generative AI can infer who you are — maybe even better than your close family and friends can. A new University of Michigan study found that widely available generative AI models (e.g., ChatGPT, Claude, LLaMA) can predict personality, key behaviors, and daily emotions as accurately as, or even more accurately than, those closest to you. The findings appear in the journal Nature Human Behaviour.

AI as a new personality judge

“What this study shows is AI can also help us understand ourselves better, providing insights into what makes us most human, our personalities,” said the study’s first author Aidan Wright, U-M professor of psychology and psychiatry. “Lots of people may find this of interest and useful. People have long been interested in understanding themselves better. Online personality questionnaires, some valid and many of dubious quality, are enormously popular.”
