
Forecasting Spoken Language Development in Children With Cochlear Implants Using Preimplant Magnetic Resonance Imaging

Deep transfer learning using presurgical brain MRI features predicted post–cochlear implant language improvement in children with 92% accuracy, outperforming traditional ML.


Importance Cochlear implants substantially improve spoken language in children with severe to profound sensorineural hearing loss, yet outcomes remain more variable than in children with healthy hearing. This variability cannot be reliably predicted for individual children using age at implant or residual hearing. Development of an artificial intelligence clinical tool to predict which patients will exhibit poorer improvements in language skills may enable an individualized approach to improve language outcomes.

Objective To compare the accuracy of traditional machine learning (ML) with deep transfer learning (DTL) algorithms to predict post–cochlear implant spoken language development in children with bilateral sensorineural hearing loss using a binary classification model of high vs low language improvers.

Design, Setting, and Participants This multicenter diagnostic study enrolled children from English-, Spanish-, and Cantonese-speaking families across 3 independent clinical centers in the US, Australia, and Hong Kong. A total of 278 children with cochlear implants were enrolled from July 2009 to March 2022 with 1 to 3 years of post–cochlear implant outcomes data. All children underwent pre–cochlear implant 3-dimensional volumetric brain magnetic resonance imaging (MRI). ML and DTL algorithms were trained to predict high vs low language improvers in children with cochlear implants using neuroanatomical features from presurgical brain MRI. Data were analyzed from August 2023 to April 2025.
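The abstract does not name the network or training details. As a generic, hypothetical sketch of the deep-transfer-learning recipe it describes (a pretrained backbone frozen as a feature extractor, plus a new binary head for high vs low improvers), one common PyTorch pattern looks like this; the backbone choice, input shape, and hyperparameters are illustrative assumptions, not the study's:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch of a deep-transfer-learning binary classifier in the
# spirit of the study's description; the actual architecture, MRI
# preprocessing, and training setup are not specified in this summary.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                     # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: high vs low improver

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # images: a batch of preprocessed MRI slices, shape (B, 3, 224, 224)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```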

AlphaFold Changed Science. After 5 Years, It’s Still Evolving

Until AlphaFold’s debut in November 2020, DeepMind had been best known for teaching an artificial intelligence to beat human champions at the ancient game of Go. Then it started playing something more serious, aiming its deep learning algorithms at one of the most difficult problems in modern science: protein folding. The result was AlphaFold2, a system capable of predicting the three-dimensional shape of proteins with atomic accuracy.

Its work culminated in the compilation of a database that now contains over 200 million predicted structures, essentially the entire known protein universe, and is used by nearly 3.5 million researchers in 190 countries around the world. The Nature article published in 2021 describing the algorithm has been cited 40,000 times to date. Last year, AlphaFold 3 arrived, extending the capabilities of artificial intelligence to DNA, RNA, and drugs. That transition is not without challenges—such as “structural hallucinations” in the disordered regions of proteins—but it marks a step toward the future.

To understand what the next five years hold for AlphaFold, WIRED spoke with Pushmeet Kohli, vice president of research at DeepMind and architect of its AI for Science division.

From Big Bang To AI, Unified Dynamics Enables Understanding Of Complex Systems

Experiments reveal that inflation not only smooths the universe but populates it with a specific distribution of initial perturbations, creating a foundation for structure formation. The team measured how quantum fluctuations during inflation are stretched and amplified, transitioning from quantum to classical behavior through a process of decoherence and coarse-graining. This process yields an emergent classical stochastic process, captured by Langevin or Fokker-Planck equations, demonstrating how classical stochastic dynamics can emerge from underlying quantum dynamics. The research highlights that the “initial conditions” for galaxy formation are not arbitrary, but constrained by the Gaussian field generated during inflation, possessing specific correlations. This framework provides a cross-scale narrative, linking microphysics and cosmology to life, brains, culture, and ultimately, artificial intelligence, demonstrating a continuous evolution of dynamics across the universe.
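For reference, the standard textbook form of the emergent Langevin dynamics in stochastic inflation (shown for orientation, not necessarily the exact equations used in this work) writes the coarse-grained inflaton field φ as a stochastic process in the number of e-folds N:

```latex
% Standard stochastic-inflation Langevin equation (Starobinsky picture);
% included for orientation, not taken from the paper itself.
\[
  \frac{d\phi}{dN} = -\frac{V'(\phi)}{3H^2} + \frac{H}{2\pi}\,\xi(N),
  \qquad
  \langle \xi(N)\,\xi(N') \rangle = \delta(N - N'),
\]
% where V is the inflaton potential, H the Hubble rate, and \xi white noise
% sourced by quantum modes crossing the horizon. The equivalent Fokker-Planck
% equation evolves the probability density P(\phi, N) of the coarse-grained field.
```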

Universe’s Evolution, From Cosmos to Cognition

This research presents a unified, cross-scale narrative of the universe’s evolution, framing cosmology, astrophysics, biology, and artificial intelligence as successive regimes of dynamical systems. Rather than viewing these fields as separate, the work demonstrates how each builds upon the previous, connected by phase transitions, symmetry-breaking events, and attractors, ultimately tracing a continuous chain from the Big Bang to contemporary learning systems. The team illustrates how gravitational instability shapes the cosmic web, leading to star and planet formation, and how geochemical cycles establish stable, long-lived attractors, providing the foundation for life’s emergence as self-maintaining reaction networks. The study emphasizes that the universe is not simply evolving in state, but also in its capacity for description and learning, with each transition expanding that capacity.

Lorenz system

The Lorenz system is a three-dimensional dynamical system described by three coupled ordinary differential equations. It was first derived by the meteorologist Edward Lorenz as a simplified model of convection, the chaotic motion of a fluid layer heated from below.

Although the Lorenz system is deterministic, its behavior depends on the choice of parameters. For some parameter ranges the system is predictable, with trajectories settling into fixed points or simple periodic orbits. For others, the system becomes chaotic: the solutions never settle down but instead trace out the butterfly-shaped Lorenz attractor. In this regime, small differences in initial conditions grow exponentially, the sensitivity popularly known as the butterfly effect, making long-term prediction practically impossible.
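A minimal numerical sketch of that sensitivity, using the classic parameter values σ = 10, ρ = 28, β = 8/3 (the integrator, step size, and initial offset below are illustrative choices):

```python
import numpy as np

# Lorenz system: dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=5000):
    # Fourth-order Runge-Kutta integration.
    traj = [state]
    for _ in range(steps):
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return np.array(traj)

# Two trajectories differing by 1e-8 in x diverge by orders of magnitude
# within a few dozen time units: the butterfly effect in action.
a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0 + 1e-8, 1.0, 1.0]))
print(np.abs(a - b).max(axis=1)[::1000])  # separation sampled along the run
```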

Ignorance Is the Greatest Evil: Why Certainty Does More Harm Than Malice

The most dangerous people are not the malicious ones. They’re the ones who are certain they’re right.

Most of the harm in history has been done by people who believed they knew what was right — and acted on that belief without recognizing the limits of their own knowledge.

Socrates understood this long ago: what is most dangerous is not *not knowing*, but *not knowing that we don’t know* — especially when paired with power.

Read on to find out why:

* certainty often does more harm than malice
* humility isn’t weakness, it’s discipline
* action doesn’t require certainty, only responsibility
* and why, in an age of systems, algorithms, and institutions, ignorance has quietly become structural.

This isn’t an argument for paralysis or relativism.

It’s an argument for acting without pretending we are infallible.

AI learns to build simple equations for complex systems

A research team at Duke University has developed a new AI framework that can uncover simple, understandable rules that govern some of the most complex dynamics found in nature and technology.

The AI system works much as history’s great “dynamicists”—those who study systems that change over time—did when they discovered the laws of physics that govern such systems’ behaviors. Just as Newton, the first dynamicist, derived the equations that connect force and motion, the AI takes data about how complex systems evolve over time and generates equations that accurately describe them.

The AI, however, can go even further than human minds, untangling complicated nonlinear systems with hundreds, if not thousands, of variables into simpler rules with fewer dimensions.
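The article does not detail the Duke framework’s internals. For a flavor of how equation discovery from time-series data can work at all, here is a minimal sparse-regression sketch in the spirit of SINDy-style methods (a separate, established approach); the logistic-growth data, candidate library, and threshold are illustrative:

```python
import numpy as np

# Toy equation discovery: regress measured derivatives onto a library of
# candidate terms, then prune small coefficients to leave a sparse model.
# This illustrates the general technique, not the Duke framework itself.

t = np.linspace(0, 10, 2000)
x = 1.0 / (1.0 + 9.0 * np.exp(-t))         # logistic growth data: dx/dt = x - x^2
dxdt = np.gradient(x, t)                   # derivatives estimated from the data

# Library of candidate right-hand-side terms: [1, x, x^2, x^3]
library = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Least-squares fit, then hard-threshold tiny coefficients to enforce sparsity.
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.05] = 0.0
print(coef)  # expect roughly [0, 1, -1, 0], i.e. dx/dt ≈ x - x^2
```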

Making lighter work of calculating fluid and heat flow

Scientists from Tokyo Metropolitan University have re-engineered the popular Lattice-Boltzmann Method (LBM) for simulating the flow of fluids and heat, making it lighter and more stable than the state-of-the-art.

By formulating the algorithm with a few extra inputs, they successfully got around the need to store certain data, some of which span the millions of points over which a simulation is run. Their findings might overcome a key bottleneck in LBM: memory usage.
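To make the memory bottleneck concrete: a textbook LBM stores one distribution value per lattice velocity at every grid point, and that storage is what balloons in large 3D runs. Below is a minimal, generic 1D diffusion LBM on a D1Q2 lattice illustrating that layout; it is a standard scheme, not the reformulated method from this work:

```python
import numpy as np

# Generic 1D lattice-Boltzmann sketch for pure diffusion (D1Q2 lattice).
# Note the memory pattern: one distribution per lattice velocity per point.

n = 200
tau = 0.8                                  # relaxation time; for D1Q2 in lattice
                                           # units, diffusivity D = tau - 0.5
rho = np.zeros(n); rho[n // 2] = 1.0       # initial heat/concentration spike
f = np.stack([rho / 2, rho / 2])           # f[0] moves right, f[1] moves left

for _ in range(500):
    rho = f.sum(axis=0)                    # macroscopic field from distributions
    feq = np.stack([rho / 2, rho / 2])     # equilibrium distributions
    f += (feq - f) / tau                   # BGK collision step
    f[0] = np.roll(f[0], 1)                # streaming along lattice velocities
    f[1] = np.roll(f[1], -1)               # (periodic boundaries via roll)

print(rho.max(), rho.sum())                # spike spreads; total is conserved
```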

The work is published in the journal Physics of Fluids.

Cracking the mystery of heat flow in few-atoms thin materials

For much of my career, I have been fascinated by the ways in which materials behave when we reduce their dimensions to the nanoscale. Over and over, I’ve learned that when we shrink a material down to just a few nanometers in thickness, the familiar textbook rules of physics begin to bend, stretch, or sometimes break entirely. Heat transport is one of the areas where this becomes especially intriguing, because heat is carried by phonons—quantized vibrations of the atomic lattice—and phonons are exquisitely sensitive to spatial confinement.

A few years ago, something puzzling emerged in the literature. Molecular dynamics simulations showed that ultrathin silicon films exhibit a distinct minimum in their thermal conductivity at around one to two nanometers thickness, which corresponds to just a few atomic layers. Even more surprisingly, the thermal conductivity starts to increase again if the material is made even thinner, approaching extreme confinement and the 2D limit.

This runs counter to what every traditional model would predict. According to classical theories such as the Boltzmann transport equation or the Fuchs–Sondheimer boundary-scattering framework, reducing thickness should monotonically suppress thermal conductivity because there is simply less room for phonons to travel freely and carry heat around. Yet the simulations done by the team of Alan McGaughey at Carnegie Mellon University in Pittsburgh insisted otherwise, and no established theory could explain why.
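Schematically, the classical expectation follows from a simplified boundary-scattering argument (a crude stand-in for the full Fuchs–Sondheimer solution): adding a boundary-limited mean free path of order the film thickness t via Matthiessen’s rule,

```latex
% Simplified boundary-scattering estimate (illustrative only, not the full
% Fuchs-Sondheimer solution):
\[
  \frac{1}{\Lambda_{\mathrm{eff}}(t)} \approx
  \frac{1}{\Lambda_{\mathrm{bulk}}} + \frac{1}{t},
  \qquad
  \kappa(t) \approx \kappa_{\mathrm{bulk}}\,
  \frac{\Lambda_{\mathrm{eff}}(t)}{\Lambda_{\mathrm{bulk}}},
\]
% so kappa(t) can only decrease as t shrinks, a monotonic suppression that the
% few-atomic-layer simulations contradict.
```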

Why we can’t stop clicking on rage bait

Stanford research reveals creators feel exhausted, depressed, and financially unstable due to constant pressure to post, algorithm unpredictability, and frequent “demonetization.” While rage bait may work short-term, it’s unsustainable. Creators eventually seek other revenue streams, only to be replaced by new outrage merchants.

Bottom line: Rage bait is a symptom of platforms’ engagement-based economic incentives—not an isolated phenomenon, but a “highly visible result” of the ecosystem social media companies have created.


“Rage bait” is Oxford’s Word of the Year. What makes anger so appealing?

String Theory Inspires a Brilliant, Baffling New Math Proof

When the team posted their proof in August, many mathematicians were excited. It was the biggest advance in the classification project in decades, and hinted at a new way to tackle the classification of polynomial equations well beyond four-folds.

But other mathematicians weren’t so sure. Six years had passed since the lecture in Moscow. Had Kontsevich finally made good on his promise, or were there still details to fill in?

And how could they assuage their doubts, when the proof’s techniques were so completely foreign — the stuff of string theory, not polynomial classification? “They say, ‘This is black magic, what is this machinery?’” Kontsevich said.
