
An ultrastructural map of a spinal sensorimotor circuit reveals the potential of astroglia modulation

Using cell reconstructions and synapse mapping in zebrafish, Koh and Avalos Arceo et al. present a map of a vertebrate local spinal sensorimotor circuit, showing how neurons and glia are structurally positioned within it. This resource provides insight into how glia and synaptic thresholding could modulate information flow through complex neural networks.

Meet the soft humanoid robot that can grow, shrink, fly and walk on water

Humanoid robots look impressive and have enormous potential to change our daily lives, but they still have a reputation for being clunky. They’re also heavy and stiff, and if they fall, they can easily break and injure people around them.

But that could be about to change. Researchers at the Southern University of Science and Technology (SUST) in Shenzhen have unveiled a soft humanoid robot that can change its size, squeeze through spaces, and even walk on water. The key to this outstanding flexibility is a system the team developed called GrowHR. The research, published in Science Advances, describes how the robot was inspired by the way human bones develop.

Conscious AI: Conflations and Comparisons

Consciousness, like intelligence, is multi-faceted. This makes the future of AI more unpredictable and potentially even more hazardous.

When considering AIs that might be conscious, the first great conflation is to fail to distinguish between intelligence and consciousness. The suggestion is that something which is as generally intelligent as a human is bound to be as conscious as a human. General intelligence and consciousness are both intrinsic features of an advanced mind, right?

Well, no. Of course not. There’s no fundamental necessity for these two characteristics to be tightly bound together. A chatbot can provide a human companion with sparkling conversation without having its own inner sparkle of feeling. Like an actor, it can mimic expressions of emotional highs and lows whilst lacking any interior passion. It can talk persuasively about having an inner life without there being any inside inside.

What babies can teach AI

Researchers at Google DeepMind tried to give an AI system that same sense of “intuitive physics” by training a model that learns how things move by attending to objects in videos rather than individual pixels. They trained the model on hundreds of thousands of videos to learn how objects behave. If babies are surprised by something like a ball suddenly flying out of the window, the theory goes, it is because the object is moving in a way that violates the baby’s understanding of physics. The DeepMind researchers managed to get their AI system to show “surprise”, too, when an object moved differently from the way it had learned that objects move.
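The core idea, measuring “surprise” as the gap between a predicted and an observed object motion, can be sketched in a few lines. This is not DeepMind’s actual model; it is a minimal illustration that assumes a toy constant-velocity predictor over 1-D object positions, with function names (`predict_next`, `surprise`) invented for the example.

```python
# Toy violation-of-expectation sketch: "surprise" is the prediction
# error of a constant-velocity model of object motion.

def predict_next(prev, curr):
    """Constant-velocity prediction: next = curr + (curr - prev)."""
    return curr + (curr - prev)

def surprise(trajectory):
    """Per-step prediction error along a 1-D object trajectory."""
    errors = []
    for i in range(2, len(trajectory)):
        expected = predict_next(trajectory[i - 2], trajectory[i - 1])
        errors.append(abs(trajectory[i] - expected))
    return errors

# A smoothly moving object vs. one that "teleports" mid-flight.
smooth = [0, 1, 2, 3, 4, 5]
implausible = [0, 1, 2, 3, 9, 10]

print(surprise(smooth))       # -> [0, 0, 0, 0]   (no surprise)
print(surprise(implausible))  # -> [0, 0, 5, 5]   (spike at the jump)
```

A large error spike plays the role of the baby’s “surprise”: the observation violated the model’s learned expectation of how objects move.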

Yann LeCun, a Turing Prize winner and Meta’s chief AI scientist, has argued that teaching AI systems to observe like children might be the way forward to more intelligent systems. He says humans have a simulation of the world, or a “world model,” in our brains, allowing us to know intuitively that the world is three-dimensional and that objects don’t actually disappear when they go out of view. It lets us predict where a bouncing ball or a speeding bike will be in a few seconds’ time. He’s busy building entirely new architectures for AI that take inspiration from how humans learn. We covered his big bet for the future of AI here.

The AI systems of today excel at narrow tasks, such as playing chess or generating text that sounds like something written by a human. But compared with the human brain—the most powerful machine we know of—these systems are brittle. They lack the sort of common sense that would allow them to operate seamlessly in a messy world, do more sophisticated reasoning, and be more helpful to humans. Studying how babies learn could help us unlock those abilities.

AI House Davos

Embodied AI refers to AI integrated into physical systems that can perceive, reason, and act in the real world through sensors and actuators, like robots and autonomous vehicles. This fireside conversation explores how advances in AI like vision–language–action models are redefining what machines can understand and do, especially as we move from navigation to mobile manipulation. The speakers discuss how quickly today’s rapid progress in AI might transfer to robotics and embodied systems, and how soon we can expect to see these technologies making a tangible impact on our daily lives.

Speakers:
Yann LeCun (Founder and Executive Chairman, Advanced Machine Intelligence)
Marc Pollefeys (Professor, ETH Zürich; Faculty, ETH AI Center)

© AI House Davos 2026
Founders & Strategic Partners:
ETH AI Center, Merantix, G42, Hewlett Packard Enterprise, EPFL AI Center, The University of Tokyo.

Presenting Partners:
KPMG.

Human-Centric Intelligence: A New Paradigm For AI Decision Making

In my latest Forbes article, I explore one of the most critical questions facing leaders today:

How do we use AI to augment human intelligence rather than diminish it?

AI’s true power isn’t about automation alone—it’s about amplifying human judgment, creativity, and decision-making.

#AI #HumanCentricAI #artificialintelligence #tech #AugmentedIntelligence #Forbes #Leadership #Cybersecurity #EmergingTechnology #DigitalTransformation


Human-centric AI is the new frontier; it is not AI against human intelligence, but AI with human intelligence.

AI in Charge: Large-Scale Experimental Evidence on Electric Vehicle Charging Demand

Asynchronous firing and off states in working memory maintenance


Mozumder, Wang et al. use high-density recordings in macaque prefrontal and parietal cortex to show that working memory is sustained by asynchronous spiking activity without prolonged silent periods. Off states are characterized by reduced information decoding and are synchronized between areas. The balance between asynchronous firing and off states determines memory maintenance.

Elon Musk Holds Surprise Talk At The World Economic Forum In Davos

The Musk Blueprint: Navigating the Supersonic Tsunami to Hyperabundance. When Exponential Curves Multiply: Understanding the Triple Acceleration.

On January 22, 2026, Elon Musk sat down with BlackRock CEO Larry Fink at the World Economic Forum in Davos and delivered what may be the most important articulation of humanity’s near-term trajectory since the invention of the internet.

Not because Musk said anything fundamentally new—his companies have been demonstrating this reality for years—but because he connected the dots in a way that makes the path to hyperabundance undeniable.

[Watch Elon Musk’s full WEF interview]

This is not visionary speculation.

This is engineering analysis from someone building the physical infrastructure of abundance in real-time.
