
First 3D reconstruction of the face of ‘Little Foot’ completed

Identified as the most complete Australopithecus fossil discovered to date, “Little Foot” was buried in sediments whose movement and weight caused fractures and deformations, making analysis of its skull—and more particularly its face—difficult. This anatomical region, which is essential for understanding how our ancestors and relatives adapted to their environment, has now been virtually reconstructed for the first time by a CNRS researcher and her British and South African colleagues. The results are published in Comptes Rendus Palevol.

A comparative analysis of this reconstruction with several extant great apes and three other Australopithecus specimens reveals that the face of “Little Foot” is closer in terms of size and morphology to Australopithecus specimens from eastern Africa than to those from southern Africa. This finding raises questions about the relationships between these different populations and about the chronology of the evolutionary processes that reshaped the faces of these hominins, particularly the orbital region, which appears to have been subject to strong selective pressures.

The skull was first transported to the Diamond Light Source synchrotron (United Kingdom), where it was carefully digitized. The research team then virtually isolated the bone fragments using semi-automated methods and supercomputers. Their realignment resulted in a 3D reconstruction with a resolution of 21 microns. More than five years were required to complete this reconstruction.

How does a developing brain self-organize? Cell lineage may guide neuron placement

Your brain begins as a single cell. When all is said and done, it will house an incredibly complex and powerful network of some 170 billion cells. How does it organize itself along the way? Cold Spring Harbor Laboratory neuroscientists have come up with a surprisingly simple answer that could have far-reaching implications for biology and artificial intelligence.

Stan Kerstjens, a postdoc in Professor Anthony Zador’s lab, frames the question in terms of positional information. “The only thing a cell ‘sees’ is itself and its neighbors,” he explains. “But its fate depends on where it sits. A cell in the wrong place becomes the wrong thing, and the brain doesn’t develop right. So, every cell must solve two questions: Where am I? And who do I need to become?”

In a study published in Neuron, Kerstjens, Zador, and colleagues at Harvard University and ETH Zürich put forward a new theory for how the brain organizes itself during development.

Foundation AI model uses MRI data to predict multiple brain disorders

Artificial intelligence (AI) systems are computational models that can learn to identify patterns in data, make accurate predictions or generate content (e.g., texts, images, videos or sound recordings). These models can reliably complete a wide range of tasks and are now also used to support research in many different fields.

Over the past few decades, some AI models have proved promising for the early diagnosis and study of specific diseases or neuropsychiatric conditions. For instance, by analyzing large amounts of brain scans collected using a noninvasive technique known as magnetic resonance imaging (MRI), AI could uncover patterns associated with tumors, strokes and neurodegenerative diseases, which could help to diagnose these conditions.

Researchers at Mass General Brigham, Harvard Medical School and other institutes recently developed Brain Imaging Adaptive Core (BrainIAC), a large AI system pre-trained on a vast pool of MRI data that could be adapted to tackle different tasks. This foundation model, presented in a paper published in Nature Neuroscience, was found to outperform many models that were trained to complete specific medical or neuroscience-related tasks.

How close are we to true AI?

Understanding consciousness is the ultimate prize for creators of artificial intelligence. Moreover, a theory of consciousness will also shape how we view ourselves and our place in the world. Although AI systems can mimic human reasoning, they can only regurgitate their input data. They are sophisticated pattern recognizers and content remixers, but they cannot step beyond the limitations of that input. Understanding consciousness would enable us to transition from the synthetic to true synthesis, unlocking unlimited potential.

Computer scientists hope that recurrent computation will somehow ‘awaken’ code to consciousness. Yet the spectacular achievements of large language and diffusion models have not moved beyond imitation. We train models on the outputs of consciousness—our language, our art, our logic—while remaining entirely ignorant of the process that produces them. An AI can write a gut-wrenching paragraph about sadness by replicating patterns, vocabulary, and syntax. But it knows nothing of grief. It can create a shadow play, yet knows nothing of the object that casts it. This imitation, while impressive, should not be mistaken for a proper understanding of consciousness. No amount of coloring can turn the shadow into a solid object.

To reverse-engineer the mind, we need a blueprint. What AI urgently needs in order to advance is a physicalist theory of consciousness, an account of the architecture of subjective experience itself. The Fermionic Mind Hypothesis (FMH) is such a physicalist framework. It posits that selfhood is structurally and functionally analogous to a fermion in physics. The self’s persistent core operates as an energy-regulating system, maintaining mental equilibrium through continuous thermodynamic cycles. Within this cycle, cognitive processes such as decision-making are wave-particle transitions that capture the inherent nondeterminism and contextual collapse of probabilistic mental states.

Roman Yampolskiy — AI: Unexplainable, Unpredictable, Uncontrollable

In this presentation, Dr. Roman V. Yampolskiy provides a rigorous examination of the fundamental limitations of Artificial Intelligence, arguing that as systems approach and surpass human-level intelligence, they become inherently unexplainable, unpredictable, and uncontrollable. He illustrates how the black box nature of deep learning prevents full audits of decision-making, while concepts like computational irreducibility suggest we cannot forecast the actions of a smarter agent without running it – often until it is too late for safety. He asserts that there is currently no evidence or mathematical proof to guarantee that a superintelligent system can be safely contained or aligned with human values.
Dr. Yampolskiy further bridges theoretical computer science with safety engineering by applying impossibility results, such as the Halting Problem and Rice’s Theorem, to demonstrate that certain safety guarantees for Artificial General Intelligence (AGI) are mathematically unreachable. These technical impediments lead to a sobering discussion on existential risk, where the inability to verify or monitor advanced systems results in an alarmingly high probability of catastrophic outcomes. By analysing why advanced AI defies traditional engineering safety standards, he makes the case that current trajectories may lead to irreversible consequences for humanity.
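The diagonalization argument behind these impossibility results can be made concrete. The sketch below is illustrative, not code from the talk: it assumes a hypothetical, claimed-perfect verifier `is_safe(program) -> bool` and constructs an adversarial program that invalidates any verdict the verifier returns, mirroring Turing's proof of the Halting Problem (Rice's Theorem generalizes the same pattern to any nontrivial behavioral property).

```python
# Illustrative sketch (hypothetical verifier, not from the talk):
# the Halting Problem diagonalization recast as a safety question.

def unsafe_action():
    raise RuntimeError("unsafe behaviour")

def make_adversary(is_safe):
    """Return a program that does the opposite of the verifier's prediction."""
    def adversary():
        if is_safe(adversary):
            unsafe_action()   # predicted safe -> misbehave
        # predicted unsafe -> do nothing, i.e. behave safely
    return adversary

# Demo against a verifier that declares every program safe:
adv = make_adversary(lambda p: True)
try:
    adv()
    verdict_correct = True
except RuntimeError:
    verdict_correct = False
print(verdict_correct)  # False: the "safe" verdict fails on this program
```

Swapping in a verifier that declares everything unsafe fails symmetrically: the adversary then runs without incident, so no total, always-correct `is_safe` can exist.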
To conclude, the talk shifts toward potential pathways for mitigation, emphasising the urgent need to prioritise specialised, narrow AI over the pursuit of general superintelligence. Dr. Yampolskiy argues that while narrow AI can solve global challenges within controllable parameters, the pursuit of AGI represents an existential gamble. He calls for a shift in the research community from a “move fast and break things” mentality to a mathematically grounded approach, urging that we must prove a problem is solvable before investing billions into its deployment.

Whole Brain Emulation & Substrate-Independence: New Beginnings For Old Minds

When a human mind can be emulated — memories, habits, and the weather of thought running on engineered hardware — “uploading” stops being an ending and becomes a beginning. Substrate-independent minds can be backed up, restored, paused without time passing, and deployed into new bodies: telepresence robots, swarms, or chassis built for heat and radiation. Distance turns into bandwidth as consciousness moves as data, bound only by light. Under the spectacle is a harder, technical question: what must be captured, at what scale, for an emulation to be someone — and what rights and power follow once persons are portable infrastructure?

Mind uploading has usually been told as a one-way escape hatch: a last-minute transfer from a failing body into a machine, the technological equivalent of outrunning a deadline. That framing makes the idea feel like a hospice fantasy — dramatic, personal, terminal. But it leaves out the second verb that changes everything. If a mind can be reproduced as a running process, it isn’t just uploaded once; it can be instantiated again, moved, paused, restored, and redeployed. Uploading is capture. Downloading is what makes a mind into something mobile.

The phrase “substrate-independent mind” tries to name that mobility without the melodrama. A substrate is the medium a mind runs on: biological tissue, silicon, specialized hardware, something not yet invented. Independence doesn’t mean the mind floats free of physics; it means the same meaningful mental functions might be implementable on different platforms, like a program that can run on different computers. The promise is not that neurons are irrelevant, but that the mind might be the pattern of information processing the neurons carry out — the thing they do, not the stuff they’re made of.

The language of the unconscious

“The unconscious is structured like a language,” argued psychoanalyst Jacques Lacan.

And now, with the rise of AI-generated video and audio, Lacan’s thinking has taken an unexpected twist.

Might AI therefore capture something key about the human unconscious?

Join leading Lacanian philosopher and collaborator of Slavoj Žižek, Alenka Zupančič, as she argues that AI shows the unconscious is structured like a large language model.

REPLACED BY AI! | Seedance 2 + Kling 3.0 Short Film

The increasing use of Artificial Intelligence (AI) in the workplace is leading to job displacement and raising concerns among employees about the security of their positions.

Key Insights

Career Obsolescence Through AI

🔄 AI engineer David becomes obsolete after 7 years and 1,000 lines of code building the AI division, receiving a “sweet pink slip” as the CEO eliminates his role and takes his company car while AI assumes control of the entire division.

Existential Work Motivation

💭 David questions whether his 7-year dedication was driven by glory, stock options, passion, art, or simply maintaining purpose (“beating heart”), confronting the irony of being replaced by the AI system he built.

Corporate Restructuring Mechanics
