
Methane’s Elaborate Phases and Where to Find Them

A systematic exploration of the phase diagram of methane resolves inconsistencies of earlier studies, with potential ramifications for our understanding of planetary interiors.

As a gas, methane is very simple. But as a liquid and as a solid, it is perplexingly complex. Ambiguity has long plagued observations and measurements of its structure at different pressure–temperature combinations. Yet understanding methane’s phase diagram is vital for predicting its behavior deep within our own planet and others. In a tour de force contribution, Mengnan Wang at the University of Edinburgh in the UK and her colleagues have now charted the turbulent seas of the methane phase diagram [1]. By comprehensively mapping its phases and melting curve, they have resolved a legion of discrepancies among earlier studies.

Methane—one of the simplest of all molecules—is sometimes the subject of flatulence jokes (of which it is odorlessly innocent) but is also a powerful driver of climate change on Earth (of which it is very guilty [2]). The extraction of gaseous methane from Earth drives multibillion-dollar industries, which use the molecule both as a fuel and as a source of hydrogen. Out in the Solar System, methane in planetary atmospheres absorbs red light, which makes Uranus and Neptune shine blue, while icy methane damaged by radiation paints dwarf planets red.

Chaos shapes how meandering rivers change over time, research shows

Rivers are rarely the calm, orderly streams we imagine on maps. Over time, their winding paths—called meanders—shift, bend, and occasionally snap off in sudden “cutoff” events that shorten loops and reshape the landscape. While scientists have long suspected that such cutoffs inject a dose of unpredictability into river evolution, a new study published in Communications Earth & Environment demonstrates that these abrupt events are, by themselves, enough to produce chaos in river channels.

Harvard Ph.D. candidate Brayden Noh and NYU Tandon Assistant Professor Omar Wani employed a widely used computational model to explore how meandering rivers evolve over time. This model isolates the essential dynamics: bends migrate laterally in proportion to curvature, and loops are occasionally severed through cutoffs. Other real-world complexities—like sediment transport, bank composition, and vegetation—are treated as secondary, allowing the researchers to focus squarely on the geometry-driven behavior of rivers.

To test the role of cutoffs, the team simulated rivers starting from nearly identical initial shapes, then introduced infinitesimally small perturbations to each copy. They tracked how the channels diverged over time by mapping their evolving shapes onto a fixed grid and measuring differences cell by cell. In a striking counterfactual experiment, when cutoffs were disabled, the channels stayed nearly identical over long time horizons. When cutoffs were allowed, even tiny initial differences grew exponentially, a hallmark of deterministic chaos.
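The grid-based comparison described above can be sketched in a few lines. The following toy illustration is not the authors' code; the function names, grid size, and sinusoidal test channels are assumptions made purely for demonstration. It rasterizes two channel centerlines onto the same fixed grid and reports the fraction of cells where their occupancy differs:

```python
import numpy as np

def occupancy_grid(path, n=64, lo=-2.0, hi=2.0):
    """Rasterize a 2D channel centerline (N x 2 array of x, y points)
    onto a fixed n x n occupancy grid: 1 where the channel passes."""
    grid = np.zeros((n, n))
    ij = np.clip(((path - lo) / (hi - lo) * n).astype(int), 0, n - 1)
    grid[ij[:, 1], ij[:, 0]] = 1.0
    return grid

def grid_divergence(path_a, path_b, n=64):
    """Cell-by-cell difference between two channels on the same grid;
    0 means identical occupancy, larger values mean more divergence."""
    ga = occupancy_grid(path_a, n)
    gb = occupancy_grid(path_b, n)
    return np.abs(ga - gb).mean()

# Two nearly identical sinuous channels, one nudged by a tiny perturbation.
x = np.linspace(-2, 2, 400)
base = np.stack([x, np.sin(2 * x)], axis=1)
nudged = base + np.array([0.0, 0.05])
print(grid_divergence(base, base))    # identical channels: 0.0
print(grid_divergence(base, nudged))  # small but nonzero
```

Tracking this divergence score over simulated time, and checking whether it grows exponentially, is the standard way to diagnose deterministic chaos in such experiments.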

AI search robot uses 3D maps and internet knowledge to find lost items

A robot that can locate lost items on command, the latest development at the Technical University of Munich (TUM), combines knowledge from the internet with a spatial map of its surroundings to efficiently find the objects being sought. The new robot from Prof. Angela Schoellig’s TUM Learning Systems and Robotics Lab looks like a broomstick on wheels with a camera mounted at the top. It is one of the first robots that not only integrates image understanding but also applies it to a clearly defined task.

To find a pair of glasses misplaced in the kitchen, for example, the robot has to look around and build a three-dimensional image of the room. The camera initially provides two-dimensional images, but its pixels also carry depth information. From these, the robot builds a spatial map of the environment that is accurate to the centimeter and is constantly updated. A laptop additionally provides the robot with information about which objects are visible in the image and what significance they have for humans.
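The step from depth pixels to a 3D map usually relies on standard pinhole back-projection. The TUM system's exact pipeline is not described here, so the sketch below is a generic illustration with made-up camera intrinsics, not the robot's implementation: each pixel (u, v) with depth Z is lifted to a 3D point in the camera frame.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth image (meters) to a 3D point cloud with the pinhole
    model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3), camera frame

# A flat wall 2 m away, seen by a toy 4 x 4 camera centered at pixel (2, 2).
cloud = backproject(np.full((4, 4), 2.0), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(cloud[2, 2])  # the optical-center pixel maps to [0. 0. 2.]
```

Accumulating such point clouds from successive camera poses, and fusing them, is what yields the continuously updated centimeter-accurate map the article describes.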

“We have taught the robot to understand its surroundings,” says Prof. Schoellig. The head of the Robotics Lab at the TUM Chair of Safety, Performance and Reliability for Learning Systems aims to develop robots that can navigate any environment independently. Humanoid robots working in factories or robots in care settings in private homes require this newly developed basic understanding, which, as Schoellig explains, “is important for all robots that move in spaces that are constantly changing.” A paper introducing the technology is published in the journal IEEE Robotics and Automation Letters.

A foundation model of vision, audition, and language for in-silico neuroscience

‘The present results strengthen the possibility of a paradigm shift in neuroscience… moving from the fragmented mapping of isolated cognitive tasks toward the use of unified, predictive foundation models of brain and cognitive functions. By aligning the representations of AI systems to those of the human brain, we demonstrate that a single architecture can integrate a vast range of fMRI responses across hundreds of individuals, extending the framework that led the 2025 Algonauts competition. The observed log-linear scaling of encoding accuracy, mirroring power laws in both artificial intelligence and neuroscience, suggests that the ceiling for predicting human brain activity is yet to be reached.’


Cognitive neuroscience is fragmented into specialized models, each tailored to specific experimental paradigms, which has prevented a unified model of cognition in the human brain. Here, we introduce TRIBE v2, a tri-modal (video, audio and language) foundation model capable of predicting human brain activity in a variety of naturalistic and experimental conditions. Leveraging a unified dataset of over 1,000 hours of fMRI across 720 subjects, we demonstrate that our model accurately predicts high-resolution brain responses for novel stimuli, tasks and subjects, surpassing traditional linear encoding models with several-fold improvements in accuracy. Critically, TRIBE v2 enables in silico experimentation: tested on seminal visual and neuro-linguistic paradigms, it recovers a variety of results established by decades of empirical research.
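For context, the "traditional linear encoding models" used as a baseline in this literature typically amount to ridge regression from stimulus features to voxel time courses, scored by per-voxel correlation. The sketch below is a generic version of that baseline under assumed dimensions (200 time points, 5 features, 3 voxels), not the paper's code:

```python
import numpy as np

def fit_linear_encoder(features, bold, alpha=1.0):
    """Ridge regression from stimulus features (T x F) to voxel time
    courses (T x V): W = (X'X + alpha*I)^-1 X'Y."""
    f = features.shape[1]
    return np.linalg.solve(features.T @ features + alpha * np.eye(f),
                           features.T @ bold)

def encoding_accuracy(w, features, bold):
    """Per-voxel Pearson r between predicted and measured responses,
    the standard score for encoding models."""
    pred = features @ w
    pz = (pred - pred.mean(0)) / pred.std(0)
    bz = (bold - bold.mean(0)) / bold.std(0)
    return (pz * bz).mean(0)

# Synthetic demo: 200 time points, 5 features, 3 "voxels" with low noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = X @ rng.normal(size=(5, 3)) + 0.1 * rng.normal(size=(200, 3))
w = fit_linear_encoder(X, Y, alpha=1.0)
print(encoding_accuracy(w, X, Y))  # near 1.0 for these low-noise voxels
```

A foundation model like TRIBE v2 replaces the fixed linear map with a learned nonlinear architecture shared across subjects, which is where the reported several-fold accuracy gains come from.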

Building a National Quantum Strategy

Andrea Damascelli has always been fascinated by light. He uses it to probe materials on an atomic level, and his observations have contributed to the condensed-matter community’s understanding of high-temperature superconductors and quantum materials. His research group at the University of British Columbia (UBC) uses time-, spin-, and angle-resolved photoemission spectroscopy, an intricate technique that maps the energy and velocity of electrons as they propagate through materials.

In 2015, Damascelli spearheaded efforts that brought one of the first Canada First Research Excellence Fund (CFREF) grants to UBC’s Quantum Matter Institute. As the institute’s scientific director, he found himself at the helm of a full-blown research center—hiring faculty, expanding staff, and upgrading facilities. A few months later, he received a special request from Canada’s National Research Council: join leaders from across Canada’s quantum ecosystem to advise on a strategy for growing the country’s quantum community as a whole.

Physics Magazine chatted with Damascelli as he looked back on the beginning of Canada’s first National Quantum Strategy (NQS) and looked forward to developing a self-sustaining quantum research and training powerhouse.

Technical Advance alert 🙌

https://doi.org/10.1172/jci.insight.

In this Research article, Benjamin D. Philpot & team establish a multimodal dual-reporter mouse that accelerates Angelman syndrome therapeutic development through scalable cell-based screening, high-resolution whole-brain mapping, non-invasive live imaging, and sorting neurons with unsilenced paternal Ube3a.


Affiliations include the Animal Models Core, the Department of Genetics, and the Carolina Institute for Developmental Disabilities, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA.

The E3-ome gene-centric compendium reveals the human E3 ligase landscape

Now online! The E3-ome defines the human repertoire of ubiquitin E3 ligases, creating a unified resource that maps their diversity across the ubiquitin and ubiquitin-like systems. By consolidating fragmented knowledge, this framework provides a foundation for studying ubiquitin signaling and accelerating discovery.

Wind-powered robot could enable long-term exploration of hostile environments

Researchers at Cranfield University have created WANDER-bot, a low-cost, 3D-printed robot that is powered by wind energy. Designed to spend long durations in hostile, windy environments such as certain deserts, polar regions or even other planets, WANDER-bot doesn’t need a battery to power movement, enabling longer operations without having to pause and recharge.

Movement accounts for around 20% of battery use in most robots, so running on natural energy makes WANDER-bot an efficient solution for long-term exploration or mapping of unknown terrains. As a result, any electronic elements added to future versions for data collection or transmission purposes could have their own smaller, lighter power source. Using natural energy also counters the issue of performance degradation over time in traditional power sources, such as solar cells and radioisotope thermoelectric generators.

Designed by Dr. Saurabh Upadhyay and Sam Kurian, Research Associate in Space Engineering, the robot uses parts that are entirely 3D printed, with the design deliberately simple to allow for quick repair and replacement. This means that, in theory, you could print and construct WANDER-bot anywhere and make replacement parts in situ as needed, removing the need for time-consuming and costly resupply missions.

Synaptic connectivity alone can reveal neuron types

Recent technological advances facilitate the reconstruction of complete brain connectomes in small organisms and partial connectomes in mammals: the mapping of networks of neurons and their synaptic connections. Accurate cell typing of these connectomes aids in interpreting circuit functions and comparing brain organization across species.

Traditionally, cell typing relied on manual morphological classification by experts—a slow process that required detailed anatomical information. However, morphology can be deceptive or inadequate in many brain regions, especially in circuits with repeated cell types, where neurons can share very similar morphology despite differing in connectivity.
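The core idea of connectivity-based typing can be illustrated compactly: represent each neuron by its row in the adjacency matrix and compare those connectivity vectors, since neurons of the same type share similar connectivity profiles even when their shapes differ. The toy connectome below is invented purely for illustration and does not come from any real dataset:

```python
import numpy as np

def connectivity_similarity(adj):
    """Cosine similarity between neurons' outgoing connectivity vectors
    (rows of the adjacency matrix)."""
    norms = np.linalg.norm(adj, axis=1, keepdims=True)
    unit = adj / np.where(norms == 0, 1.0, norms)  # guard zero rows
    return unit @ unit.T

# Toy connectome: neurons 0-2 all target cells 0-4; neurons 3-5 target 5-9.
adj = np.zeros((6, 10))
adj[:3, :5] = 1.0
adj[3:, 5:] = 1.0
sim = connectivity_similarity(adj)
print(sim[0, 1], sim[0, 4])  # same type ~1.0, different type 0.0
```

In practice, one would cluster such similarity profiles (for example with spectral or hierarchical clustering) to recover candidate cell types, which is the step that connectome cell-typing methods formalize at scale.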
