Listen to the first notes of an old, beloved song. Can you name that tune? If you can, congratulations—it’s a triumph of your associative memory, in which one piece of information (the first few notes) triggers the memory of the entire pattern (the song), without you actually having to hear the rest of the song again. We use this handy neural mechanism to learn, remember, solve problems and generally navigate our reality.

“It’s a network effect,” said UC Santa Barbara mechanical engineering professor Francesco Bullo, explaining that memories aren’t stored in single brain cells. “Memory storage and memory retrieval are dynamic processes that occur over entire networks of neurons.”

In 1982, physicist John Hopfield translated this theoretical neuroscience concept into the artificial intelligence realm with his formulation of the Hopfield network. In doing so, he not only provided a mathematical framework for understanding memory storage and retrieval in the human brain; he also developed one of the first recurrent artificial neural networks, known for its ability to retrieve complete patterns from noisy or incomplete inputs. Hopfield shared the 2024 Nobel Prize in Physics for this work.
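The core mechanism is simple enough to sketch in a few lines. Below is a minimal, illustrative Python toy (not the exact setup of Hopfield’s 1982 paper): patterns of +1/−1 values are stored in a weight matrix via a Hebbian rule, and repeated threshold updates pull a corrupted cue back toward the stored pattern.

```python
import numpy as np

def train(patterns):
    """Hebbian rule: sum of outer products of stored patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Repeated updates: each neuron takes the sign of its weighted input."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Store one 8-neuron pattern, corrupt two entries, then recover it.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1   # flip two "neurons" to simulate a noisy cue
noisy[3] *= -1
restored = recall(W, noisy)
print(np.array_equal(restored, pattern))  # True
```

With a single stored pattern the corrupted cue converges in one update; storing more patterns works the same way, up to the network’s capacity.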

A groundbreaking discovery from a UNSW Sydney mathematician may finally offer a solution to one of algebra’s toughest problems, long thought to be unsolvable: how to solve high-degree polynomial equations algebraically.

I was just thinking about the 1871 book Through the Looking-Glass, and What Alice Found There by mathematician, logician, Anglican deacon, writer, and photographer, the Reverend Charles Lutwidge Dodgson (a.k.a. Lewis Carroll).

Lewis Carroll’s Alice’s Adventures in Wonderland and Through the Looking-Glass continue to influence us today, not just as beloved children’s stories but as enduring works that challenge the boundaries of logic, language, and imagination.

At their heart, both books are filled with logical conundrums, puzzling paradoxes, and playful reasoning, reflecting Dodgson’s background in mathematics and logic. He employed nonsensical situations and absurd dialogues to explore profound ideas about meaning, identity, time, and even mathematics, all disguised within fantastical storytelling.

We humans tend to put our own intelligence on a pedestal. Our brains can do math, employ logic, explore abstractions, and think critically. But we can’t claim a monopoly on thought. Among a variety of nonhuman species known to display intelligent behavior, birds have been shown time and again to have advanced cognitive abilities. Ravens plan for the future, crows count and use tools, cockatoos open and pillage booby-trapped garbage cans, and chickadees keep track of tens of thousands of seeds cached across a landscape. Notably, birds achieve such feats with brains that look completely different from ours: They’re smaller and lack the highly organized structures that scientists associate with mammalian intelligence.

“A bird with a 10-gram brain is doing pretty much the same as a chimp with a 400-gram brain,” said Onur Güntürkün, who studies brain structures at Ruhr University Bochum in Germany. “How is it possible?”

Researchers have long debated the relationship between avian and mammalian intelligence. One possibility is that intelligence in vertebrates—animals with backbones, including mammals and birds—evolved once. In that case, both groups would have inherited the complex neural pathways that support cognition from a common ancestor: a lizardlike creature that lived 320 million years ago, when Earth’s continents were squished into one landmass. The other possibility is that the kinds of neural circuits that support vertebrate intelligence evolved independently in birds and mammals.

Computer simulations help materials scientists and biochemists study the motion of macromolecules, advancing the development of new drugs and sustainable materials. However, these simulations pose a challenge for even the most powerful supercomputers.

A University of Oregon graduate student has developed a new mathematical equation that significantly improves the accuracy of the simplified computer models used to study the motion and behavior of large molecules such as proteins, and synthetic materials such as plastics.

The breakthrough, published last month in Physical Review Letters, enhances researchers’ ability to investigate the motion of large molecules in complex biological processes, such as DNA replication. It could aid in understanding diseases linked to errors in such replication, potentially leading to new diagnostic and therapeutic strategies.

For years, quantum computing has been the tech world’s version of “almost there”. But now, engineers at MIT have pulled off something that might change the game. They’ve made a critical leap in quantum error correction, bringing us one step closer to reliable, real-world quantum computers.

In a traditional computer, everything runs on bits: zeroes and ones that flip on and off like tiny digital switches. Quantum computers, on the other hand, use qubits. These are bizarre little things that can be both 0 and 1 at the same time, thanks to a quantum property called superposition. They are also capable of entanglement, meaning the measured states of two qubits can be correlated, even at a distance.

All this weirdness gives quantum computers enormous potential power. They could solve problems in seconds that might take today’s fastest supercomputers years. Think of it like having thousands of parallel universes doing your math homework at once. But there’s a catch: qubits are fragile, and errors creep in quickly, which is why error correction matters so much.
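Superposition and entanglement can be made concrete with a tiny state-vector calculation (an assumed toy example in Python, unrelated to the MIT hardware work): a qubit is a two-component complex vector, and the squared amplitudes give measurement probabilities.

```python
import numpy as np

# Single-qubit basis states as two-component complex vectors.
ket0 = np.array([1, 0], dtype=complex)  # |0>
ket1 = np.array([0, 1], dtype=complex)  # |1>

# A Hadamard gate turns |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0
print(np.abs(plus) ** 2)  # [0.5 0.5]: measuring gives 0 or 1 with equal odds

# A two-qubit Bell state: amplitude only on |00> and |11>, so the two
# qubits' measurement outcomes are perfectly correlated.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.abs(bell) ** 2)  # [0.5 0.  0.  0.5]
```

The Bell state is the simplest illustration of entanglement: neither qubit has a definite value on its own, yet measuring one fixes the outcome of the other.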

While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.

MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.

Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub. The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.

A team of researchers at Nagoya University has discovered something surprising. If you have two tiny vibrating elements, each one barely moving on its own, and you combine them in the right way, their combined vibration can be amplified dramatically—up to 100 million times.

The paper is published in Chaos: An Interdisciplinary Journal of Nonlinear Science.

Their findings suggest that by relying on structural amplification rather than raw power, even small, simple devices could transmit clear signals over long distances, potentially transforming long-distance communications and remote medical devices.

Hardships in childhood could have lasting effects on the brain, new research shows, with adverse events such as family conflict and poverty potentially affecting cognitive function in kids for several years afterwards.

This study, led by a team from Brigham and Women’s Hospital in Massachusetts, looked specifically at white matter: the deeper tissue in the brain, made up of communication fibers ferrying information between neurons.

“We found that a range of adversities is associated with lower levels of fractional anisotropy (FA), a measure of white matter microstructure, throughout the whole brain, and that this is associated with lower performance on mathematics and language tasks later on,” write the researchers in their published paper.