
As humans and other animals navigate their surroundings and experience different things, their brains create so-called cognitive maps, which are internal representations of environments or tasks. These mental maps are eventually generalized into schemas, frameworks that organize information acquired through experience and can later guide decision-making.

Various past neuroscience and psychology studies have tried to better understand the neural processes and brain regions that support the formation of these internal representations. Insight into these mechanisms could, in turn, shed light on the underpinnings of learning and decision-making.

Two brain regions that have been found to play a role in forming internal representations of experiences are the orbitofrontal cortex (OFC) and the hippocampus (HC). Among other functions, the OFC supports reward-based learning and decision-making, while the HC contributes to spatial navigation and the formation and retrieval of memories.

In a network, pairs of individual elements, or nodes, connect to each other; those connections can represent a sprawling system with myriad individual links. A hypergraph goes deeper: It gives researchers a way to model complex, dynamical systems where interactions among three or more individuals—or even among groups of individuals—may play an important part.

Instead of edges that connect pairs of nodes, it is based on hyperedges that connect groups of nodes. Hypergraphs can thus capture higher-order interactions that underlie collective behaviors like swarming in fish, birds, or bees, or processes in the brain.
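The distinction is easy to see in code. Below is a minimal, illustrative Python sketch (the class and method names are ours, not from any particular library) showing how a hyperedge keeps a three-way interaction intact, while projecting it down to pairwise edges loses that group structure:

```python
# Minimal sketch: a graph stores pairwise edges, a hypergraph stores
# hyperedges that may join any number of nodes at once.
from itertools import combinations

class Hypergraph:
    def __init__(self):
        self.nodes = set()
        self.hyperedges = []  # each hyperedge is a frozenset of nodes

    def add_hyperedge(self, *nodes):
        edge = frozenset(nodes)
        self.nodes |= edge
        self.hyperedges.append(edge)

    def pairwise_projection(self):
        """Collapse each hyperedge into ordinary pairwise edges,
        discarding the group-level (higher-order) information."""
        return {frozenset(pair)
                for edge in self.hyperedges
                for pair in combinations(edge, 2)}

# A three-way interaction, e.g. three fish coordinating in a swarm:
hg = Hypergraph()
hg.add_hyperedge("a", "b", "c")   # one hyperedge joining three nodes
print(hg.hyperedges)              # [frozenset({'a', 'b', 'c'})]
print(hg.pairwise_projection())   # three pairwise edges; the triple is lost
```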

Scientists usually use a hypergraph to predict dynamic behaviors. But the opposite problem is interesting, too. What if researchers can observe the dynamics but don’t have access to a reliable model? Yuanzhao Zhang, an SFI Complexity Postdoctoral Fellow, has an answer.

Scientists at the University of California, Berkeley, and Boise State University have found evidence suggesting that the Marinoan glaciation began approximately 639 million years ago and lasted roughly 4 million years. In their study published in the Proceedings of the National Academy of Sciences, the group used drone and field imagery along with isotopic dating of glacial deposits to learn more about global glaciation events during the Neoproterozoic Era.

Prior research has shown that early in the planet's history, during the Neoproterozoic Era, Earth underwent two ice ages. The first, known as the Sturtian glaciation, lasted approximately 56 million years and covered the entire planet with ice. Less is known about the second event, called the Marinoan glaciation. In this new effort, the research team set themselves the task of figuring out when it began and how long it lasted.

The work involved sending drones over a part of Namibia where prior research has uncovered evidence of glacial activity during the Marinoan. This allowed the team to map glacial deposits that were stacked up in a way that showed little vertical shift had occurred, which meant the glaciers did not move much during the time they were there. Additional field imagery helped confirm what the team found in the drone images.

Network alignment, a fundamental problem in several domains, aims at mapping nodes across networks. Here, the authors develop a probabilistic approach that assumes the observed networks are errorful copies of a blueprint. The method samples the distribution of alignments, improving accuracy and enabling potential applications.
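The abstract does not spell out the algorithm, but the idea of sampling alignments between errorful copies of a blueprint can be illustrated with a toy Metropolis sampler. Everything below (the function names, the edge-agreement score, and the assumption that both networks share one node set) is a simplification for illustration, not the authors' method:

```python
# Toy sketch: treat two observed networks as noisy copies of a shared
# blueprint and sample node alignments with Metropolis swap moves,
# scoring each candidate mapping by how many edges it makes agree.
import math
import random

def edge_agreement(A, B, mapping):
    """Count edges of A whose endpoint images under `mapping` are edges of B."""
    return sum(
        1 for (u, v) in A
        if (mapping[u], mapping[v]) in B or (mapping[v], mapping[u]) in B
    )

def sample_alignments(A, B, nodes, steps=2000, beta=1.0, seed=0):
    rng = random.Random(seed)
    mapping = {u: u for u in nodes}              # start from the identity map
    score = edge_agreement(A, B, mapping)
    samples = []
    for _ in range(steps):
        u, v = rng.sample(nodes, 2)              # propose swapping two images
        mapping[u], mapping[v] = mapping[v], mapping[u]
        new = edge_agreement(A, B, mapping)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if new >= score or rng.random() < math.exp(beta * (new - score)):
            score = new
        else:
            mapping[u], mapping[v] = mapping[v], mapping[u]  # revert
        samples.append(dict(mapping))
    return samples   # an empirical distribution over alignments

nodes = list("abcd")
A = {("a", "b"), ("b", "c"), ("c", "d")}         # noisy copy 1 of a blueprint
B = {("a", "b"), ("b", "c"), ("b", "d")}         # noisy copy 2
posterior = sample_alignments(A, B, nodes)
```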

A new AI robot called π-0.5 uses 100 decentralized brains, known as π-nodes, to control its body with lightning-fast reflexes and smart, local decision-making. Instead of relying on a central processor or internet connection, each part of the robot—like fingers, joints, and muscles—can sense, think, and act independently in real time. Driven by a powerful vision-language-action model and trained on massive, diverse data, this smart-muscle system allows the robot to understand and complete real-world tasks in homes, even ones it has never seen before.
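The video does not show the robot's actual software, but the decentralized idea it describes (local sense-decide-act loops that do not wait on a central processor) can be sketched as follows. All names and structure here are illustrative assumptions, not the real system:

```python
# Purely illustrative sketch of the decentralized idea described above:
# each node runs its own sense -> decide -> act loop on local state,
# while a slower planner only broadcasts high-level goals.
from dataclasses import dataclass

@dataclass
class PiNode:
    name: str
    goal: str = "idle"        # last high-level goal broadcast by the planner
    reading: float = 0.0      # latest local sensor value

    def sense(self, value: float):
        self.reading = value

    def act(self) -> str:
        # Local reflex: react to the sensor immediately, with no round
        # trip to a central processor.
        if self.reading > 0.8:
            return f"{self.name}: back off (reflex)"
        return f"{self.name}: continue '{self.goal}'"

nodes = [PiNode(f"joint-{i}") for i in range(3)]
for node in nodes:
    node.goal = "grasp cup"    # slow, high-level plan shared with all nodes
nodes[1].sense(0.9)            # one joint feels unexpected resistance
print([node.act() for node in nodes])
```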

Join our free AI content course here 👉 https://www.skool.com/ai-content-acce…

Get the best AI news without the noise 👉 https://airevolutionx.beehiiv.com/

🔍 What’s Inside:
• A groundbreaking AI robot called π‑0.5 powered by 100 decentralized “π-nodes” embedded across its body.
• Each node acts as a mini-brain, sensing, deciding, and adjusting without needing Wi-Fi or a central processor.
• A powerful vision-language-action model lets the robot understand messy homes and complete complex tasks without pre-mapping.

🎥 What You’ll See:
• How π‑0.5 combines local reflexes with high-level planning to react in real time.
• The unique training process using over 400 hours of diverse, real-world data from homes, mobile robots, and human coaching.
• Real-world tests where the robot cleans, organizes, and adapts to brand-new spaces with near-human fluency.

📊 Why It Matters:
This new system redefines robot intelligence by merging biologically inspired reflexes with advanced AI planning. It's a major step toward robots that can handle unpredictable environments, learn on the fly, and function naturally in everyday life, without relying on cloud servers or rigid programming.

The study is "really pretty remarkable," Christopher Whyte at the University of Sydney, who was not involved in the work, told Nature. One of the first to simultaneously record activity in both deep and surface brain regions in humans, it reveals how signals travel across the brain to support consciousness.

Consciousness has teased the minds of philosophers and scientists for centuries. Thanks to modern brain mapping technologies, researchers are beginning to hunt down its neural underpinnings.

At least half a dozen theories now exist, two of which are going head-to-head in a global research effort using standardized tests to probe how awareness emerges in the human brain. The results, alongside other work, could eventually help build a unified theory of consciousness.

Using the Australian Square Kilometer Array Pathfinder (ASKAP), astronomers have discovered 15 new giant radio galaxies with physical sizes exceeding 3 million light years. The finding was reported in a research paper published April 9 on the arXiv preprint server.

The so-called giant radio galaxies (GRGs) have an overall projected linear size exceeding 2.3 million light years. They are rare objects, usually found in low-density environments, and display jets and lobes of synchrotron-emitting plasma. GRGs are important for studying the formation and evolution of radio sources.

ASKAP is a 36-dish radio interferometer operating at 700 to 1,800 MHz. It uses phased-array feed receivers to achieve extremely high survey speed, making it one of the best instruments in the world for mapping the sky at radio wavelengths. Due to its large field of view, high resolution, and good sensitivity to low-surface-brightness structures, ASKAP has been essential in the search for new GRGs.

A new brain-inspired AI model called TopoLM learns language by organizing neurons into clusters, just like the human brain. Developed by researchers at EPFL, this topographic language model shows clear patterns for verbs, nouns, and syntax using a simple spatial rule that mimics real cortical maps. TopoLM not only matches real brain scans but also opens new possibilities in AI interpretability, neuromorphic hardware, and language processing.
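As a rough illustration of the core idea, here is one plausible form of a spatial smoothness rule: hidden units are laid out on a 2D grid, and a penalty discourages neighboring units from responding differently. The exact loss used by TopoLM may differ; this squared-difference version, and every name below, is an assumption for illustration:

```python
# Hedged sketch: place hidden units on a 2D grid and add a spatial-
# smoothness penalty so neighboring units learn to respond alike.
import numpy as np

def smoothness_penalty(activations, grid_shape):
    """activations: (batch, units); units are laid out on a 2D grid."""
    batch, units = activations.shape
    h, w = grid_shape
    assert units == h * w
    grid = activations.reshape(batch, h, w)
    # Penalize disagreement between horizontally and vertically adjacent units.
    dx = grid[:, :, 1:] - grid[:, :, :-1]
    dy = grid[:, 1:, :] - grid[:, :-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean()

rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 64))      # a batch of activations on an 8x8 grid
task_loss = 0.0                      # stand-in for the usual language-model loss
total = task_loss + 0.1 * smoothness_penalty(acts, (8, 8))
print(total)
```

Training against such a combined loss is what would push units with similar roles (say, verb-sensitive units) to end up near each other on the grid.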

Join our free AI content course here 👉 https://www.skool.com/ai-content-acce…

Get the best AI news without the noise 👉 https://airevolutionx.beehiiv.com/

🔍 What’s Inside:
• A brain-inspired AI model called TopoLM that learns language by building its own cortical map.
• Neurons are arranged on a 2D grid where nearby units behave alike, mimicking how the human brain clusters meaning.
• A simple spatial smoothness rule lets TopoLM self-organize concepts like verbs and nouns into distinct brain-like regions.

🎥 What You’ll See:
• How TopoLM mirrors patterns seen in fMRI brain scans during language tasks.
• A comparison with regular transformers, showing how TopoLM brings structure and interpretability to AI.
• Real test results proving that TopoLM reacts to syntax, meaning, and sentence structure just like a biological brain.

📊 Why It Matters:
This new system bridges neuroscience and machine learning, offering a powerful step toward AI that thinks like us. It unlocks better interpretability, opens paths for neuromorphic hardware, and reveals how one simple principle might explain how the brain learns across all domains.