
Shortform link:
https://shortform.com/artem.

My name is Artem, I’m a computational neuroscience student and researcher.

In this video we will talk about the fundamental role of the lognormal distribution in neuroscience. First, we will derive it from the Central Limit Theorem, and then explore how it supports brain operations at many scales, from cells to perception.
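The derivation sketched above can be illustrated numerically: a quantity built as a product of many independent positive factors has a logarithm that is a sum, so by the Central Limit Theorem the log is approximately normal and the quantity itself is approximately lognormal. The factor distribution and sample sizes below are arbitrary choices for the demo, not values from the video or references:

```python
import numpy as np

rng = np.random.default_rng(0)

# Product of many independent positive factors (e.g. multiplicative growth
# steps). Taking the log turns the product into a sum, so the Central Limit
# Theorem makes log(X) approximately normal, i.e. X approximately lognormal.
n_factors = 200
n_samples = 50_000
factors = rng.uniform(0.9, 1.1, size=(n_samples, n_factors))
x = factors.prod(axis=1)
log_x = np.log(x)

def skewness(a):
    """Sample skewness: third central moment over cubed standard deviation."""
    a = np.asarray(a)
    return ((a - a.mean()) ** 3).mean() / a.std() ** 3

print(round(skewness(log_x), 2))  # close to 0: log of the product is ~normal
print(round(skewness(x), 2))      # clearly positive: X itself is right-skewed
```

The same multiplicative argument is what Loewenstein et al. (reference 3) invoke for spine sizes: growth proportional to current size yields lognormal distributions.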

REFERENCES:

1. Buzsáki, G. & Mizuseki, K. The log-dynamic brain: how skewed distributions affect network operations. Nat Rev Neurosci 15, 264–278 (2014).
2. Ikegaya, Y. et al. Interpyramid Spike Transmission Stabilizes the Sparseness of Recurrent Network Activity. Cerebral Cortex 23, 293–304 (2013).
3. Loewenstein, Y., Kuras, A. & Rumpel, S. Multiplicative Dynamics Underlie the Emergence of the Log-Normal Distribution of Spine Sizes in the Neocortex In Vivo. Journal of Neuroscience 31, 9481–9488 (2011).
4. Morales-Gregorio, A., van Meegen, A. & van Albada, S. J. Ubiquitous lognormal distribution of neuron densities across mammalian cerebral cortex. http://biorxiv.org/lookup/doi/10.1101/2022.03.17.480842 (2022) doi:10.1101/2022.03.17.480842.

OUTLINE:

In vitro biological neural networks (BNNs) interconnected with robots, so-called BNN-based neurorobotic systems, can interact with the external world and exhibit preliminary intelligent behaviors, including learning, memory, and robot control.

This work aims to provide a comprehensive overview of the intelligent behaviors presented by the BNN-based neurorobotic systems, with a particular focus on those related to robot intelligence.

In this work, we first introduce the necessary biological background to understand two characteristics of the BNNs: nonlinear computing capacity and network plasticity. Then, we describe the typical architecture of BNN-based neurorobotic systems and outline the mainstream techniques to realize such an architecture from two aspects: from robots to BNNs and from BNNs to robots.
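The two directions described above form a closed loop: robot sensor data is encoded as stimulation delivered to the BNN, and recorded neural activity is decoded back into motor commands. A minimal sketch of that loop follows; every function name, electrode count, and number here is an illustrative stand-in, not an actual lab API or protocol from the paper:

```python
from typing import List

def encode_to_stimulation(sensor_reading: float) -> List[int]:
    """Robot -> BNN: map a sensor value to per-electrode stimulation rates.

    Toy rate coding: higher obstacle proximity yields a higher rate.
    """
    rate = int(min(max(sensor_reading, 0.0), 1.0) * 40)  # clamp, scale to 0-40 Hz
    return [rate] * 4  # same rate on 4 hypothetical electrodes

def decode_to_command(spike_counts: List[int]) -> str:
    """BNN -> robot: turn recorded spike counts into a motor command.

    Toy decoder: steer away from the side with more recorded activity.
    """
    left, right = sum(spike_counts[:2]), sum(spike_counts[2:])
    if left > right:
        return "turn_right"
    if right > left:
        return "turn_left"
    return "forward"

# One pass around the loop with made-up numbers.
stim = encode_to_stimulation(0.8)
command = decode_to_command([12, 9, 3, 4])
print(stim, command)
```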

Researchers have developed a new model inspired by recent biological discoveries that shows enhanced memory performance. This was achieved by modifying a classical neural network.

Computer models play a crucial role in investigating the brain’s process of making and retaining memories and other intricate information. However, constructing such models is a delicate task. The intricate interplay of electrical and biochemical signals, as well as the web of connections between neurons and other cell types, creates the infrastructure for memories to be formed. Despite this, encoding the complex biology of the brain into a computer model for further study has proven to be a difficult task due to the limited understanding of the underlying biology of the brain.

Researchers at the Okinawa Institute of Science and Technology (OIST) have made improvements to a widely utilized computer model of memory, known as a Hopfield network, by incorporating insights from biology. The alteration has resulted in a network that not only better mirrors the way neurons and other cells are connected in the brain, but also has the capacity to store significantly more memories.
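As background for what a Hopfield network is, here is a minimal classical version: binary patterns stored with a Hebbian rule and recalled by iterative updates. This is a generic textbook sketch, not the OIST modification, and the network size, pattern count, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Store a few random ±1 patterns in a classical Hopfield network.
n_units = 100
patterns = rng.choice([-1, 1], size=(3, n_units))

# Hebbian learning rule: sum of outer products, zero self-connections.
W = patterns.T @ patterns / n_units
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Synchronously update units until the state settles into an attractor."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break ties deterministically
        if np.array_equal(new, state):
            break
        state = new
    return state

# Corrupt a stored pattern by flipping 10 of its 100 units, then recover it.
noisy = patterns[0].copy()
flip = rng.choice(n_units, size=10, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)
print((recovered == patterns[0]).mean())  # fraction of units recovered
```

The classical network's capacity is roughly 0.14 patterns per unit; the OIST result described above concerns raising that kind of limit by changing how units are connected.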

The relationship between the arrangement of water molecules incorporated into layered materials such as clays and the arrangement of ions within these materials has been difficult to study experimentally.

However, researchers have now succeeded in observing these interactions for the first time by utilizing a technique commonly used for measuring extremely small masses and molecular interactions at the nanoscale.

The nanoscale refers to a length scale that is extremely small, typically on the order of nanometers (nm), which is one billionth of a meter. At this scale, materials and systems exhibit unique properties and behaviors that are different from those observed at larger length scales. The prefix “nano-” is derived from the Greek word “nanos,” which means “dwarf” or “very small.” Nanoscale phenomena are relevant to many fields, including materials science, chemistry, biology, and physics.

Regeneration, Resuscitation & Biothreat Countermeasures — Commander Dr. Jean-Paul Chretien, MD, Ph.D., Program Manager, Biological Technology Office, DARPA


Commander Dr. Jean-Paul Chretien, MD, Ph.D. (https://www.darpa.mil/staff/cdr-jean-paul-chretien) is a Program Manager in the Biological Technology Office at DARPA, where his research interests include disease and injury prevention, operational medicine, and biothreat countermeasures. He is also responsible for running the DARPA Triage Challenge (https://triagechallenge.darpa.mil/).

Prior to coming to DARPA, CDR Dr. Chretien led the Pandemic Warning Team at the Defense Intelligence Agency’s National Center for Medical Intelligence. As a naval medical officer, his previous assignments include senior policy advisor for biodefense in the White House Office of Science and Technology Policy; team lead for Innovation & Evaluation at the Armed Forces Health Surveillance Branch; and director of force health protection for U.S. and NATO forces in southwestern Afghanistan.

A proud mentor to nine graduate students and Oak Ridge Institute for Science and Education (ORISE) fellows, CDR Dr. Chretien received the Rising Star Award from the American College of Preventive Medicine, Best Publication of the Year Award from the International Society for Disease Surveillance, and Skelton Award for Public Service from the Harry S. Truman Scholarship Foundation. He has published over 50 peer-reviewed journal articles and 10 book chapters.

CDR Dr. Chretien earned a Bachelor of Science degree in political science from the United States Naval Academy, Master of Health Science in biostatistics and Doctor of Philosophy in genetic epidemiology degrees from the Johns Hopkins Bloomberg School of Public Health, and a Doctor of Medicine degree from the Johns Hopkins University School of Medicine. He completed his residency in general preventive medicine at the Walter Reed Army Institute of Research and fellowship in health sciences informatics at the Johns Hopkins University School of Medicine.

Here’s another thing I have changed my mind on. Well, sort of. I used to make fun of “vitalism” and trade insults with my favorite archenemy Dale Carrico. Now I must repent or at least add important qualifications.

Vitalism is currently defined by Wikipedia as “the belief that living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things.”

If we eliminate a few words from this definition we are left with a statement that I don’t disagree with:


Summary: Bumblebees are able to learn to solve puzzles by watching more experienced bees complete a task. This new behavioral preference then spreads throughout the entire colony. The bees that learned from others became more adept and began to prefer the learned solution over alternatives.

Source: PLOS

Bumblebees learn to solve a puzzle by watching more experienced bees, and this behavioral preference then spreads through the colony, according to a study published March 7th in the open access journal PLOS Biology by Alice Dorothy Bridges and colleagues at Queen Mary University of London, UK.