Understanding synapse loss in Alzheimer’s disease has been hampered by a lack of human model systems. Here, the authors show that manipulation of physiological or pathological Aβ has differing effects on synapses in live human brain slice cultures.

The ability to quickly recognize sounds, particularly the vocalizations made by other animals, is known to contribute to the survival of a wide range of species. This ability is supported by a process known as categorical perception, which entails the transformation of continuous auditory input (e.g., gradual changes in pitch or tone) into distinct categories (i.e., vocalizations that mean something specific).

Various past studies have tried to shed light on the neural underpinnings of categorical perception and the categorization of vocalizations. While they broadly identified some brain regions that could play a part in these abilities, the precise processes through which animals categorize their peers’ vocalizations have not yet been fully elucidated.

Researchers at Johns Hopkins University recently carried out a study investigating how vocalizations are represented in the brains of big brown bats (Eptesicus fuscus). Their findings, published in Nature Neuroscience, suggest that the categories of vocalizations are encoded in the bat midbrain.

In a comprehensive Genomic Press perspective article published today, researchers from Fudan University and Shanghai University of Traditional Chinese Medicine have highlighted remarkable advances in the development of positron emission tomography (PET) tracers capable of visualizing α-synuclein aggregates in the brains of patients with Parkinson’s disease and related disorders.

The abnormal accumulation of α-synuclein protein is a defining pathological feature of several neurodegenerative conditions collectively known as synucleinopathies, including Parkinson’s disease (PD), multiple system atrophy (MSA), and dementia with Lewy bodies (DLB). Until recently, confirming the presence of these protein aggregates required post-mortem examination, severely limiting early diagnosis and treatment monitoring capabilities.

“The ability to visualize these protein aggregates in living patients represents a significant leap forward in neurodegenerative disease research,” explains Dr. Fang Xie, corresponding author and researcher at the Department of Nuclear Medicine & PET Center at Huashan Hospital, Fudan University.

Researchers at the University Health Network (UHN) and the University of Toronto have developed a skin-based test that can detect signature features of progressive supranuclear palsy (PSP), a rare neurodegenerative disease that affects body movements, including walking, balance and swallowing.

The test, which the researchers describe in a recent issue of JAMA Neurology, could allow for more accurate and faster PSP diagnosis than current methods.

“This is important for assigning patients to the correct clinical trials, but it will be even more important in the future as researchers develop targeted, precision treatments for PSP,” says Ivan Martinez-Valbuena, a scientific associate at the Rossy Progressive Supranuclear Palsy Centre at the UHN’s Krembil Brain Institute and U of T’s Tanz Centre for Research in Neurodegenerative Diseases.

The system was trained to decode words and turn them into speech in increments of 80 milliseconds (0.08 seconds). For comparison, natural speech runs at roughly two to three words per second, or around 130–180 words per minute. The system then delivered audible words using the woman’s voice, which was captured from a recording made before the stroke.

The system was able to decode the full vocabulary set at a rate of 47.5 words per minute. It could decode a simpler set of 50 words even more rapidly, at 90.9 words per minute. That’s much faster than an earlier device the researchers had developed, which decoded about 15 words per minute with a 50-word vocabulary. The new device had a more than 99% success rate in decoding and synthesizing speech in less than 80 milliseconds. It took less than a quarter of a second to translate speech-related brain activity into audible speech.
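The streaming design described above can be illustrated with a small sketch: a decoder consumes neural data in fixed 80 ms windows and emits words as soon as they are available, rather than waiting for a full sentence. The function names and the toy decoder here are hypothetical stand-ins for illustration, not the actual system’s interface.

```python
WINDOW_MS = 80  # decoding increment reported for the device


def words_per_minute(n_words: int, elapsed_ms: float) -> float:
    """Convert a word count over an elapsed time into words per minute."""
    return n_words / (elapsed_ms / 60_000.0)


def stream_decode(neural_windows, decode_window):
    """Feed successive 80 ms windows of (simulated) neural data to a
    decoder, yielding each word as soon as it becomes available."""
    for window in neural_windows:
        word = decode_window(window)  # returns None while mid-word
        if word is not None:
            yield word


# Toy demo: a fake decoder that completes one word every 10th window,
# i.e. one word per 800 ms of input.
fake_decoder = lambda i: f"word{i}" if i % 10 == 9 else None

decoded = list(stream_decode(range(100), fake_decoder))
elapsed_ms = 100 * WINDOW_MS  # 100 windows x 80 ms = 8000 ms
rate = words_per_minute(len(decoded), elapsed_ms)
print(len(decoded), round(rate, 1))  # 10 words over 8 s -> 75.0 wpm
```

The point of the incremental loop is latency: because output is produced window by window, the listener starts hearing speech within a fraction of a second of the brain activity, instead of after a whole utterance has been decoded.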

The researchers found that the system wasn’t limited to trained words or sentences. It could make out novel words and decode new sentences to produce fluent speech. The device could also produce speech indefinitely without interruption.