Archive for the ‘biological’ category

Mar 24, 2024

New technique converts excess renewable energy to natural gas

Posted by in categories: biological, chemistry, sustainability

Four Lawrence Livermore National Laboratory (LLNL) researchers have partnered with Los Angeles-based SoCalGas and Munich, Germany-based Electrochaea to develop an electrobioreactor to allow excess renewable electricity from wind and solar sources to be stored in chemical bonds as renewable natural gas.

When renewable electricity supply exceeds demand, electric-utility operators intentionally curtail production of renewable electricity to avoid overloading the grid. In 2020, in California, more than 1.5 million megawatt hours of renewable electricity were curtailed, enough to power more than 100,000 households for a full year.

Curtailment occurs in other countries as well. The team’s electrobioreactor uses the curtailed renewable electricity to split water into hydrogen and oxygen. Microbes in the reactor then use the hydrogen to convert carbon dioxide into methane, the main component of natural gas. The methane can be moved through existing natural gas pipelines and stored indefinitely, allowing the renewable energy to be recovered when it is most needed.
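As a back-of-envelope illustration of the energy chain described above, here is a short sketch in Python. The conversion efficiencies are generic literature-style assumptions, not figures from LLNL, SoCalGas, or Electrochaea:

```python
# Rough power-to-gas energy bookkeeping (illustrative numbers only).
# Efficiencies below are assumed round figures, not project data.

CURTAILED_MWH = 1_500_000   # curtailed California renewables in 2020 (from the article)
ELECTROLYSIS_EFF = 0.70     # assumed: electricity -> H2 (lower-heating-value basis)
METHANATION_EFF = 0.78      # assumed: H2 -> CH4 energy retention (CO2 + 4 H2 -> CH4 + 2 H2O)

h2_energy_mwh = CURTAILED_MWH * ELECTROLYSIS_EFF
ch4_energy_mwh = h2_energy_mwh * METHANATION_EFF

# The article implies ~15 MWh per household per year (1.5M MWh / 100k homes)
HOUSEHOLD_MWH_PER_YEAR = CURTAILED_MWH / 100_000
households = ch4_energy_mwh / HOUSEHOLD_MWH_PER_YEAR

print(f"H2 energy after electrolysis: {h2_energy_mwh:,.0f} MWh")
print(f"CH4 energy after methanation: {ch4_energy_mwh:,.0f} MWh")
print(f"Household-years recoverable:  {households:,.0f}")
```

Even with round-trip losses at each step, a large fraction of otherwise-wasted electricity ends up as storable chemical energy.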

Mar 24, 2024

Emerging Artificial Neuron Devices for Probabilistic Computing

Posted by in categories: biological, finance, information science, robotics/AI

Probabilistic computing with stochastic devices.

In recent decades, artificial intelligence has been successfully employed in finance, commerce, and other industries. However, imitating high-level brain functions such as imagination and inference poses several challenges, because these functions depend on a particular type of noise present in biological neuron networks. Probabilistic computing algorithms based on restricted Boltzmann machines and Bayesian inference, implemented in silicon electronics, have progressed significantly in mimicking probabilistic inference. However, the quasi-random noise generated by additional circuits or algorithms remains a major obstacle for silicon electronics in realizing the true stochasticity of biological neuron systems. Artificial neurons based on emerging devices, such as memristors and ferroelectric field-effect transistors with inherent stochasticity, can produce uncertain non-linear output spikes, which may be the key to bringing machine learning closer to the human brain. In this article, we present a comprehensive review of recent advances in emerging stochastic artificial neurons (SANs) for probabilistic computing. We briefly introduce biological neurons, neuron models, and silicon neurons before presenting the detailed working mechanisms of various SANs. Finally, the merits and demerits of silicon-based and emerging neurons are discussed, and an outlook for SANs is presented.

Keywords: brain-inspired computing, artificial neurons, stochastic neurons, memristive devices, stochastic electronics.
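The sampling primitive the abstract describes can be sketched in a few lines: a stochastic neuron emits a Bernoulli spike with sigmoid probability, which is exactly the unit a restricted Boltzmann machine samples with. This toy deliberately uses a software PRNG, the very workaround the review says emerging devices avoid; the weights are arbitrary:

```python
import math, random

def stochastic_neuron(weighted_input, rng=random.random):
    # Bernoulli "spike" with probability sigmoid(input): the sampling
    # primitive of restricted Boltzmann machines. Here the noise comes from
    # a software PRNG; memristive/FeFET neurons would supply it intrinsically.
    p = 1.0 / (1.0 + math.exp(-weighted_input))
    return 1 if rng() < p else 0

# One Gibbs half-step of a tiny RBM: sample the hidden units given the visible.
random.seed(0)
W = [[0.5, -0.3], [0.2, 0.8], [-0.6, 0.1]]  # toy 3-visible x 2-hidden weights
b_h = [0.0, 0.1]                            # hidden biases
v = [1, 0, 1]                               # current visible state

h = [stochastic_neuron(sum(v[i] * W[i][j] for i in range(3)) + b_h[j])
     for j in range(2)]
print("sampled hidden state:", h)
```

In hardware SANs the `rng()` call disappears: device-level stochasticity makes each spike intrinsically probabilistic rather than pseudo-random.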

Continue reading “Emerging Artificial Neuron Devices for Probabilistic Computing” »

Mar 24, 2024

Neuromorphic Chips: The Next Big Thing in Deep Tech

Posted by in categories: biological, robotics/AI

Neuromorphic computing is an emerging solution for companies that build small, energy-efficient edge computing devices and robots and are striving to improve their products. The advent of neuromorphic chips marks a shift in computing: with the potential to unlock new levels of processing speed, energy efficiency, and adaptability, neuromorphic chips are here to stay. Industries from robotics to healthcare are exploring their potential across a range of applications.

What is Neuromorphic Computing?

Neuromorphic computing is a field within computer science and engineering that draws inspiration from the structure and operation of the human brain. Its goal is to create computational systems, including custom hardware, that replicate the neural networks and synapses of biological brains. These custom computational systems are commonly known as neuromorphic chips or neuromorphic hardware.
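A minimal sketch of the kind of unit most neuromorphic chips implement, a leaky integrate-and-fire neuron, is below; the parameter values are illustrative assumptions, not those of any particular chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic building block
# of most neuromorphic hardware. Parameters are illustrative.

def lif_run(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Integrate input current with a leak; emit a spike when the membrane
    potential crosses threshold, then reset. Output is event-driven: only
    spike times are reported, which is where the energy savings come from."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # leaky integration
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant drive produces a regular spike train
spike_times = lif_run([0.3] * 20)
print(spike_times)
```

Because the neuron is silent between spikes, a chip built from such units only consumes energy when events occur, unlike a clocked von Neumann processor.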

Mar 23, 2024

Changes in Protein Folding Can Drive Evolution

Posted by in categories: biological, evolution

In cells, like the snowflake yeast in this image by Tony Burnetti, proteins are translated and folded into very specific, three-dimensional shapes. | Cell And Molecular Biology.

Mar 21, 2024

Co-dependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks

Posted by in categories: biological, neuroscience

How do multiple synapses interact to modulate learning? Agnes and Vogels postulate models of ‘co-dependent’ synaptic plasticity that promote rapid, multi-synaptic attainment of stable receptive fields, dendritic patterns and plausible neural dynamics.
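As a hedged illustration only (this is a toy rule, not the Agnes and Vogels model), "co-dependence" can be sketched as an inhibitory weight update that also reads the momentary excitatory current, so inhibition rapidly tracks excitation toward a stable target drive:

```python
# Toy sketch of "co-dependent" plasticity (an assumption-laden caricature,
# NOT the published model): the inhibitory update sees the excitatory
# current, so inhibition tracks excitation and the net drive stabilizes.

ETA = 0.05      # learning rate (illustrative)
TARGET = 1.0    # desired net excitatory-minus-inhibitory drive (illustrative)

w_exc, w_inh = 2.0, 0.2   # excitatory weight fixed; inhibitory weight plastic
x_e, x_i = 1.0, 1.0       # constant presynaptic rates for the sketch

for _ in range(200):
    e_curr = w_exc * x_e    # excitatory current
    i_curr = w_inh * x_i    # inhibitory current
    # co-dependence: the inhibitory update depends on both currents,
    # not just on postsynaptic activity
    w_inh += ETA * x_i * (e_curr - i_curr - TARGET)

net = w_exc * x_e - w_inh * x_i
print(round(net, 3))   # settles near the target drive
```

The point of the sketch is the stability argument: because inhibition reads excitation directly, the balance is restored quickly after any excitatory change, which is one way fast-yet-stable learning can coexist.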

Mar 21, 2024

Bioengineering edible mycelium to enhance nutritional value, color, and flavor

Posted by in categories: bioengineering, biological

In a recent study published in Nature Communications, researchers developed a modular synthetic biology toolkit for Aspergillus oryzae, an edible fungus used in fermented foods, protein production, and meat alternatives.

Study: Edible mycelium bioengineered for enhanced nutritional value and sensory appeal using a modular synthetic biology toolkit. Image Credit: Rattiya Thongdumhyu/

Mar 20, 2024

19.5y Younger Biological Age: Supplements, Diet (Test #2 in 2024)

Posted by in categories: biological, genetics, life extension

Join us on Patreon! Links: Epigenetic, Telomere Testing:

Mar 19, 2024

New study uncovers how hydrogen provided energy at life’s origin

Posted by in categories: biological, chemistry, sustainability

Hydrogen gas is a clean fuel. It burns with oxygen in the air to provide energy with no CO2. Hydrogen is a key to sustainable energy for the future. Though humans are just now coming to realize the benefits of hydrogen gas (H2 in chemical shorthand), microbes have known that H2 is a good fuel for as long as there has been life on Earth. Hydrogen is ancient energy.
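To put a number on "hydrogen is a good fuel", here is a quick back-of-envelope from the combustion reaction 2 H2 + O2 → 2 H2O, using standard textbook enthalpy values rather than figures from the article:

```python
# Back-of-envelope energy content of hydrogen from 2 H2 + O2 -> 2 H2O,
# using standard textbook values (assumptions, not from the article).

LHV_H2_KJ_PER_MOL = 242.0   # lower heating value of H2 combustion, kJ/mol
M_H2_G_PER_MOL = 2.016      # molar mass of H2

lhv_mj_per_kg = LHV_H2_KJ_PER_MOL / M_H2_G_PER_MOL   # kJ/g equals MJ/kg
print(f"H2 lower heating value: {lhv_mj_per_kg:.0f} MJ/kg")
# ~120 MJ/kg, roughly 2.4x methane (~50 MJ/kg), with water the only product
```

Per unit mass, hydrogen carries more chemical energy than any other common fuel, which is why both microbes and engineers find it attractive.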

Mar 19, 2024

Omnidirectional tripedal robot scoots, shuffles and climbs

Posted by in categories: biological, robotics/AI

A small research group from the University of Michigan has developed a three-legged skating/shuffling robot called SKOOTR that rolls as it walks, can move in any direction, and can even rise up to overcome obstacles.

The idea for the SKOOTR – or SKating, Omni-Oriented, Tripedal Robot – project came from assistant professor Talia Y. Moore at the University of Michigan’s Evolution and Motion of Biology and Robotics (EMBiR) Lab.

Continue reading “Omnidirectional tripedal robot scoots, shuffles and climbs” »

Mar 19, 2024

Natural language instructions induce compositional generalization in networks of neurons

Posted by in categories: biological, robotics/AI

In this study, we use the latest advances in natural language processing to build tractable models of the ability to interpret instructions to guide actions in novel settings and the ability to produce a description of a task once it has been learned. We show that recurrent neural networks (RNNs) can learn to perform a set of psychophysical tasks simultaneously, using a pretrained language transformer to embed a natural language instruction for the current task. Our best-performing models can leverage these embeddings to perform a brand-new task with an average performance of 83% correct. Instructed models that generalize do so by leveraging the shared compositional structure of instruction embeddings and task representations, such that an inference about the relations between practiced and novel instructions leads to a good inference about which sensorimotor transformation the unseen task requires. Finally, we show that a network can invert this information and provide a linguistic description of a task based only on the sensorimotor contingency it observes.
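The architecture described above can be caricatured in a few lines of Python. The embedding function below is a deterministic hash-based placeholder standing in for the pretrained language transformer, and the weights are fixed toy values; nothing here is trained:

```python
# Schematic of the setup: a (stand-in) language embedding conditions an RNN
# that maps sensory input to a hidden state driving motor output. The
# embedding is a hash-based placeholder, NOT a pretrained transformer.

import hashlib, math

EMB_DIM, HID_DIM = 8, 4

def embed_instruction(text):
    """Deterministic stand-in for a pretrained language-model embedding."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 - 0.5 for b in digest[:EMB_DIM]]

def rnn_step(hidden, sensory, instruction_emb, W_h, W_x, W_c):
    """One recurrent step; the instruction embedding enters as an extra
    conditioning input, so one set of weights can express many tasks."""
    pre = [sum(W_h[i][j] * hidden[j] for j in range(HID_DIM))
           + sum(W_x[i][j] * sensory[j] for j in range(len(sensory)))
           + sum(W_c[i][j] * instruction_emb[j] for j in range(EMB_DIM))
           for i in range(HID_DIM)]
    return [math.tanh(p) for p in pre]

# Fixed toy weights (training is omitted entirely)
W_h = [[0.1 * ((i + j) % 3 - 1) for j in range(HID_DIM)] for i in range(HID_DIM)]
W_x = [[0.2 * ((i - j) % 3 - 1) for j in range(2)] for i in range(HID_DIM)]
W_c = [[0.1 * ((i * j) % 3 - 1) for j in range(EMB_DIM)] for i in range(HID_DIM)]

h = [0.0] * HID_DIM
emb = embed_instruction("respond in the direction opposite the stimulus")
for sensory in ([1.0, 0.0], [0.0, 1.0]):
    h = rnn_step(h, sensory, emb, W_h, W_x, W_c)
print([round(v, 3) for v in h])
```

The design point is that a novel instruction changes the network's computation without any weight change: the embedding alone reconfigures which sensorimotor transformation the recurrent dynamics implement.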

Our models make several predictions about the neural representations to expect in brain areas that integrate linguistic information in order to exert control over sensorimotor areas. First, the CCGP analysis of our model hierarchy suggests that when humans must generalize across (or switch between) a set of related tasks based on instructions, the neural geometry observed among sensorimotor mappings should also be present in semantic representations of instructions. This prediction is well grounded in the existing experimental literature, where multiple studies have observed that the type of abstract structure we find in our sensorimotor-RNNs also exists in sensorimotor areas of biological brains3,36,37. Our models theorize that the emergence of an equivalent task-related structure in language areas is essential to instructed action in humans. One intriguing candidate for an area that may support such representations is the language-selective subregion of the left inferior frontal gyrus. This area is sensitive to both lexico-semantic and syntactic aspects of sentence comprehension, is implicated in tasks that require semantic control, and lies anatomically adjacent to another functional subregion of the left inferior frontal gyrus implicated in flexible cognition38,39,40,41. We also predict that individual units involved in implementing sensorimotor mappings should modulate their tuning properties on a trial-by-trial basis according to the semantics of the input instructions, and that failure to modulate tuning in the expected way should lead to poor generalization. This prediction may be especially useful for interpreting multiunit recordings in humans. Finally, given that grounding linguistic knowledge in the sensorimotor demands of the task set improved performance across models (Fig. 2e), we predict that during learning the highest level of the language processing hierarchy should likewise be shaped by the embodied processes that accompany linguistic inputs, for example, motor planning or affordance evaluation42.
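A CCGP-style analysis (cross-condition generalization performance) can be sketched on synthetic data: fit a linear decoder for a task variable in one condition, then measure its accuracy on a held-out condition. This is an illustrative reconstruction of the general method, not the paper's analysis pipeline:

```python
# CCGP sketch: if a task variable occupies a shared, abstract axis of the
# representation, a linear decoder fit in one context transfers to another.
# Synthetic data only; not the study's code or data.

import random
random.seed(1)

def make_trials(task_val, context, n=40):
    # First coordinate codes the task variable identically in every context
    # (the "abstract" axis); second coordinate carries a context offset.
    return [(task_val + random.gauss(0, 0.2),
             2.0 * context + random.gauss(0, 0.2)) for _ in range(n)]

train_pos = make_trials(+1, context=0)
train_neg = make_trials(-1, context=0)
test = [(p, +1) for p in make_trials(+1, context=1)] + \
       [(p, -1) for p in make_trials(-1, context=1)]

# Nearest-class-mean decoder (a linear classifier), fit only on context 0
def mean(points):
    return tuple(sum(coord) / len(points) for coord in zip(*points))

m_pos, m_neg = mean(train_pos), mean(train_neg)

def decode(p):
    d_pos = sum((a - b) ** 2 for a, b in zip(p, m_pos))
    d_neg = sum((a - b) ** 2 for a, b in zip(p, m_neg))
    return +1 if d_pos < d_neg else -1

ccgp = sum(decode(p) == y for p, y in test) / len(test)
print(f"CCGP (accuracy on held-out context): {ccgp:.2f}")
```

If the task variable were coded in context-specific directions instead, the decoder fit on context 0 would fall to chance on context 1, which is exactly the contrast CCGP is designed to expose.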

One notable negative result of our study is the relatively poor generalization performance of GPTNET (XL), which used at least an order of magnitude more parameters than other models. This is particularly striking given that activity in these models is predictive of many behavioral and neural signatures of human language processing10,11. Given this, future imaging studies may be guided by the representations in both autoregressive models and our best-performing models to delineate a full gradient of brain areas involved in each stage of instruction following, from low-level next-word prediction to higher-level structured-sentence representations to the sensorimotor control that language informs.
