
UMass Engineers Create First Artificial Neurons That Could Directly Communicate With Living Cells

A team of engineers at the University of Massachusetts Amherst has announced the creation of an artificial neuron whose electrical behavior closely mirrors that of biological ones. Building on their previous groundbreaking work with protein nanowires synthesized from electricity-generating bacteria, the team’s discovery means we could see immensely efficient, biologically inspired computers that interface directly with living cells.

“Our brain processes an enormous amount of data,” says Shuai Fu, a graduate student in electrical and computer engineering at UMass Amherst and lead author of the study published in Nature Communications. “But its power usage is very, very low, especially compared to the amount of electricity it takes to run a Large Language Model, like ChatGPT.”

The human body is over 100 times more electrically efficient than a computer’s circuitry. The human brain is composed of billions of neurons, specialized cells that send and receive electrical impulses all over the body. While it takes only about 20 watts for your brain to, say, write a story, an LLM might consume well over a megawatt of electricity to do the same task.
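The power figures quoted above can be put side by side with a back-of-the-envelope calculation. Note that the 20-watt and one-megawatt numbers are the article's rough estimates, not measurements:

```python
# Back-of-the-envelope comparison of the power figures quoted above.
brain_watts = 20            # approximate power draw of a human brain
llm_watts = 1_000_000       # "well over a megawatt" for a large LLM deployment

ratio = llm_watts / brain_watts
print(f"The LLM draws roughly {ratio:,.0f}x more power than the brain.")
# → The LLM draws roughly 50,000x more power than the brain.
```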

Lab-Grown Brains Power the World’s First Bio-Computer 🧠

Billed as the world’s first computer powered by human brain cells, FinalSpark’s Neuroplatform merges biology with technology, presenting biocomputing as a path toward far greater energy efficiency in artificial intelligence and computing.


Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play, by Qinsi Wang and 8 other authors

Although reinforcement learning (RL) can effectively enhance the reasoning capabilities of vision-language models (VLMs), current methods remain heavily dependent on labor-intensive datasets that require extensive manual construction and verification, leading to extremely high training costs and consequently constraining the practical deployment of VLMs. To address this challenge, we propose Vision-Zero, a domain-agnostic framework enabling VLM self-improvement through competitive visual games generated from arbitrary image pairs. Specifically, Vision-Zero encompasses three main attributes:

1. Strategic Self-Play Framework: Vision-Zero trains VLMs in “Who Is the Spy”-style games, where the models engage in strategic reasoning and actions across multiple roles. Through interactive gameplay, models autonomously generate their training data without human annotation.

2. Gameplay from Arbitrary Images: Unlike existing gamified frameworks, Vision-Zero can generate games from arbitrary images, thereby enhancing the model’s reasoning ability across diverse domains and showing strong generalization to different tasks. We demonstrate this versatility using three distinct types of image datasets: CLEVR-based synthetic scenes, charts, and real-world images.

3. Sustainable Performance Gain: We introduce Iterative Self-Play Policy Optimization (Iterative-SPO), a novel training algorithm that alternates between Self-Play and reinforcement learning with verifiable rewards (RLVR), mitigating the performance plateau often seen in self-play-only training and achieving sustained long-term improvements.

Despite using label-free data, Vision-Zero achieves state-of-the-art performance on reasoning, chart question answering, and vision-centric understanding tasks, surpassing other annotation-based methods. Models and code have been released at https://github.com/wangqinsi1/Vision-Zero.
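As a rough illustration of the alternating structure the abstract describes for Iterative-SPO, here is a toy, self-contained sketch. The “model” (a single bias parameter), the game, and the update rule are all illustrative stand-ins, not the authors’ implementation; see the linked repository for the real code:

```python
import random

# Toy sketch of the Iterative-SPO idea: alternate self-play data
# generation with updates driven by verifiable rewards. Everything
# here is an illustrative stand-in for the real VLM training loop.

def play_spy_game(bias, rng):
    """Self-play stand-in: the 'model' tries to identify the spy
    among 3 players. Returns the verifiable reward (1.0 on a correct
    identification) -- verifiable because the game engine knows the
    true spy, so no human annotation is needed."""
    spy = rng.randrange(3)
    guess = spy if rng.random() < bias else rng.randrange(3)
    return 1.0 if guess == spy else 0.0

def iterative_spo(bias=0.2, rounds=5, games=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        # Phase 1: self-play -- generate trajectories (here, rewards only).
        rewards = [play_spy_game(bias, rng) for _ in range(games)]
        # Phase 2: RLVR-style update -- nudge the policy using the mean
        # verifiable reward (a crude stand-in for a policy-gradient step).
        bias = min(1.0, bias + lr * (sum(rewards) / games))
    return bias

final = iterative_spo()
print(f"final policy bias: {final:.2f}")  # grows across rounds
```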

AI-generated nanomaterial images fool even experts, study shows

Black-and-white images of pom-pom–like clusters, semi-translucent fields of tiny dark gray stars on a pale background, and countless other abstract patterns are a familiar sight in scientific papers describing the shapes and properties of newly engineered materials.

So, when research images show particles that resemble puffed popcorn or perfectly smooth “Tic Tacs,” they might not trigger our AI suspicion radar, but researchers in a recent study caution that perhaps they should.

Microscopy images are indispensable in nanomaterials science: they reveal the hidden intricacies and fascinating shapes of tiny particles that, to the naked eye, look like nothing more than a pile of dust.

AI techniques excel at solving complex equations in physics, especially inverse problems

Differential equations are fundamental tools in physics: they are used to describe phenomena ranging from fluid dynamics to general relativity. But when these equations become stiff (i.e. they involve very different scales or highly sensitive parameters), they become extremely difficult to solve. This is especially relevant in inverse problems, where scientists try to deduce unknown physical laws from observed data.

To tackle this challenge, the researchers have enhanced the capabilities of Physics-Informed Neural Networks (PINNs), a type of artificial intelligence that incorporates physical laws directly into its training loss.

Their approach, reported in Communications Physics, combines two innovative techniques: Multi-Head (MH) training, which allows the neural network to learn a general space of solutions for a family of equations rather than just one specific case, and Unimodular Regularization (UR), inspired by concepts from differential geometry, which stabilizes the learning process and improves the network’s ability to generalize to new, more difficult problems.
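To make the “physics in the loss” idea behind PINNs concrete, here is a minimal sketch for the simple ODE u'(t) = -k·u(t) with u(0) = 1. A real PINN scores a neural network’s output using automatic differentiation; this toy version instead scores a fixed candidate solution with finite differences, and it does not attempt the paper’s Multi-Head training or Unimodular Regularization:

```python
import numpy as np

# Sketch of a physics-informed loss for the ODE u'(t) = -k*u(t), u(0) = 1.
# A candidate solution is penalized both for violating the differential
# equation (physics residual) and for missing the initial condition.

def pinn_loss(u, t, k=1.0):
    """Physics residual + initial-condition penalty for u' = -k*u."""
    du_dt = np.gradient(u, t)              # finite-difference derivative
    residual = du_dt + k * u               # ~0 wherever the ODE holds
    physics_loss = np.mean(residual ** 2)  # enforce the equation everywhere
    ic_loss = (u[0] - 1.0) ** 2            # enforce u(0) = 1
    return physics_loss + ic_loss

t = np.linspace(0.0, 2.0, 200)
exact = np.exp(-t)   # true solution of u' = -u with u(0) = 1
wrong = np.cos(t)    # satisfies the initial condition but not the ODE

print(pinn_loss(exact, t) < pinn_loss(wrong, t))  # → True
```

Because the loss is built from the equation itself rather than labeled data, the exact solution scores near zero while the wrong candidate is penalized by its nonzero residual.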
