
Michael Levin: The New Era of Cognitive Biorobotics | Robinson’s Podcast #187

Patreon: https://bit.ly/3v8OhY7

Michael Levin is a Distinguished Professor in the Biology Department at Tufts University, where he holds the Vannevar Bush endowed Chair, and he is also associate faculty at the Wyss Institute at Harvard University. Michael and the Levin Lab work at the intersection of biology, artificial life, bioengineering, synthetic morphology, and cognitive science. Michael also appeared on the show in episode #151, which was all about synthetic life and collective intelligence. In this episode, Michael and Robinson discuss the nature of cognition, working with Daniel Dennett, how cognition can be realized by different structures and materials, how to define robots, a new class of robot called the Anthrobot, and whether or not we have moral obligations to biological robots.

The Levin Lab: https://drmichaellevin.org/

OUTLINE
00:00 Introduction.
02:14 What is Cognition?
08:01 On Working with Daniel Dennett.
13:17 Gatekeeping in Cognitive Science.
25:15 The Multi-Realizability of Cognition.
31:30 What are Anthrobots?
39:33 What Are Robots, Really?
59:53 Do We Have Moral Obligations to Biological Robots?

Robinson’s Website: ⁠http://robinsonerhardt.com

Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, weightlifters, artists, and everyone in-between.

Quantum-Neural Hybrid Solves Impossible Math

The worlds of quantum mechanics and neural networks have collided in a new system that’s setting benchmarks for solving previously intractable optimization problems. A multi-university team led by Shantanu Chakrabartty at Washington University in St. Louis has introduced NeuroSA, a neuromorphic architecture that leverages quantum tunneling mechanisms to reliably discover optimal solutions to complex mathematical puzzles.

Published March 31 in Nature Communications, NeuroSA represents a significant leap forward in optimization technology with immediate applications ranging from logistics to drug development. While typical neural systems often get trapped in suboptimal solutions, NeuroSA offers something remarkable: a mathematical guarantee of finding the absolute best answer if given sufficient time.

“We’re looking for ways to solve problems better than computers modeled on human learning have done before,” said Chakrabartty, the Clifford W. Murphy Professor and vice dean for research at WashU. “NeuroSA is designed to solve the ‘discovery’ problem, the hardest problem in machine learning, where the goal is to discover new and unknown solutions.”
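NeuroSA's hardware cannot be reproduced in a few lines, but the family of algorithms it accelerates can be sketched. The snippet below is a plain classical simulated-annealing loop on a small, made-up quadratic binary optimization problem: the coupling matrix, cooling schedule, and problem size are illustrative assumptions, and the noisy acceptance step only loosely stands in for the quantum-tunneling noise source described in the paper.

```python
import math
import random

# Hypothetical toy problem: minimize E(x) = x^T Q x over binary vectors x.
# Q is a small random coupling matrix chosen purely for illustration.
random.seed(0)
N = 12
Q = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def energy(x):
    """Quadratic energy of a binary configuration x."""
    return sum(Q[i][j] * x[i] * x[j] for i in range(N) for j in range(N))

def simulated_annealing(steps=5000, t_start=2.0, t_end=0.01):
    x = [random.randint(0, 1) for _ in range(N)]
    best, best_e = x[:], energy(x)
    e = best_e
    for step in range(steps):
        # Geometric cooling schedule (an assumption; NeuroSA's schedule differs).
        t = t_start * (t_end / t_start) ** (step / steps)
        i = random.randrange(N)
        x[i] ^= 1                      # propose flipping one bit
        e_new = energy(x)
        # Accept improvements always; accept worse moves with Boltzmann probability,
        # which is what lets the search escape local minima.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                  # revert the flip
    return best, best_e

solution, value = simulated_annealing()
print("best energy found:", round(value, 3))
```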

The Future of Artificial Intelligence in Sports

If you’re wondering how artificial intelligence may begin to interact with our world on a more personal level, look no further than the landscape of sports. As machine learning matures and the need for human officiating shrinks, sports leagues have found creative ways to integrate “computer referees” in ways we may not have initially expected.

Tennis, for example, has been at the forefront of adopting AI officiating. The Hawk-Eye System, introduced in the early 2000s, first changed tennis officiating by allowing players to challenge calls made by line judges. Hawk-Eye, which used multiple cameras and real-time 3D analysis to determine whether a ball was in or out, has since evolved into a system called Electronic Line Calling Live (ELC). The new technology has become so reliable that the ATP plans to phase out line judges at professional tournaments by the summer of this year.
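For readers curious what the final “in or out” decision reduces to once the cameras have done their work, here is a deliberately simplified sketch. Hawk-Eye's real pipeline reconstructs a full 3D trajectory from many calibrated views; the toy check below assumes a bounce point has already been estimated and only tests it against standard singles-court dimensions, with made-up coordinates and a crude ball-radius tolerance.

```python
# Toy "in or out" call on an estimated bounce point (x, y) in metres,
# with the court modelled as a singles rectangle centred on the origin.
SINGLES_WIDTH = 8.23    # metres
COURT_LENGTH = 23.77    # metres
BALL_RADIUS = 0.033     # a ball clips the line if any part touches it

def line_call(x, y):
    half_w = SINGLES_WIDTH / 2 + BALL_RADIUS
    half_l = COURT_LENGTH / 2 + BALL_RADIUS
    return "IN" if abs(x) <= half_w and abs(y) <= half_l else "OUT"

# Hypothetical bounce estimates from the trajectory model.
print(line_call(4.10, 11.00))   # just inside the sideline -> IN
print(line_call(4.20, 11.00))   # just wide -> OUT
```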

The Australian Open has taken this system a step further by testing AI to detect foot-faults. Utilizing skeletal tracking technology, the system monitors player movements to identify infractions, improving match accuracy and reducing human error. However, a glitch in the technology did make for a funny moment during this past year’s Australian Open when the computer speaker repeated “foot-fault” before German player Dominik Koepfer could even begin his serve.
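The foot-fault system is described only at a high level, but a skeletal-tracking check of this kind ultimately compares a tracked foot keypoint against the baseline before ball contact. The keypoint format, coordinate convention, and tolerance below are assumptions made purely for illustration.

```python
from typing import Dict, List

# Hypothetical per-frame keypoints: foot positions in court coordinates (metres),
# where y = 0.0 is the baseline and positive y is inside the court.
Frame = Dict[str, float]

def detect_foot_fault(frames: List[Frame], contact_frame: int, tolerance: float = 0.01) -> bool:
    """Return True if either foot crosses the baseline before ball contact."""
    for frame in frames[: contact_frame + 1]:
        if frame["left_foot_y"] > tolerance or frame["right_foot_y"] > tolerance:
            return True
    return False

# Tiny made-up serve: the right foot drifts over the line on the third frame.
serve = [
    {"left_foot_y": -0.20, "right_foot_y": -0.05},
    {"left_foot_y": -0.18, "right_foot_y": -0.01},
    {"left_foot_y": -0.15, "right_foot_y": 0.04},
]
print(detect_foot_fault(serve, contact_frame=2))  # True
```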

Shared brain mechanisms may underlie insomnia, depression, and anxiety

Insomnia, depression, and anxiety are among the most common mental disorders. Treatments are often only moderately effective, and many people experience returning symptoms, which is why finding new leads for treatment is crucial. Notably, these disorders overlap substantially and often occur together. Could there be a shared brain mechanism behind this phenomenon?

Siemon de Lange, Elleke Tissink, and Eus van Someren, together with their colleagues from the Vrije Universiteit Amsterdam, investigated brain scans of more than 40,000 participants from the UK Biobank. The research is published in the journal Nature Mental Health.

Tissink says, “In our lab, we explore the similarities and differences between insomnia, anxiety, and depression. Everyone looks at this from a different perspective: some mainly look at genetics; in this study, we look at brain scans. What aspects are shared between the disorders, and what is unique to each one?”

The barrier of meaning

In the post on the Chinese room, while concluding that Searle’s overall thesis isn’t demonstrated, I noted that if he had restricted himself to a more limited assertion, he might have had a point: the Turing test doesn’t guarantee that a system actually understands its subject matter. Although the probability of humans being fooled plummets as the test goes on, it never completely reaches zero. The test depends on human minds to assess whether there is more there than a thin facade. But what exactly is being assessed?

I just finished reading Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. Mitchell recounts how, in recent years, deep learning networks have broken a lot of new ground. Such networks have demonstrated an uncanny ability to recognize items in photographs, including faces, have learned to play old Atari games at superhuman levels, and have even made progress in driving cars, among many other things.

But do these systems have any understanding of the actual subject matter they’re dealing with? Or do they have what Daniel Dennett calls “competence without comprehension”?

AI-powered electronic nose detects diverse scents for health care and environmental applications

A research team has developed a “next-generation AI electronic nose” capable of distinguishing scents like the human olfactory system does and analyzing them using artificial intelligence. This technology converts scent molecules into electrical signals and trains AI models on their unique patterns. It holds great promise for applications in personalized health care, the cosmetics industry, and environmental monitoring.

The study is published in the journal ACS Nano. The team was led by Professor Hyuk-jun Kwon of the Department of Electrical Engineering and Computer Science at DGIST, with integrated master’s and Ph.D. student Hyungtae Lim as first author.

While conventional electronic noses (e-noses) have already been deployed in areas such as gas detection in industrial settings, they struggle to distinguish subtle differences between similar smells or to analyze complex scent compositions. For instance, distinguishing among floral perfumes with similar notes or detecting the faint odor of fruit approaching spoilage remains challenging for current systems. This gap has driven demand for next-generation e-nose technologies with greater precision, sensitivity, and adaptability.
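The summary does not include the device or model details, but the step of training AI models on each scent's unique pattern amounts to classifying multichannel sensor signatures. The sketch below generates synthetic readings for a few hypothetical scent classes and fits a nearest-centroid classifier in plain Python; the channel count, scent labels, and data are invented for illustration and do not reflect the authors' model.

```python
import random

random.seed(1)
CHANNELS = 8          # hypothetical number of sensor channels in the e-nose array
SCENTS = ["rose", "jasmine", "spoiling_fruit"]

def synthetic_reading(scent):
    """Fake multichannel response: each scent has its own base pattern plus noise."""
    base = {"rose": 0.2, "jasmine": 0.5, "spoiling_fruit": 0.8}[scent]
    return [base * (i + 1) / CHANNELS + random.gauss(0, 0.02) for i in range(CHANNELS)]

# Build a tiny training set and a nearest-centroid "model".
train = [(synthetic_reading(s), s) for s in SCENTS for _ in range(20)]
centroids = {}
for scent in SCENTS:
    rows = [x for x, label in train if label == scent]
    centroids[scent] = [sum(col) / len(rows) for col in zip(*rows)]

def classify(reading):
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda s: dist(reading, centroids[s]))

print(classify(synthetic_reading("jasmine")))   # expected: jasmine
```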

How researchers discovered specific brain cells that enable intelligent behavior

For decades, neuroscientists have developed mathematical frameworks to explain how brain activity drives behavior in predictable, repetitive scenarios, such as playing a game. These algorithms have not only described brain cell activity with remarkable precision but have also helped develop artificial intelligence that achieves superhuman performance in specific tasks, such as playing Atari or Go.
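The mathematical frameworks referred to here are typically reinforcement-learning algorithms such as temporal-difference learning, the same family that underpins the Atari- and Go-playing systems mentioned above. As a rough illustration, the sketch below runs tabular Q-learning on a tiny made-up chain task; the environment and parameters are assumptions for demonstration, not anything taken from the study.

```python
import random

random.seed(0)

# Made-up chain environment: 5 states in a row, actions move left (0) or right (1),
# and only reaching the rightmost state pays a reward of 1.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(qs):
    best = max(qs)
    return random.choice([a for a, v in enumerate(qs) if v == best])  # break ties randomly

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < epsilon else greedy(Q[s])
        s2, r, done = step(s, a)
        # Temporal-difference update: nudge Q(s, a) toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("preferred action in the start state:",
      "right" if Q[0][1] > Q[0][0] else "left")   # should report "right" after training
```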

Yet these frameworks fall short of capturing the essence of human and animal behavior: our extraordinary ability to generalize, infer and adapt. Our study, published in Nature late last year, provides insights into how specific brain cells in mice enable this more complex, intelligent behavior.

Unlike machines, humans and animals can flexibly navigate new challenges. Every day, we solve new problems by generalizing from our knowledge or drawing from our experiences. We cook new recipes, meet new people, take a new path—and we can imagine the aftermath of entirely novel choices.

World First: Engineers Train AI at Lightspeed

Breakthrough light-powered chip speeds up AI training and reduces energy consumption.

Engineers at Penn have developed the first programmable chip capable of training nonlinear neural networks using light—a major breakthrough that could significantly accelerate AI training, lower energy consumption, and potentially lead to fully light-powered computing systems.

Unlike conventional AI chips that rely on electricity, this new chip is photonic, meaning it performs calculations using beams of light. The work was published in Nature Photonics.
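To make concrete what training a nonlinear neural network involves, here is a minimal software version of that workload: a one-hidden-layer network with a nonlinearity, trained on XOR by gradient descent. The photonic chip carries out the same multiply-accumulate and nonlinear steps with light rather than electricity; the network size, task, and learning rate below are arbitrary illustrative choices, not details from the paper.

```python
import math
import random

random.seed(0)

# A tiny nonlinear network (one tanh hidden layer) trained on XOR with plain
# gradient descent -- a software stand-in for the workload the chip handles optically.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
H = 4
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [math.tanh(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    o = 1 / (1 + math.exp(-(sum(w2[j] * h[j] for j in range(H)) + b2)))
    return h, o

for epoch in range(5000):
    for x, y in data:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)               # output-layer error signal
        for j in range(H):
            d_h = d_o * w2[j] * (1 - h[j] ** 2)   # backpropagate through tanh
            w2[j] -= lr * d_o * h[j]
            b1[j] -= lr * d_h
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
        b2 -= lr * d_o

print([round(forward(x)[1], 2) for x, _ in data])  # outputs should approach 0, 1, 1, 0
```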

Antimicrobial use and resistance in companion animals: a One Health perspective

Antimicrobial resistance (AMR) presents a serious challenge in today’s world, and antimicrobial use (AMU) significantly contributes to the emergence and spread of resistant bacteria. Companion animals are gaining recognition as potential reservoirs and vectors for transmitting resistant microorganisms to both humans and other animals. The full extent of this transmission remains unclear, which is particularly concerning given the substantial and growing number of households with companion animals.

This situation highlights critical knowledge gaps in our understanding of the risk factors and transmission pathways for AMR transfer between companion animals and humans. There is also a significant lack of information on AMU in everyday veterinary practice for companion animals, and the exploration and development of alternative therapeutic approaches to antimicrobial treatment of companion animals remains a research priority.

To address these pressing issues, this Reprint aims to compile and disseminate crucial additional knowledge. It serves as a platform for relevant research studies and reviews, shedding light on the complex interplay between AMU, AMR, and the role of companion animals in this global health challenge. It is addressed especially to companion animal veterinary practitioners, as well as to all researchers working in the field of AMR in both animals and humans, from a One Health perspective.