Shortly after Max Planck shook the scientific world with the idea that energy is fundamentally quantized, researchers built on the emerging theory of quantum mechanics to explain physical phenomena that had previously defied explanation, including the behavior of heat in solids and the absorption of light at the atomic level. In the 120-plus years since, researchers have looked beyond physics and used quantum theory’s same perplexing — even “spooky,” according to Einstein — laws to illuminate puzzling phenomena in a variety of other disciplines.

Today, researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, are applying quantum mechanics to biology to better understand one of nature’s biggest mysteries — magnetosensitivity, an organism’s ability to sense Earth’s magnetic field and use it to regulate certain biological processes. And they’ve found some surprising results.

In a recent study, APL research engineer and scientist Carlos Martino and his APL colleagues Nam Le, Michael Salerno, Janna Domenico, Christopher Stiles, Megan Hannegan, and Ryan McQuillen, along with Ilia Solov’yov from the Carl von Ossietzky University of Oldenburg in Germany, found that an enzyme that plays a central role in human metabolism has some of the same key features as a magnetically sensitive protein found in birds.

A group of molecular and chemical biologists at the University of California, San Diego, has found possible evidence of interdomain horizontal gene transfer leading to the development of the eye in vertebrates. In their study, reported in Proceedings of the National Academy of Sciences, Chinmay Kalluraya, Alexander Weitzel, Brian Tsu and Matthew Daugherty used the IQ-TREE software program to trace the evolutionary history of genes associated with vision.

Ever since scientists established that humans, along with other animals, developed through evolution, one problem has stood out — how could evolution possibly account for the development of something as complicated as the eyeball? Even Charles Darwin was said to be stumped by the question. In recent times, this seeming conundrum has been used by some groups as a means to discredit evolution altogether. In this new effort, the team in California sought to answer the question once and for all.

Their work began with the idea that vision in vertebrates may have gotten its start by using light-sensitive genes transferred from microbes. To find out if that might be the case, the team submitted likely human gene candidates to the IQ-TREE program to look for similar genetic sequences in other creatures, specifically microbes.
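The search the team ran is far more sophisticated than this, but the underlying notion can be illustrated with a toy sketch. IQ-TREE infers maximum-likelihood phylogenies from multiple sequence alignments; the snippet below shows only the simpler idea of ranking candidates by percent identity to a query. The sequence fragments and names are invented for illustration, and this is not the team's actual pipeline:

```python
# Toy illustration only: rank candidate sequences by percent identity to a
# query. Real analyses (e.g., with IQ-TREE) build maximum-likelihood trees
# from multiple sequence alignments; the sequences below are invented.

def percent_identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

query = "MKTAYIAKQR"          # hypothetical vertebrate gene fragment
candidates = {
    "microbial_candidate": "MKTAYIVKQR",   # hypothetical microbial homolog
    "unrelated_protein":   "GGSLSPALQW",
}
best = max(candidates, key=lambda name: percent_identity(query, candidates[name]))
print(best, percent_identity(query, candidates[best]))
# prints: microbial_candidate 0.9
```

A high identity score would flag a candidate for the real phylogenetic analysis, which asks the stronger question of where the gene sits in an evolutionary tree rather than merely how similar it looks.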

The Oxford Martin Programme on Bio-Inspired Technologies is investigating the possibility of making quantum computers real.

We aim to develop a completely new methodology for overcoming the extreme fragility of quantum memory. By learning how biological molecules shield fragile quantum states from the environment, we hope to create the building blocks of future quantum computers.

The unique power of quantum computers comes from their ability to carry out all possible calculations in parallel.
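That claim can be made precise with textbook quantum mechanics (nothing here is specific to the Oxford programme):

```latex
% An n-qubit register holds a superposition over all 2^n basis states:
\[
  \lvert \psi \rangle = \sum_{x \in \{0,1\}^n} c_x \,\lvert x \rangle ,
  \qquad \sum_{x} \lvert c_x \rvert^{2} = 1 .
\]
% A single application of a unitary operation updates every amplitude c_x
% at once; this is the precise sense of "all calculations in parallel".
% Measurement, however, returns only one outcome x, with probability
% |c_x|^2, so the state must be protected until the computation is done.
```

It is exactly these superposed amplitudes that are so fragile: any stray interaction with the environment collapses or scrambles them, which is why shielding quantum states is the central problem the programme describes.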

Enzyme-catalyzed replication of nucleic acid sequences is a prerequisite for the survival and evolution of biological entities. Before the advent of protein synthesis, genetic information was most likely stored in and replicated by RNA. However, experimental systems for sustained RNA-dependent RNA replication are difficult to realise, in part due to the high thermodynamic stability of duplex products and the low chemical stability of catalytic RNAs. Using a derivative of a group I intron as a model for an RNA replicase, we show that heated air-water interfaces exposed to a plausible CO2-rich atmosphere enable sense and antisense RNA replication as well as template-dependent synthesis and catalysis of a functional ribozyme in a one-pot reaction. Both reactions are driven by autonomous oscillations in salt concentrations and pH, resulting from precipitation of acidified dew droplets, which transiently destabilise RNA duplexes. Our results suggest that an abundant Hadean microenvironment may have promoted both replication and synthesis of functional RNAs.
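The oscillation-driven mechanism can be caricatured numerically. The sketch below is a toy model with invented parameters (melting fractions, copy yield), not the paper's measured kinetics: in each cycle, a low-salt dew phase melts a fraction of duplexes, the freed single strands serve as templates for new copies, and re-annealing during the high-salt phase locks in the products.

```python
# Toy model (not from the paper): periodic low-salt "dew" phases transiently
# melt RNA duplexes, freeing single strands that can template replication.
# All parameters below are illustrative assumptions, not measured values.

def replication_cycles(n_cycles, melt_fraction_low_salt=0.6,
                       melt_fraction_high_salt=0.05, copy_yield=0.8):
    """Return relative duplex count after repeated salt/pH oscillations."""
    duplexes = 1.0
    for _ in range(n_cycles):
        # Low-salt (acidified dew) phase: a fraction of duplexes melts.
        melted = duplexes * melt_fraction_low_salt
        # Each freed strand templates a copy; every successful copy
        # re-forms a duplex, so two strands can yield two duplexes.
        new_duplexes = 2 * melted * copy_yield
        # High-salt phase: unmelted duplexes persist; products anneal.
        duplexes = (duplexes - melted) + new_duplexes
    return duplexes

print(f"duplexes after 10 cycles: {replication_cycles(10):.2f}")
```

The point of the toy is only that a per-cycle gain factor above 1 (here 0.4 + 0.96 = 1.36) compounds into exponential amplification, which is what sustained replication requires; without the oscillations, the stable duplexes would simply accumulate and the cycle would stall.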


The recent success of machine learning (ML) methods in answering similar questions about human languages (natural language processing, or NLP) is tied to the availability of large-scale datasets. Creating a biological dataset with a format, level of detail, scale, and time span amenable to ML-based analysis is capital intensive and requires multidisciplinary expertise: teams must develop, deploy, and maintain specialized hardware to collect acoustic and behavioral signals, build software to process and analyze those signals, develop linguistic models that reveal the structure of animal communication and ground it in behavior, and finally perform playback experiments to attempt bidirectional communication for validation ( Figure 1 ). Yet the deployment of graphics processing units (GPUs) is following a trajectory akin to Moore’s Law ( https://openai.com/blog/ai-and-compute), and the success of such an endeavor could yield cross-applications and advances for the broader communities investigating non-human communication and animal behavior. One of the main drivers of deep learning’s success has been the availability of large labeled and unlabeled datasets, together with architectures capable of taking advantage of such large data. To build a more complete picture and capture the full range of a species’ behavior, it is essential to collect datasets with measurements across a broad set of factors. In turn, infrastructure that supports the collection of broad and sizable datasets would facilitate studies aimed at the autonomous discovery of the meaning-carrying units of communication.
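As a deliberately simplified illustration of what "autonomous discovery of meaning-carrying units" can mean computationally, the sketch below clusters synthetic one-dimensional "inter-click interval" features into a small unit inventory with a minimal k-means. The features, cluster count, and numbers are all invented; real pipelines would operate on learned embeddings of recorded vocalizations.

```python
# Hypothetical sketch: discover a discrete "unit inventory" by clustering
# simple acoustic features. Everything here is synthetic and illustrative.
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Minimal 1-D k-means; returns sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest center.
        buckets = {i: [] for i in range(k)}
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            buckets[nearest].append(v)
        # Move each center to the mean of its assigned values.
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in buckets.items()]
    return sorted(centers)

# Synthetic inter-click intervals (seconds) near three invented rhythm types.
rng = random.Random(1)
intervals = ([rng.gauss(0.1, 0.01) for _ in range(50)] +
             [rng.gauss(0.3, 0.02) for _ in range(50)] +
             [rng.gauss(0.7, 0.03) for _ in range(50)])
units = kmeans_1d(intervals, k=3)
print([round(u, 2) for u in units])
```

Even this toy shows why dataset breadth matters: with too few observations per context, the cluster structure (the candidate "units") cannot be separated from noise, let alone grounded in behavior.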

A dedicated interdisciplinary initiative toward a detailed understanding of animal communication could arguably be built around any of a number of species. Birds, primates, and marine mammals have all given insight into the capacity of animal communication. In some ways, the collective understanding of the capacity for and faculty of communication in non-humans has been built through experimentation and observation across a wide range of taxa ( Fitch, 2005 ; Hauser et al., 2002). Findings on both the underlying neurobiological systems underpinning communicative capacity and on the complexity and diversity of the communication systems themselves often mirror the ease with which researchers can work with a given species, or the existence of prominent long-term field research programs.

Animal communication researchers have conducted extensive studies of various species, including spiders (e.g., Elias et al., 2012 ; Hebets et al., 2013), pollinators (e.g., Kulahci et al., 2008), rodents (e.g., Ackers and Slobodchikoff, 1999 ; Slobodchikoff et al., 2009), birds (e.g., Baker, 2001 ; Griesser et al., 2018), primates (e.g., Clarke et al., 2006 ; Jones and Van Cantfort, 2007 ; Leavens, 2007 ; Ouattara et al., 2009 ; Schlenker et al., 2016 ; Seyfarth et al., 1980), and cetaceans (e.g., Janik, 2014 ; Janik and Sayigh, 2013), showing that animal communication involves diverse strategies, functions, and hierarchical components, and encompasses multiple modalities. Previous research efforts have often focused on the mechanistic, computational, and structural aspects of animal communication systems. In human care, there have been several successful attempts at establishing a dialogue with birds.

Why do AI ethics conferences fail? They fail because they lack a metatheory to explain how ethical disagreements can emerge from phenomenologically different worlds, how those worlds are revealed to us, and how shifts between them have shaped the development of Western civilization over the last several thousand years, from the Greeks and Romans through the Renaissance and Enlightenment.

So perhaps we’ve given up on the ethics hand-wringing a bit too early. Or, more precisely, a third, nonzero-sum approach is available, one that combines ethics with reciprocal accountability and actually does explain this. But first, let’s consider the flaw in simple reciprocal accountability. Yes, right now we can use ChatGPT to catch ChatGPT cheats, and provide many other balancing feedbacks. But as noted above with reference to the colonization of Indigenous nations, once the technological/developmental gap is sufficiently large, the dynamics that operate largely under our control and in our favor can quickly change, and the former allies become the new masters.

Forrest Landry capably identified that problem during a recent conversation with Jim Rutt. The implication one might draw is that, though we may not like it, there is in fact a role for axiology to play (or, more precisely, for a phenomenologically informed understanding of axiology). Zak Stein identifies some of this in his article “Technology is Not Values Neutral”. Lastly, Iain McGilchrist brings both of these topics, power and value, together in his metatheory of attention, which uses that same notion of reciprocal accountability (there called opponent processing). And yes, there is historical precedent here too; we can point to biological analogues. This is instantiated in the neurology of the brain, and it goes back at least as far as Nematostella vectensis, a sea anemone whose lineage stretches back some 700 million years. So the opponent processing of two very different ways of attending to the world has worked for a very long time, counterbalancing two very different phenomenological worlds (and their associated ethical frameworks) against each other.