By designing proteins de novo, scientists can more efficiently create novel therapeutics tailored to specific biological functions.

Deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are designed to partly emulate the structure and functioning of biological neural networks. As a result, in addition to tackling various real-world computational problems, they could help neuroscientists and psychologists better understand the underpinnings of specific sensory or cognitive processes.
Researchers at Osnabrück University, Freie Universität Berlin and other institutes recently developed a new class of artificial neural networks (ANNs) that could mimic the human visual system better than CNNs and other existing deep learning algorithms. Their newly proposed, visual system-inspired computational techniques, dubbed all-topographic neural networks (All-TNNs), are introduced in a paper published in Nature Human Behaviour.
“Previously, the most powerful models for understanding how the brain processes visual information were derived off of AI vision models,” Dr. Tim Kietzmann, senior author of the paper, told Tech Xplore.
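The article does not spell out the architecture, but the general idea of a topographic constraint can be sketched in a few lines: give each unit a position on a 2-D sheet and penalise differences between neighbouring units' weights, so that selectivity varies smoothly across the sheet, as it does in cortical maps. The snippet below is a hedged illustration of that idea only, not the authors' actual All-TNN model or loss.

```python
# Illustrative sketch only (an assumption for exposition, not the paper's exact model):
# each unit sits on an H x W sheet and carries its own weight vector; a smoothness
# penalty encourages neighbouring units to learn similar weights, producing
# topographically organised selectivity.
import numpy as np

def smoothness_penalty(weights_grid):
    """weights_grid: array of shape (H, W, D), one D-dim weight vector per unit.
    Returns the mean squared difference between adjacent units on the sheet."""
    dh = weights_grid[1:, :, :] - weights_grid[:-1, :, :]   # vertical neighbours
    dw = weights_grid[:, 1:, :] - weights_grid[:, :-1, :]   # horizontal neighbours
    return (dh ** 2).mean() + (dw ** 2).mean()

# Example: a 16 x 16 sheet of units, each with a 9-dimensional weight vector.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16, 9))
print(smoothness_penalty(w))  # in training this term would be added to the task loss
```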
Does the use of computer models in physics change the way we see the universe? How far-reaching are the implications of computational irreducibility? Are observer limitations key to the way we conceive the laws of physics?
In this episode we get into the difficult yet beautiful topic of trying to model complex systems like nature and the universe computationally, and how, beyond a low level of complexity, all systems seem to become equally unpredictable. We have a whole episode in this series on Complexity Theory in biology and nature, but today we’re going to take a more physics and computational slant.
Another key element of this episode is Observer Theory, because we have to take into account the perceptual limitations of our species’ context and perspective if we want to understand how the laws of physics we’ve worked out from our environment are not, and cannot be, fixed and universal, but will always be perspective-bound, within a multitude of alternative branches of possible reality with alternative possible computational rules. We’ll then connect this multi-computational approach to a reinterpretation of entropy and the second law of thermodynamics.
The fact that my guest has been building on these ideas for over 40 years, creating computer languages and AI solutions to map his deep theories of computational physics, makes him the ideal guest to help us unpack this topic. He is physicist, computer scientist and tech entrepreneur Stephen Wolfram. In 1987 he left academia at Caltech and Princeton behind and devoted himself to his computer science intuitions at his company Wolfram Research. He’s published many blog articles about his ideas and written many influential books, including “A New Kind of Science”, more recently “A Project to Find the Fundamental Theory of Physics” and “Computer Modelling and Simulation of Dynamic Systems”, and, just out in 2023, “The Second Law”, about the mystery of entropy.
One of the most wonderful things about Stephen Wolfram is that, despite his visionary insight into reality, he really loves to be ‘in the moment’ with his thinking, engaging in Socratic dialogue, staying open to perspectives other than his own, and allowing his old ideas to be updated if something comes up that contradicts them. Given how quickly the fields of physics and computer science are evolving, I think his humility and conceptual flexibility give us a fine example of how we should update the way we do science as we go.
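As a small, concrete illustration of the computational irreducibility discussed in the episode (my own sketch, not Wolfram’s code), here is the elementary cellular automaton Rule 30 in Python: the update rule is trivial, yet the only general way to know the pattern after n steps is to actually run all n steps.

```python
# Minimal sketch: Rule 30 elementary cellular automaton.
# Illustrates computational irreducibility: despite the trivially simple update
# rule, the centre column looks random, and there is no known shortcut to the
# state at step n other than computing every intermediate step.

def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells (fixed zero boundaries)."""
    padded = [0] + cells + [0]
    return [
        padded[i - 1] ^ (padded[i] | padded[i + 1])  # Rule 30: left XOR (centre OR right)
        for i in range(1, len(padded) - 1)
    ]

def run(steps=30, width=61):
    row = [0] * width
    row[width // 2] = 1  # start from a single black cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run()
```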
What we discuss:
00:00 Intro.
07:45 The history of scientific models of reality: structural, mathematical and computational.
14:40 Late 2010s: a shift to computational models of systems.
20:20 The Principle of Computational Equivalence (PCE).
24:45 Computational Irreducibility — the process that means you can’t predict the outcome in advance.
27:50 The importance of the passage of time to Consciousness.
28:45 Irreducibility and the limits of science.
33:30 Gödel’s Incompleteness Theorem meets Computational Irreducibility.
42:20 Observer Theory and the Wolfram Physics Project.
45:30 Modelling the relations between discrete units of Space: Hypergraphs.
47:30 The progress of time is the computational process that is updating the network of relations.
50:30 We ’make’ space.
51:30 Branchial Space — different quantum histories of the world, branching and merging.
54:30 We perceive space and matter to be continuous because we’re very big compared to the discrete elements.
56:30 Branchial Space vs. the Many Worlds interpretation.
58:50 Rulial Space: All possible rules of all possible interconnected branches.
01:07:30 The Wolfram Language bridges how humans think about their perspective with what is computationally possible.
01:11:00 Computational intelligence is everywhere in the universe, e.g. the weather.
01:19:30 The Measurement problem of QM meets computational irreducibility and observer theory.
01:20:30 Entanglement explained — common ancestors in branchial space.
01:32:40 Inviting Stephen back for a separate episode on AI safety, safety solutions and applications for science, as we didn’t have time.
01:37:30 At the molecular level the laws of physics are reversible.
01:40:30 What looks random to us in entropy is actually full of data.
01:45:30 Entropy defined in computational terms.
01:50:30 If we ever overcame our finite minds, there would be no coherent concept of existence.
01:51:30 Parallels between modern physics and ancient eastern mysticism and cosmology.
01:55:30 Reductionism in an irreducible world: saying a lot from very little input.
References:
“The Second Law: Resolving the Mystery of the Second Law of Thermodynamics”, Stephen Wolfram.
“A New Kind of Science”, Stephen Wolfram.
Observer Theory article, Stephen Wolfram.
Early brain development is a biological black box. While scientists have devised multiple ways to record electrical signals in adult brains, these techniques don’t work for embryos.
A team at Harvard has now managed to peek into the box—at least when it comes to amphibians and rodents. They developed an electrical array using a flexible, tofu-like material that seamlessly embeds into the early developing brain. As the brain grows, the implant stretches and shifts, continuously recording individual neurons without harming the embryo.
“There is just no ability currently to measure neural activity during early neural development. Our technology will really enable an uncharted area,” said study author Jia Liu in a press release.
A research team led by Professor Huang Xingjiu at the Hefei Institutes of Physical Science of the Chinese Academy of Sciences has developed a highly stable adaptive integrated interface for ion sensing. The study was published as an inside front cover article in Advanced Materials.
All-solid-state ion-selective electrodes serve as a fundamental component in the ion sensing of intelligent biological and chemical sensors. While the researchers had previously developed several transducer materials with a sandwich-type interface to detect common ions, the performance of such sensors was often limited by the interface material and structure.
To overcome these challenges, the team introduced a novel interface using lipophilic molybdenum disulfide (MoS₂) regulated by cetyltrimethylammonium (CTA⁺). This structure enables spatiotemporal adaptive integration—assembling single-piece sensing layers atop efficient transduction layers.
Imagine you’re sitting at a pond, listening to the din of croaking frogs. You want to know how many frogs are in the pond, but you can’t pick out the individual croaks—only the combined sound rising and falling in volume as frogs start and stop communicating.
But what if you were able to examine these volume changes to figure out how many frogs are in the pond?
That’s the idea behind a new method developed by the Funke Lab at Janelia to count the individual molecules contained in a single spot of light detected by a fluorescence microscope—a quantity important for understanding the underlying biology of a living system. The paper is published in the journal Nano Letters.
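As a hypothetical toy version of that idea (not the Funke Lab’s actual algorithm), suppose each “frog” croaks independently with probability p at any given moment: the combined volume then has mean N·p and variance N·p·(1−p), and measuring both lets you solve for N. The sketch below, with made-up numbers, shows the principle.

```python
# Hypothetical toy model (not the paper's method): estimate how many independently
# blinking emitters ("frogs") produce a fluctuating total signal, using only the
# mean and variance of that signal.
import numpy as np

rng = np.random.default_rng(0)

def simulate_total_signal(n_emitters, p_on, n_frames=100_000):
    """Each emitter is 'on' independently with probability p_on per frame;
    the detector only sees the sum over all emitters."""
    on = rng.random((n_frames, n_emitters)) < p_on
    return on.sum(axis=1).astype(float)

def estimate_count(signal):
    """For a Binomial(n, p) total: mean = n*p and var = n*p*(1-p),
    so p = 1 - var/mean and n = mean / p."""
    m, v = signal.mean(), signal.var()
    p_hat = 1.0 - v / m
    return m / p_hat, p_hat

signal = simulate_total_signal(n_emitters=40, p_on=0.3)
n_hat, p_hat = estimate_count(signal)
print(f"estimated emitters ~ {n_hat:.1f}, on-probability ~ {p_hat:.2f}")
```

Real fluorescence measurements also carry photon noise, background and photobleaching, which a toy model like this ignores.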
Please consider joining my Substack at https://rupertsheldrake.substack.com.
Does Nature Obey Laws? | Sheldrake-Vernon Dialogue 95.
The conviction that the natural world is obedient, adhering to laws, is a widespread assumption of modern science. But where did this idea originate and what beliefs does it imply? In this episode of the Sheldrake-Vernon Dialogues, Rupert Sheldrake and Mark Vernon discuss the impact on science of the Elizabethan lawyer, Francis Bacon. His New Instrument of Thought, or Novum Organum, put laws at the centre of science and was intended as an upgrade on assumptions developed by Aristotle. But does the existence of mind-like laws of nature, somehow acting on otherwise mindless matter, even make sense? What difference is made by insights subsequent to Baconian philosophy, such as the discovery of evolution or the sense that the natural world is not machine-like but behaves like an organism? Could the laws of nature be more like habits? And what about the existence of miracles, the purposes of organisms, and the extraordinary fecundity of creativity?
—
Dr Rupert Sheldrake, PhD, is a biologist and author best known for his hypothesis of morphic resonance. At Cambridge University, as a Fellow of Clare College, he was Director of Studies in biochemistry and cell biology. As the Rosenheim Research Fellow of the Royal Society, he carried out research on the development of plants and the ageing of cells, and together with Philip Rubery discovered the mechanism of polar auxin transport. In India, he was Principal Plant Physiologist at the International Crops Research Institute for the Semi-Arid Tropics, where he helped develop new cropping systems now widely used by farmers. He is the author of more than 100 papers in peer-reviewed journals and his research contributions have been widely recognized by the academic community, earning him a notable h-index for numerous citations. On ResearchGate his Research Interest Score puts him among the top 4% of scientists.
https://www.sheldrake.org/about-rupert-sheldrake?svd=95
—