A joint research team between the Center for Quantum Information and Quantum Biology (QIQB) at The University of Osaka and Fixstars Corporation has demonstrated one of the world’s largest classical simulations of iterative quantum phase estimation (IQPE) circuits for quantum chemistry on up to 1,024 GPUs, surpassing the previous 40-qubit limit. The result expands the scale of molecular systems available for the development and validation of quantum algorithms for future fault-tolerant quantum computers, supporting progress toward industrial applications in drug discovery and materials development.
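The 40-qubit figure reflects the memory wall of full state-vector simulation: an n-qubit state holds 2^n complex amplitudes, so memory doubles with every added qubit. A back-of-the-envelope sketch (assuming double-precision complex amplitudes at 16 bytes each; the team's actual simulation method and precision are not detailed here):

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to hold a full n-qubit state vector.

    Assumes one complex128 amplitude (16 bytes) per basis state.
    """
    return (2 ** n_qubits) * bytes_per_amplitude

# 40 qubits: 2^40 amplitudes * 16 B = 16 TiB, already beyond a single node
print(statevector_bytes(40) / 2**40, "TiB")   # 16.0 TiB
# Each additional qubit doubles the footprint, which is why pushing past
# 40 qubits calls for distributing the state across many GPUs
print(statevector_bytes(43) / 2**40, "TiB")   # 128.0 TiB
```

This doubling is why going even a few qubits beyond 40 requires spreading the state vector across on the order of a thousand GPUs.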
The paper was presented at NVIDIA GTC 2026, held in San Jose, California, March 16–19, 2026.
Overcoming unresolved challenges in drug discovery and developing new materials to address climate change will require advanced quantum chemical calculations beyond the reach of current technology. Against this backdrop, fault-tolerant quantum computers (FTQC) are widely anticipated as a key enabling technology, making it increasingly important to develop and validate, ahead of their deployment, the quantum algorithms that will eventually run on such systems.
Scroll through social media long enough and a pattern emerges. Pause on a post questioning climate change or taking a hard line on a political issue, and the platform is quick to respond—serving up more of the same viewpoints, delivered with growing confidence and certainty.
That feedback loop is the architecture of an echo chamber: a space where familiar ideas are amplified, dissenting voices fade, and beliefs can harden rather than evolve.
But new research from the University of Rochester suggests that echo chambers may not be an inevitable fact of online life. Published in IEEE Transactions on Affective Computing, the study argues that they are partly a design choice, one that could be softened with a surprisingly modest change: introducing more randomness into what people see.
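The study's exact mechanism isn't described in this summary, but the idea of softening an echo chamber by adding randomness can be sketched as an epsilon-mixing rule: most of the time show the next engagement-ranked item, occasionally substitute a uniformly random one. Everything here (function names, the epsilon value) is illustrative, not the paper's algorithm:

```python
import random

def mixed_feed(ranked_items, epsilon=0.2, k=10, rng=random):
    """Build a feed of k items: with probability 1 - epsilon take the next
    top-ranked item, with probability epsilon take a uniformly random item
    from the remaining pool. A toy illustration of randomness injection."""
    feed, pool = [], list(ranked_items)
    for _ in range(min(k, len(pool))):
        if rng.random() < epsilon:
            choice = rng.choice(pool)   # serendipitous, unranked pick
        else:
            choice = pool[0]            # engagement-ranked pick
        feed.append(choice)
        pool.remove(choice)
    return feed

print(mixed_feed([f"post{i}" for i in range(100)], epsilon=0.3))
```

With epsilon = 0 this reduces to a pure ranking feed; raising epsilon trades some relevance for exposure to viewpoints the ranker would otherwise filter out.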
Hidden features uncovered in X-ray signals are set to overturn a key scientific theory and fundamentally change how X-rays are interpreted across fields of physics, chemistry, biology and materials science, new research reveals. Researchers say the discovery can help scientists measure X-rays more precisely and reliably, and improve our understanding of common materials, from battery materials to biological proteins.
X-ray science focuses on the unique energy signatures of atoms. These include the specific X-rays emitted when electrons transition into inner shells—the strongest of which are known as K-alpha lines—as well as distinct energy thresholds at which atoms begin to strongly absorb X-rays.
For more than 50 years, the entire field has relied on the assumption that a core parameter in the equation used to model X-ray absorption spectra, known as the standard XAFS equation, is fixed and does not change.
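For context, the standard XAFS (EXAFS) equation models the oscillatory part of the absorption spectrum as a sum over scattering paths. The article does not say which of its parameters the new work shows to vary, so the following is background on the conventional form, not the finding itself:

```latex
\chi(k) \;=\; \sum_{j} \frac{N_j\, S_0^2\, f_j(k)}{k R_j^2}\;
  e^{-2k^2\sigma_j^2}\; e^{-2R_j/\lambda(k)}\;
  \sin\!\bigl(2kR_j + \phi_j(k)\bigr)
```

Here N_j is the coordination number of path j, R_j the path length, f_j(k) and phi_j(k) the scattering amplitude and phase, sigma_j^2 the disorder (Debye-Waller) factor, lambda(k) the photoelectron mean free path, and S_0^2 the amplitude-reduction factor, which is conventionally treated as a fixed constant when fitting spectra.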
Dr. Eyal Aharoni discusses one of the most provocative frontiers in technology: the automation of moral judgement. His talk focuses on the outcomes of a comparative moral Turing test, in which AI outperformed humans across a range of metrics, as well as AI-assisted medical triage!
Dr. Eyal Aharoni (Georgia State University) takes the Future Day 2026 stage to discuss one of the most provocative frontiers in technology: the automation of moral judgement.
Breaking the Moral Turing Test: Studies of human attribution and deference to AI moral judgment and decision-making.
The human brain constantly makes decisions, yet it requires minimal power to move the body in a desired direction or to avoid an object. A Purdue University engineer uses the brain's efficiency as inspiration to help autonomous vehicles, such as drones and robots, make crucial, time-sensitive decisions while operating in the field.
Kaushik Roy, the Edward G. Tiedemann, Jr. Distinguished Professor of Electrical and Computer Engineering in Purdue's Elmore Family School of Electrical and Computer Engineering and director of the Institute of Chips and AI, is developing brain-inspired hardware that enables autonomous devices to efficiently navigate and adapt to their environment. This work is published in Communications Engineering.
AI-powered machines have advanced significantly over the past several decades thanks to machine learning, which enables these devices to recognize patterns and make predictions or decisions. But the algorithms that facilitate this learning require immense amounts of energy to operate due to their intensive calculations and the design of the hardware that runs them.
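Brain-inspired designs typically trade the dense, always-on matrix arithmetic of conventional machine learning for sparse, event-driven activity: a neuron consumes energy mainly when it fires. As an illustration of that computing style (a textbook toy model, not Roy's actual hardware or the paper's design), here is a minimal leaky integrate-and-fire neuron:

```python
def lif_spikes(inputs, tau=20.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron (toy model).

    The membrane potential v leaks toward zero and integrates the input
    current; when v crosses threshold the neuron emits a spike (an event)
    and resets. Between spikes nothing needs to be computed or
    communicated, which is where event-driven hardware saves energy.
    """
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (i_in - v / tau)   # leaky integration step
        if v >= threshold:
            spikes.append(t)         # emit an event
            v = 0.0                  # reset potential
    return spikes

# Constant drive yields a regular spike train; zero drive yields none
print(lif_spikes([0.15] * 50))
print(lif_spikes([0.0] * 50))   # []
```

The energy argument is visible in the output: activity is a handful of discrete events rather than a multiply-accumulate at every step for every unit.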
Check out Brilliant at https://brilliant.org/TheOverviewEffekt/. You can sign up for free, and with that link you'll get a 20% discount on the annual premium subscription.
I do the relativistic math behind Project Hail Mary: time dilation, mass ratios, coast phases, and the relativistic rocket equation with astrophage. How long would it take to reach Alpha Centauri, Tau Ceti, Betelgeuse, Andromeda, and the edge of the observable universe under constant 1.5G acceleration? We also look at Andy Weir's mass ratio mistake, the astrophage infection range problem, and visualize the spread using the AT-HYG stellar catalog. Includes an interactive relativistic travel calculator on my website.
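The core calculation can be sketched with the standard relativistic-rocket formulas for a flip-and-burn trip (accelerate to the midpoint, then decelerate to rest), working in units where c = 1 and distances are in light-years. This ignores coast phases and fuel constraints, so the video's exact numbers may differ:

```python
import math

G_LY_PER_YR2 = 1.0323  # 1 g expressed in ly/yr^2 (units where c = 1)

def flip_and_burn(distance_ly, accel_g=1.5):
    """Proper (shipboard) and coordinate (Earth-frame) time in years for a
    trip that accelerates to the midpoint and decelerates to rest, using
    the standard relativistic rocket equations. No coast phase, no fuel."""
    a = accel_g * G_LY_PER_YR2
    half = distance_ly / 2
    tau = (2 / a) * math.acosh(1 + a * half)   # shipboard years
    t = (2 / a) * math.sinh(a * tau / 2)       # Earth-frame years
    return tau, t

tau, t = flip_and_burn(4.37, accel_g=1.5)   # Alpha Centauri at 1.5 g
print(f"shipboard: {tau:.2f} yr, Earth frame: {t:.2f} yr")
```

At 1.5 g the Alpha Centauri trip takes under three shipboard years while more than five years pass on Earth, which is the time-dilation gap the video explores for ever-longer trips.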
Massachusetts Institute of Technology (MIT) engineers have developed an ultrasound wristband that precisely tracks hand movements in real time for robotics and virtual reality control.
The next time you’re scrolling your phone, take a moment to appreciate the feat: The seemingly mundane act is possible thanks to the coordination of 34 muscles, 27 joints, and over 100 tendons and ligaments in your hand. Indeed, our hands are the most nimble parts of our bodies. Mimicking their many nuanced gestures has been a longstanding challenge in robotics and virtual reality.
Now, MIT engineers have designed an ultrasound wristband that precisely tracks a wearer's hand movements in real time. The wristband produces ultrasound images of the wrist's muscles, tendons, and ligaments as the hand moves, and is paired with an artificial intelligence algorithm that continuously translates the images into the corresponding positions of the five fingers and palm.
Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!
0:00 Intro
0:37 What is consciousness? Phenomenology, functionalism & panpsychism
1:54 Causal boundaries: the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity
3:20 Minds are not states, they are processes. We don't see causal filtering in tables
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism
9:49 Methodological humility about armchair philosophy of mind
12:41 Putnam-style brain-in-a-vat, and why standard objections to AI minds fall flat
16:37 Is sentience required (or desired) not just for moral competence in AI, but for moral motivation as well?
22:35 Why stepping outside yourself is powerful: seeing
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What's still missing
28:16 AI, hybrid minds, and the limits of human augmentation
32:32 Can minds be extended, in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough
39:41 Why AI is so data-hungry, and why better algorithms must exist
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception
51:05 What comes after copilots: agent teams, multimodality and new AI workflows
55:32 Can AI help us discover new forms of taste and aesthetics?
59:49 Using AI to learn art history and invent a transhumanist aesthetic
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create
1:08:10 What happens when AI starts shaping human relationships
1:11:18 Why feeling in control can matter more than being right
1:12:58 Why intelligence without wisdom is very dangerous
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere?
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem
1:29:47 Can AI become more moral than us humans? And if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan
1:59:36 Will superintelligences converge into a cosmic singleton?
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…