
Markov chain Monte Carlo

In statistics, Markov chain Monte Carlo (MCMC) is a class of algorithms used to draw samples from a probability distribution. Given a probability distribution, one can construct a Markov chain whose elements’ distribution approximates it – that is, the Markov chain’s equilibrium distribution matches the target distribution. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution.

Markov chain Monte Carlo methods are used to study probability distributions that are too complex or too high dimensional to study with analytic techniques alone. Various algorithms exist for constructing such Markov chains, including the Metropolis–Hastings algorithm.
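As a minimal sketch of the Metropolis–Hastings algorithm mentioned above (the function and parameter names are our own, and the target here is a standard normal chosen purely for illustration), the following Python snippet builds a random-walk Markov chain whose equilibrium distribution matches an unnormalized target density:

```python
import math
import random

def metropolis_hastings(log_p, x0, n_steps, step=1.0):
    """Random-walk Metropolis-Hastings: construct a Markov chain whose
    equilibrium distribution is proportional to exp(log_p)."""
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)
        # Symmetric proposal, so accept with probability min(1, p(x')/p(x)).
        if math.log(random.random()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return samples

random.seed(0)
# Target: standard normal, with log-density known only up to a constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000)
kept = chain[10_000:]  # discard burn-in; the rest approximates N(0, 1)
mean = sum(kept) / len(kept)
var = sum((s - mean) ** 2 for s in kept) / len(kept)
```

Note that only the ratio of densities enters the acceptance step, which is why the normalizing constant of the target distribution is never needed; and, as the article says, the more steps the chain takes, the closer the sample distribution gets to the target.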

Quantum computers must overcome major technical hurdles before tackling quantum chemistry problems

Although the potential applications of quantum computing are widespread, a new feasibility study suggests quantum computers still face major hurdles in solving quantum chemistry problems. The study, published in Physical Review B, evaluates what criteria are needed for a quantum advantage in searching for the ground state energy of molecules. The researchers attempt this feat using two different algorithms with differing strengths and weaknesses.

The team first derived criteria for the variational quantum eigensolver (VQE), an algorithm suited to noisy, near-term devices; these criteria set an upper bound on the level of imprecision and decoherence that the quantum hardware can tolerate. They then did the same for quantum phase estimation (QPE), obtaining quantitative criteria for both algorithms based on error rates, energy scales, and overlap with the ground state.

Results showed that VQE is extremely sensitive to hardware errors and decoherence. The team says that achieving chemical accuracy would require error rates far below current hardware capabilities. Available error mitigation techniques offer only limited improvement and scale poorly with system size.

The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness

The core issue: computation isn’t an intrinsic physical process; it’s an extrinsic, descriptive map. It logically requires an active, experiencing cognitive agent, a “mapmaker”, to alphabetize continuous physics into meaningful, discrete symbols.


Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate. We argue this view fundamentally mischaracterizes how physics relates to information. We call this mistake the Abstraction Fallacy. Tracing the causal origins of abstraction reveals that symbolic computation is not an intrinsic physical process. Instead, it is a mapmaker-dependent description. It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states. Consequently, we do not need a complete, finalized theory of consciousness to assess AI sentience—a demand that simply pushes the question beyond near-term resolution and deepens the AI welfare trap. What we actually need is a rigorous ontology of computation. The framework proposed here explicitly separates simulation (behavioral mimicry driven by vehicle causality) from instantiation (intrinsic physical constitution driven by content causality). Establishing this ontological boundary shows why algorithmic symbol manipulation is structurally incapable of instantiating experience. Crucially, this argument does not rely on biological exclusivity. If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. Ultimately, this framework offers a physically grounded refutation of computational functionalism to resolve the current uncertainty surrounding AI consciousness.


The deep mystery physicists call “the problem of time” | Jim Al-Khalili: Full Interview

Become a Big Think member to unlock expert classes, premium print issues, exclusive events and more: https://bigthink.com/membership/?utm_

Preorder Jim Al-Khalili’s forthcoming book, On Time: The Physics That Makes the Universe, here: https://www.amazon.com/Time-Physics-T?tag=lifeboatfound-20

Up next.
Brian Cox: The quantum roots of reality | Full Interview

Time feels obvious, but physics tells a stranger story about its existence: Theoretical physicist Jim Al-Khalili explores why our sense of time may be incredibly misleading, including the idea that past, present, and future might all exist at once.

0:00 Chapter 1: Does time flow?
2:42 Why Time Feels Faster as We Age.
3:56 Time and Change in Philosophy and Physics.
5:28 Einstein and the End of Absolute Time.
6:19 Time in the Equations of Physics.
7:50 Chapter 2: How do we reconcile quantum field theory with the general theory of relativity?
12:10 Evidence for Time Dilation: Muons.
14:29 Gravity Slows Time: General Relativity.
19:22 Space-Time and the Block Universe.
21:55 Does Time Really Exist?
26:33 The Debate: Eternalism vs Presentism.
34:12 Chapter 3: Is There a “Now”?
40:40 Chapter 4: Why Does Thermodynamics Have a Direction in Time?
49:38 Quantum Entanglement and the Direction of Time.
55:10 Did Time Begin at the Big Bang?
45:00 Will Time End?
1:05:40 Chapter 5: Is Time Travel Possible?

Cool Qubits Make Faster Decisions

Classical machine learning has benefited several physics subfields, from materials science to medical imaging. Implementing machine-learning algorithms on quantum computers could expand their use to more complex problems and to datasets that are inherently quantum. Nayeli Rodríguez-Briones at the Technical University of Vienna and Daniel Park at Yonsei University in South Korea have now proposed a thermodynamics-inspired protocol that could make quantum machine-learning techniques more efficient [1].

In one common classical machine-learning task, a system is trained on a known dataset and then challenged to classify new data. Its output quantifies both the classification and that classification’s uncertainty. Once the system’s parameters are fixed, evaluating the same data yields the same output. In contrast, the output of a quantum machine-learning algorithm is read out as binary measurements of qubits, which are inherently probabilistic. Because a single measurement provides only limited information, the computation must be repeated many times.

Rodríguez-Briones and Park recognized that how clearly a quantum computer reveals its output is determined by entropy. When the readout qubit is highly polarized—strongly favoring one outcome—its entropy is low. Few repetitions are needed to obtain a firm result. An unpolarized, high-entropy readout qubit returns both states more evenly, meaning more repetitions are required. The researchers showed that the readout qubit's polarization can be increased by transferring its entropy to ancillary qubits, effectively cooling one while warming the others. Between runs, the ancillary qubits are reset by coupling them to a heat bath. Crucially, this entropy transfer affects the readout qubit's degree of polarization without changing the encoded decision. The upshot: A given result can be arrived at with fewer repetitions.
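The repetition tradeoff can be illustrated with a toy calculation that is not the authors' protocol: model each readout as a biased coin whose bias plays the role of the qubit's polarization, and estimate by simulation how many repeated shots a majority vote needs to reach a fixed confidence. All names and parameters below are illustrative assumptions:

```python
import random

def shots_for_confidence(p_correct, target=0.99, trials=2000, max_shots=2001):
    """Smallest odd number of repeated measurements for which a majority
    vote recovers the encoded outcome with probability >= target,
    estimated by Monte Carlo. p_correct models the chance that a single
    shot returns the encoded outcome (the readout 'polarization')."""
    for shots in range(1, max_shots, 2):  # odd counts avoid ties
        wins = 0
        for _ in range(trials):
            successes = sum(random.random() < p_correct for _ in range(shots))
            if successes > shots // 2:
                wins += 1
        if wins / trials >= target:
            return shots
    return None

random.seed(0)
low_entropy = shots_for_confidence(0.95)   # highly polarized readout
high_entropy = shots_for_confidence(0.60)  # nearly unpolarized readout
```

A strongly polarized readout (95% per-shot bias) settles the vote within a handful of repetitions, while a nearly unpolarized one (60%) needs on the order of a hundred, which is the scaling behind the authors' motivation for cooling the readout qubit before measurement.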

Seeing global trade through the lens of physics

New research from the Complexity Science Hub (CSH) shows why widely used algorithms for measuring economic complexity produce trustworthy results and how these tools may benefit diverse areas such as ecology, social science, and agentic AI. The paper is published in the journal Physical Review E.

Joscha Bach & Anders Sandberg

Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!

0:00 Intro.
0:37 What is consciousness? Phenomenology — functionalism & panpsychism.
1:54 Causal boundaries — the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity.
3:20 Minds are not states — they are processes. We don’t see causal filtering in tables.
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism.
9:49 Methodological humility about armchair philosophy of mind.
12:41 Putnam-style Brain-in-a-vat — and why standard objections to AI minds fall flat.
16:37 Is sentience required (or desired) for not just moral competence in AI, but moral motivation as well?
22:35 Why stepping outside yourself is powerful — seeing.
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What’s still missing.
28:16 AI, hybrid minds, and the limits of human augmentation.
32:32 Can minds be extended — in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough.
39:41 Why AI is so data-hungry — and why better algorithms must exist.
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception.
51:05 What comes after copilots: agent teams, multimodality and new AI workflows.
55:32 Can AI help us discover new forms of taste and aesthetics?
59:49 Using AI to learn art history and invent a transhumanist aesthetic.
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create.
1:08:10 What happens when AI starts shaping human relationships.
1:11:18 Why feeling in control can matter more than being right.
1:12:58 Why intelligence without wisdom is very dangerous.
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere?
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem.
1:29:47 Can AI become more moral than us (humans)? And if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved.
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries.
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan.
1:59:36 Will superintelligences converge into a cosmic singleton?

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!
Buy me a coffee? https://buymeacoffee.com/tech101z

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards.
Adam Ford.
Science, Technology & the Future — #SciFuture — http://scifuture.org


Joscha Bach delivers “The Machine Consciousness Hypothesis” at Future Day 2026

Can AI become conscious?

What is consciousness for? And is biological consciousness best understood as a self-organising algorithm that could, in principle, be recreated in machines?

In this talk, Joscha explores consciousness as perception of perception, coherence maintenance, modelling, resonance, self-organisation, and the possibility that machine consciousness may emerge through the right virtual architecture.

Essay: ‘The Machine Consciousness Hypothesis’ by Joscha Bach & Hikari Sorenson: https://cimc.ai/cimcHypothesis.pdf

CIMC: https://cimc.ai

Post: https://scifuture.org/joscha-bach-the…

The macroecology of immunity: predominant influence of climate on invertebrate immune response

https://vist.ly/4u8bp

The immune system is the primary defense against parasites. With the ever-increasing rate of disease, epidemiologic models considering geographic variation in immune responses could prove useful. Despite increasing interest in the macroecology of parasitism and infectious diseases, we know little about the macroecology of immune responses (i.e. macroimmunology). Host characteristics, parasite exposure, and environmental factors can all affect immunity, but how these factors shape spatial variation in the strength of immune responses remains underexplored. We captured odonates (dragonflies and damselflies) and their conspicuous ectoparasitic mites from 42 sites spread across a geographic area spanning the temperate and boreal forest biomes in eastern Canada. We then conducted immune response bioassays on 1237 individuals from 63 odonate species. We used generalized additive models and structural equation models to relate immune responses to host body size, parasite load, pH, temperature and precipitation while accounting for spatial autocorrelation in immune ability and evolutionary relationships among host species. We found significant differences in the strength of immune response among host individuals, and this variation was best explained by climatic conditions, specifically strongly decreasing with precipitation. While host species significantly differed in immune response strength, we found no effect of host body size, evolutionary relationships among hosts, or parasitism on immune response. Our study investigating the drivers of immune response across dozens of species spread across two biomes is the most comprehensive to date. Climatic conditions have a strong influence on host immune response, regardless of host characteristics or parasitism rates. 
Strong immune responses were associated with low levels of annual precipitation, which could relate to the role of cuticular melanin content in desiccation resistance, and the melanin-based encapsulation response being a byproduct of this adaptation. A spatially explicit understanding of the biological processes affecting immunity could improve epidemiological models of disease risk that inform disease management globally.


Predicting parasite and pathogen spread is increasingly relevant and challenging in a highly connected world (Tsiotas and Tselios 2022), and an animal’s immune system is the first line of defense against attack by parasites and pathogens. Yet, the factors driving variation in immunity among individuals, populations, and species are poorly studied and rarely factored into epidemiologic models (Becker et al. 2019). Characteristics of the host, exposure to parasites or pathogens, and the abiotic environment can interact in complex ways to affect immunity (Sweeny and Albery 2022), but their interactions are challenging to elucidate (Johnson et al. 2019).

As the immune system is the primary line of defense against infection by parasites, pathogens, and disease, it is assumed to be costly in terms of fitness and should therefore lead to tradeoffs with life-history traits (e.g. fecundity, fertility, Albery et al. 2021). Although a plethora of studies have provided key evidence of immune variation due to such tradeoffs, most studies emphasize the role of biotic factors such as predation (Duong and McCauley 2016) and resource availability (Hasik et al. 2025a) without considering that of abiotic factors (Lazzaro and Little 2008). A relationship between immune response and temperature is expected in both invertebrate ectotherms (Mastore et al. 2019) and vertebrate endotherms (Butler et al. 2013), due to the thermal sensitivity of the enzymes involved in immune responses (Catalán et al. 2012). When one scales this temperature-dependent immunity to explore the effect of climate (specifically, temperature and humidity), then climate is expected to be a clear driver of geographic variation in immunity (Li et al. 2024).

Parasites are a leading cause of disease and death around the world and thus are drivers of life-history evolution via their effects on host fitness (Hasik and Siepielski 2022a) that have the potential to affect host macroevolutionary dynamics (Hasik et al. 2025b). The majority of organisms on earth are infected by at least one parasite (Price 1980), and yet, we have a very limited understanding of the multifarious factors governing the intensity of infection and, therefore, the health cost. Immune responses are necessary to defend organisms from the deleterious and fitness-reducing effects of parasites (and disease in general, Hasik and Siepielski 2022a). Although there is increasing interest in the macroecology of parasites and infectious diseases (Stephens et al. 2016), we know very little about macroimmunology (Becker et al. 2020). Both among-individual and interspecific variation in immune response surely plays a central role, but the factors regulating immunity in natural settings are poorly understood, which can interfere with the accuracy of predictive epidemiologic models. Environmental factors and local parasite pressure can independently drive differences in immunity across space, but they could also act in concert (Becker et al. 2020). Parasitism varies among host populations distributed across large-scale environmental gradients (LoScerbo et al. 2020, Hasik and Siepielski 2022b) and at fine spatial scales, within populations (Albery et al. 2019, Hasik et al. 2025a). To date, however, the focus on a limited set of taxa, specifically vertebrates (Becker et al. 2020), limits our ability to identify generalities regarding the relative influence of environmental conditions and parasitism on immune defenses that would apply across host–parasite systems (Rolff and Siva-Jothy 2003).
