
Very soon after the Big Bang, the universe passed through a brief phase in which quarks and gluons roamed freely, not yet joined up into hadrons such as protons, neutrons and mesons. This state, called a quark-gluon plasma, persisted until the temperature dropped to about 20 trillion Kelvin, at which point this “hadronization” took place.

Now a research group from Italy has presented new calculations of the plasma’s equation of state that show how important the strong force was before the hadrons formed. Their work is published in Physical Review Letters.

The equation of state of quantum chromodynamics (QCD) represents the collective behavior of particles that experience the strong force—a gas of strongly interacting particles at equilibrium, with its numbers and net energy unchanging. It’s analogous to the well-known, simple equation of state of atoms in a gas, PV=nRT, but can’t be so simply summarized.
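For a sense of what such an equation of state looks like, the textbook free-gas (Stefan-Boltzmann) limit of a quark-gluon plasma can be written next to the ideal-gas law. This is only an illustrative benchmark in natural units, with g_* counting gluon and quark degrees of freedom and N_f the number of light quark flavors; the interacting equation of state computed in the new work departs from this free limit precisely because of the strong force.

% Ideal atomic gas, for comparison:
PV = nRT
% Free (non-interacting) quark-gluon gas at temperature T, in natural units
% (\hbar = c = k_B = 1); g_* counts gluon and quark degrees of freedom:
p(T) \approx \frac{\pi^2}{90}\, g_*\, T^4, \qquad
\epsilon(T) \approx 3\, p(T), \qquad
g_* = 16 + \tfrac{21}{2} N_f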

As artificial intelligence takes off, how do we efficiently integrate it into our lives and our work? Bridging the gap between promise and practice, Jann Spiess, an associate professor of operations, information, and technology at Stanford Graduate School of Business, is exploring how algorithms can be designed to most effectively support—rather than replace—human decision-makers.

This research, published on the arXiv preprint server, is particularly pertinent as prediction machines are integrated into real-world applications. Mounting evidence suggests that high-stakes decisions made with AI assistance are often no better than those made without it.

From credit reports, where an overreliance on AI may lead to misinterpretation of risk scores, to content moderation, where models may depend on certain words to flag toxicity and so misclassify posts, successful implementation lags behind the technology’s remarkable capabilities.

Long-read sequencing technologies analyze long, continuous stretches of DNA. These methods have the potential to improve researchers’ ability to detect complex genetic alterations in cancer genomes. However, the complex structure of cancer genomes means that standard analysis tools, including existing methods specifically developed to analyze long-read sequencing data, often fall short, leading to false-positive results and unreliable interpretations of the data.

These misleading results can compromise our understanding of how tumors evolve, respond to treatment, and ultimately how patients are diagnosed and treated.

To address this challenge, researchers developed SAVANA, a new algorithm which they describe in the journal Nature Methods.

The advancement of artificial intelligence (AI) and the study of neurobiological processes are deeply interlinked, as a deeper understanding of the former can yield valuable insight about the other, and vice versa. Recent neuroscience studies have found that mental state transitions, such as the transition from wakefulness to slow-wave sleep and then to rapid eye movement (REM) sleep, modulate temporary interactions in a class of neurons known as layer 5 pyramidal two-point neurons (TPNs), aligning them with a person’s mental states.

These are interactions between information originating from the external world, broadly referred to as the receptive field (RF), and inputs emerging from internal states, referred to as the contextual field (CF). Past findings suggest that RF and CF inputs are processed at two distinct sites within the neurons, known as the basal site and apical site, respectively.

Current AI algorithms employing attention mechanisms, such as transformer, Perceiver and Flamingo models, are inspired by the capabilities of the human brain. In their current form, however, they do not reliably emulate high-level perceptual processing and the imaginative states experienced by humans.
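As a point of reference, the core operation these models share is scaled dot-product attention. The minimal NumPy sketch below is purely illustrative (the function name, array shapes and random inputs are arbitrary) and is not the brain-inspired two-point mechanism discussed in the study.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention as used in transformer-style models.
    Q, K, V: arrays of shape (sequence_length, dimension)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # similarity of queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ V                                  # weighted mix of values

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)      # (4, 8)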

A team of researchers at Google Quantum AI, led by Craig Gidney, has outlined advances in quantum computer algorithms and error correction methods that could allow such computers to crack Rivest–Shamir–Adleman (RSA) encryption keys with far fewer resources than previously thought. The development, the team notes, suggests encryption experts need to begin work toward developing next-generation encryption techniques. The paper is published on the arXiv preprint server.

RSA is an encryption technique developed in the late 1970s that involves generating public and private keys; the former is used for encryption and the latter for decryption. Current standards call for using a 2,048-bit encryption key. Over the past several years, research has suggested that quantum computers would one day be able to crack RSA encryption, but because progress in quantum hardware has been slow, researchers believed that it would be many years before it came to pass.
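To see why the ability to factor the public modulus is the whole game, here is a toy RSA key pair built from deliberately tiny primes (the standard textbook numbers, nothing from the Google analysis); real keys use 2,048-bit moduli precisely so that this factoring step is out of reach for classical computers.

# Toy RSA for illustration only; requires Python 3.8+ for pow(e, -1, phi)
p, q = 61, 53                  # secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient (3120)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: modular inverse of e (2753)

message = 65
ciphertext = pow(message, e, n)      # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)    # decrypt with the private key (d, n)
assert recovered == message

# An attacker who factors n back into p and q can recompute phi and then d,
# so the private key falls out of the factorization.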

Some in the field have accepted a theory that a quantum computer capable of cracking such codes in a reasonable amount of time would have to have at least 20 million qubits. In this new work, the team at Google suggests it could theoretically be done with as few as a million qubits—and it could be done in a week.

A research team from the University of South China has developed a set of algorithms to help optimize radiation-shielding design for new types of nuclear reactors.

Their achievement, published in the journal Nuclear Science and Techniques and reported by TechXplore, will help engineers meet the demanding requirements of next-generation reactors, including transportable models as well as those intended for marine and space environments.

Safety is of paramount concern when it comes to nuclear energy, especially considering the public’s perception of this clean energy source following some notable accidents over the past 68 years.

Machine-learning algorithms can now estimate the “brain age” of infants with unprecedented precision by analyzing electrical brain signals recorded using electroencephalography (EEG).

A team led by Sarah Lippé at Université de Montréal’s Department of Psychology has developed a method that can determine in minutes whether a baby’s brain development is advanced, delayed or in line with their chronological age.
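The article does not spell out the model itself, but the general recipe behind an EEG “brain age” is a regression from brain-signal features to chronological age, with the gap between predicted and actual age read as advanced or delayed development. The sketch below is a hypothetical illustration with synthetic band-power features and a ridge regression, not the team’s actual pipeline.

# Hypothetical EEG "brain age" sketch; data and features are synthetic,
# not the method used by the Université de Montréal team.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_babies, n_features = 200, 32                       # e.g. band power per channel
age_months = rng.uniform(1, 36, n_babies)            # chronological ages
X = age_months[:, None] * rng.normal(1.0, 0.1, (n_babies, n_features))  # age-correlated features

X_train, X_test, y_train, y_test = train_test_split(X, age_months, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

predicted = model.predict(X_test)                    # the model's "brain age"
brain_age_gap = predicted - y_test                   # positive = advanced, negative = delayed
print(brain_age_gap[:5].round(2))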

This breakthrough promises to enable early screening and personalized monitoring of developmental disorders in babies.

Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, whether a given action will result in a positive (that is, rewarding) outcome. The study of how organisms learn from experience to correctly anticipate rewards has been a productive research field for well over a century, since Ivan Pavlov’s seminal psychological work. In his most famous experiment, dogs were trained to expect food some time after a buzzer sounded. These dogs began salivating as soon as they heard the sound, before the food had arrived, indicating they’d learned to predict the reward. In the original experiment, Pavlov estimated the dogs’ anticipation by measuring the volume of saliva they produced. But in recent decades, scientists have begun to decipher the inner workings of how the brain learns these expectations. Meanwhile, in close contact with this study of reward learning in animals, computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.

The contribution of our new work, published in Nature, is finding that a recent development in computer science, one that yields significant improvements in performance on reinforcement learning problems, may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain. It also opens up new avenues of research into the brain’s dopamine system, with potential implications for learning and motivation disorders.

Reinforcement learning is one of the oldest and most powerful ideas linking neuroscience and AI. In the late 1980s, computer science researchers were trying to develop algorithms that could learn how to perform complex behaviours on their own, using only rewards and punishments as a teaching signal. These rewards would serve to reinforce whatever behaviours led to their acquisition. To solve a given problem, it’s necessary to understand how current actions result in future rewards. For example, a student might learn by reinforcement that studying for an exam leads to better scores on tests. In order to predict the total future reward that will result from an action, it’s often necessary to reason many steps into the future.
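A compact way to see this multi-step credit assignment is the temporal-difference update at the heart of classical reinforcement learning: each state’s predicted future reward is nudged toward the reward just received plus the prediction at the next state. The toy chain of states below (loosely “buzzer, delay, food”) and its learning rate are purely illustrative.

# Minimal temporal-difference (TD) learning sketch: a chain of states ending in a reward,
# loosely mirroring "buzzer -> delay -> food". All numbers are illustrative.
import numpy as np

n_states = 5                          # state 0 = cue, state 4 = reward delivery
alpha, gamma = 0.1, 0.9               # learning rate and discount factor
V = np.zeros(n_states)                # predicted future reward in each state

for episode in range(500):
    for s in range(n_states - 1):
        reward = 1.0 if s + 1 == n_states - 1 else 0.0
        # TD error: how much better or worse things turned out than predicted
        td_error = reward + gamma * V[s + 1] - V[s]
        V[s] += alpha * td_error

print(V.round(2))   # early states come to predict the future reward, discounted by gamma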

Kirigami is a traditional Japanese art form that entails cutting and folding paper to produce complex three-dimensional (3D) structures or objects. Over the past decades, this creative practice has also been applied in the context of physics, engineering, and materials science research to create new materials, devices and even robotic systems.

Researchers at Sichuan University and McGill University recently devised a new approach for the inverse engineering of kirigami, which does not rely on advanced computational tools and numerical algorithms. This new method, outlined in a paper published in Physical Review Letters, could simplify the design of intricate kirigami for a wide range of real-world applications.

“This work is a natural extension of our previous work on kirigami,” Damiano Pasini, senior corresponding author of the paper, told Phys.org.