
Entanglement is perhaps one of the most confusing aspects of quantum mechanics. On its surface, entanglement seems to let particles communicate over vast distances instantly, in apparent violation of the light-speed limit. But while entangled particles are connected, they do not actually exchange information.

In quantum mechanics, a particle isn’t really a particle. Instead of being a hard, solid, precise point, it is a cloud of fuzzy probabilities describing where we might find it when we actually go looking. Until we perform a measurement, we can’t know everything we would like to know about the particle.

These fuzzy probabilities are known as quantum states. In certain circumstances, we can connect two particles in a quantum way, so that a single mathematical equation describes both sets of probabilities simultaneously. When this happens, we say that the particles are entangled.
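
To make this concrete, here is a standard textbook example rather than anything from the article above: a Bell state, in which a single expression carries the joint probabilities for two particles at once.

```latex
% A Bell state: a single expression describing two entangled particles A and B.
% Measuring either particle in the {|0>, |1>} basis fixes the correlated outcome
% for the other, yet no usable information travels between them.
\[
  |\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(|0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B\bigr)
\]
```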

In recent years, roboticists have developed a wide range of systems that could eventually be introduced in health care and assisted living facilities. These include both medical robots and robots designed to provide companionship or assistance to human users.

Researchers at Shanghai Jiao Tong University and the University of Shanghai for Science and Technology recently developed a robotic system that can give human users a massage employing traditional Chinese medicine (TCM) techniques. The new robot, introduced in a paper on the arXiv preprint server, could eventually be deployed in health care, wellness and rehabilitation facilities as an additional therapeutic tool for patients experiencing different types of pain or discomfort.

“We adopt an adaptive admittance control algorithm to optimize force and position control, ensuring safety and comfort,” wrote Yuan Xu, Kui Huang, Weichao Guo and Leyi Du in their paper. “The paper analyzes key TCM techniques from kinematic and dynamic perspectives and designs to reproduce these massage techniques.”
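
For readers unfamiliar with admittance control, the sketch below shows the general idea in one dimension: a virtual mass-spring-damper turns the error between measured and desired contact force into a small position correction, so the end effector yields instead of pressing harder. It is a minimal illustration with made-up parameter values, not the adaptive controller described in the paper.

```python
# Minimal 1-D admittance control sketch (illustrative only; this is not the
# adaptive controller described in the paper, and all parameter values are
# assumptions). A virtual mass-spring-damper M*a + D*v + K*x = force_error turns
# the difference between measured and desired contact force into a position
# correction, so the end effector yields instead of pressing harder.

M, D, K = 1.0, 20.0, 100.0   # virtual inertia, damping and stiffness
dt = 0.001                   # control period (s)

def admittance_step(force_error, x, v):
    """Advance the virtual mass-spring-damper by one control period."""
    a = (force_error - D * v - K * x) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# Toy run: a constant 5 N excess contact force drives the correction outward.
x, v = 0.0, 0.0
for _ in range(2000):
    x, v = admittance_step(force_error=5.0, x=x, v=v)
print(f"steady-state offset ~ {x:.4f} m (expected 5 N / K = {5.0 / K:.4f} m)")
```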

Reliably measuring the polarization state of light is crucial for various technological applications, ranging from optical communication to biomedical imaging. Yet conventional polarimeters are made of bulky components, which makes them difficult to reduce in size and limits their widespread adoption.

Researchers at the Shanghai Institute of Technical Physics (SITP) of the Chinese Academy of Sciences and other institutes recently developed an on-chip full-Stokes polarimeter that could be easier to deploy on a large scale. Their device, presented in a paper in Nature Electronics, is based on optoelectronic eigenvectors, mathematical representations of the linear relationship between the incident Stokes vector and a detector’s photocurrent.

“This work was driven by the growing demand for compact, high-performance polarization analysis devices in optoelectronics,” Jing Zhou, corresponding author of the paper, told Phys.org. “Traditional polarimeters, which rely on discrete bulky optical components, present significant challenges to miniaturization and limit their broader applicability. Our main goal is to develop an on-chip solution capable of direct electrical readout to reconstruct full-Stokes polarization states.”
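
The core idea of reconstructing a full Stokes vector from electrical readouts can be sketched as a linear inverse problem. The response matrix below is invented for illustration, and the sketch does not model the optoelectronic-eigenvector device itself; it only shows how photocurrents that depend linearly on the Stokes vector can be inverted by least squares.

```python
import numpy as np

# Hedged sketch of full-Stokes reconstruction from photocurrents. Assumption (not
# taken from the paper): each detector channel produces a photocurrent that is
# linear in the incident Stokes vector S = (S0, S1, S2, S3), i.e. i = A @ S. With
# at least four well-conditioned channels, S is recovered by least squares. The
# response matrix A below is invented for illustration.

rng = np.random.default_rng(0)

A = np.array([                      # 4 channels x 4 Stokes components (illustrative)
    [1.0,  0.9,  0.0,  0.0],
    [1.0, -0.9,  0.0,  0.0],
    [1.0,  0.0,  0.9,  0.0],
    [1.0,  0.0,  0.0,  0.9],
])

S_true = np.array([1.0, 0.3, -0.5, 0.6])        # example incident Stokes vector
i_meas = A @ S_true + rng.normal(0, 1e-3, 4)    # photocurrents with a little noise

S_est, *_ = np.linalg.lstsq(A, i_meas, rcond=None)
print("true Stokes vector:     ", S_true)
print("recovered Stokes vector:", np.round(S_est, 3))
```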

Quantum walks are a powerful theoretical model that uses quantum effects such as superposition, interference and entanglement to achieve computing power beyond classical methods.

A research team at the National Innovation Institute of Defense Technology of the Academy of Military Sciences (China) recently published a review article that thoroughly summarizes the theories and characteristics, physical implementations, applications and challenges of quantum walks and quantum walk computing. The review was published Nov. 13 in Intelligent Computing in an article titled “Quantum Walk Computing: Theory, Implementation, and Application.”

As quantum mechanical equivalents of classical random walks, quantum walks use quantum phenomena to design advanced algorithms for applications such as database search as well as network analysis and navigation. Different types of quantum walks include discrete-time quantum walks, continuous-time quantum walks, discontinuous quantum walks, and nonunitary quantum walks. Each model presents unique features and computational advantages.
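
As a minimal illustration of the discrete-time model (not drawn from the review), the sketch below simulates a Hadamard-coin quantum walk on a line and compares its spread with the square-root scaling of a classical random walk.

```python
import numpy as np

# Minimal sketch of a discrete-time quantum walk on a line (illustrative only,
# not code from the review). The walker carries a two-level "coin"; each step
# applies a Hadamard coin flip followed by a coin-conditioned shift. Interference
# makes the walker spread linearly in time, unlike a classical random walk.

steps = 100
positions = 2 * steps + 1                      # grid wide enough to hold every step
psi = np.zeros((positions, 2), dtype=complex)  # amplitude[position, coin]
psi[steps, 0] = 1.0                            # start at the center with coin "up"

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin operator

for _ in range(steps):
    psi = psi @ H.T                            # coin flip at every site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]               # coin 0 component moves right
    shifted[:-1, 1] = psi[1:, 1]               # coin 1 component moves left
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)          # position distribution
x = np.arange(positions) - steps
spread = np.sqrt((prob * x ** 2).sum())
print(f"spread after {steps} steps: {spread:.1f} (classical walk ~ {np.sqrt(steps):.1f})")
```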

Researchers at UC San Diego have developed SMART, a software package capable of realistically simulating cell-signaling networks.

This tool, tested across various biological systems, enhances the understanding of cellular responses and aids in advancing research in fields like systems biology and pharmacology.

Researchers at the University of California San Diego (UCSD) have developed and tested a new software tool called Spatial Modeling Algorithms for Reactions and Transport (SMART). This innovative software can accurately simulate cell-signaling networks — the intricate systems of molecular interactions that enable cells to respond to signals from their environment. These networks are complex due to the many steps involved and the three-dimensional shapes of cells and their components, making them challenging to model with existing tools. SMART addresses these challenges, promising to accelerate research in fields such as systems biology, pharmacology, and biomedical engineering.

Researchers at the University of North Carolina at Chapel Hill and the University of Maryland recently developed MyTimeMachine (MyTM), a new AI-powered method for personalized age transformation that can make human faces in images or videos appear younger or older while accounting for the subjective factors that influence aging.

This algorithm, introduced in a paper posted to the arXiv preprint server, could be used to broaden or enhance the features of consumer-facing picture-editing platforms, but could also be a valuable tool for the film, TV and entertainment industries.

“Virtual aging techniques are widely used in visual effects (VFX) in movies, but they require good prosthetics and makeup, often tiresome and inconvenient for actors to wear regularly during shooting,” Roni Sengupta, the researcher who supervised the study, told Tech Xplore.

Researchers at the University of California San Diego have developed and tested a new software package, called Spatial Modeling Algorithms for Reactions and Transport (SMART), that can realistically simulate cell-signaling networks—the complex systems of molecular interactions that allow cells to respond to diverse cues from their environment.

Cell-signaling networks involve many distinct steps and are also greatly influenced by the complex, three-dimensional shapes of cells and subcellular components, making them difficult to simulate with existing tools. SMART offers a solution to this problem, which could help accelerate research in fields across the life sciences, such as systems biology, pharmacology and biomedical engineering.

The researchers successfully tested the new software in biological systems at several different scales, from cell signaling in response to adhesive cues, to calcium release events in subcellular regions of neurons, to the production of ATP (the energy currency in cells) within a detailed representation of a single mitochondrion.
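
To give a flavor of the kind of spatial problem such software solves, here is a toy one-dimensional reaction-diffusion sketch with invented parameter values. It does not use the SMART package or reproduce any of the systems above; it only shows how diffusion and degradation shape a signaling gradient across a cell-sized domain.

```python
import numpy as np

# Toy 1-D reaction-diffusion sketch (illustrative only; it does not use the SMART
# package or reproduce the models above). A signaling species is held at a fixed
# concentration at the membrane end of a 10-micron domain, diffuses inward and is
# degraded, producing the kind of spatial gradient that full 3-D cell geometries
# shape much more strongly. All parameter values are assumptions.

L_um, n = 10.0, 101            # domain length (microns) and number of grid points
dx = L_um / (n - 1)
D = 10.0                       # diffusion coefficient (um^2/s)
k_deg = 1.0                    # first-order degradation rate (1/s)
c_mem = 1.0                    # concentration held at the membrane end (uM)
dt = 0.4 * dx ** 2 / (2 * D)   # explicit time step, safely below the stability limit

c = np.zeros(n)                # concentration profile (uM)
for _ in range(int(5.0 / dt)): # simulate 5 seconds
    c[0] = c_mem                                        # fixed value at x = 0
    lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2      # discrete Laplacian (interior)
    c[1:-1] += dt * (D * lap - k_deg * c[1:-1])
    c[-1] = c[-2]                                       # no-flux boundary at x = L

print(f"concentration near the membrane: {c[1]:.3f} uM; at the far end: {c[-1]:.3f} uM")
```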

At night, charged particles from the sun caught by Earth’s magnetosphere rain down into the atmosphere. The impacting particles rip electrons from atoms in the atmosphere, creating both beauty and chaos. These high-energy interactions cause the northern and southern lights, but they also scatter radio signals, wreaking havoc on ground-based and satellite communications.

Scientists would like to track electrical activity in the ionosphere by measuring the distribution of plasma, the form matter takes when positive ions are separated from their electrons, to help better predict how communications will be affected by electromagnetic energy.

But analyzing plasma in the ionosphere is a challenge because its distribution changes quickly and its movements are often unpredictable. In addition, collisional physics makes detecting true motion in the lower ionosphere exceedingly difficult.

A new visual recognition approach improved a machine learning technique’s ability both to identify an object and to determine how it is oriented in space, according to a study presented in October at the European Conference on Computer Vision in Milan, Italy.

Self-supervised learning is a machine learning approach that trains on unlabeled data, which helps it generalize to real-world data. While it excels at identifying objects, a task called semantic classification, it can struggle to recognize objects in new poses.

This weakness quickly becomes a problem in situations like autonomous vehicle navigation, where an algorithm must assess whether an approaching car is a head-on collision threat or side-oriented and just passing by.
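
A small synthetic experiment can illustrate the weakness described above (this is not the method from the study). If learned features encode an object’s category strongly but its orientation only weakly, a simple classifier built on those features identifies categories almost perfectly yet performs far worse at predicting pose.

```python
import numpy as np

# Synthetic illustration (not the method from the study). Features that encode an
# object's category strongly but its orientation only weakly support accurate
# semantic classification, yet a classifier built on the same features performs
# far worse at predicting pose. All numbers below are made up.

rng = np.random.default_rng(0)
n, dim = 2000, 64
classes = rng.integers(0, 5, n)               # 5 object categories
poses = rng.integers(0, 4, n)                 # 4 coarse orientations

class_dirs = rng.normal(size=(5, dim))        # strong category signal
pose_dirs = rng.normal(size=(4, dim))         # weak orientation signal
feats = 3.0 * class_dirs[classes] + 0.3 * pose_dirs[poses] + rng.normal(size=(n, dim))

def nearest_centroid_accuracy(x, y, n_labels, split=1000):
    """Fit per-label centroids on the first `split` samples, test on the rest."""
    centroids = np.stack([x[:split][y[:split] == k].mean(axis=0) for k in range(n_labels)])
    dists = ((x[split:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return (dists.argmin(axis=1) == y[split:]).mean()

print("category accuracy:", nearest_centroid_accuracy(feats, classes, 5))
print("pose accuracy:    ", nearest_centroid_accuracy(feats, poses, 4))
```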