
HarmonyGNN boosts graph AI accuracy on four tough benchmarks by up to 9.6%

Researchers have demonstrated a new training technique that significantly improves the accuracy of graph neural networks (GNNs), AI systems used in applications from drug discovery to weather forecasting. GNNs are designed for tasks where the input data takes the form of a graph: a data structure in which data points (called nodes) are connected by lines (called edges). An edge indicates some kind of relationship between the nodes it joins; it can connect nodes that are similar (called homophily) or nodes that are dissimilar (called heterophily).

For example, in a graph of a neural system there would be edges between nodes representing two neurons that enhance each other, but there would also be edges between nodes that suppress each other.

Because graphs can represent everything from social networks to molecular structures, GNNs can capture complex relationships better than many other types of AI systems.
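The node-and-edge picture above maps directly onto how a GNN computes: each node repeatedly aggregates its neighbors' feature vectors along the edges. Below is a minimal sketch of one such message-passing step in NumPy; the graph and feature values are made up for illustration, and real GNNs use learned weight matrices rather than this fixed averaging.

```python
import numpy as np

# Toy graph: 4 nodes, directed edges as (source, destination) pairs.
# An edge may join similar nodes (homophily) or dissimilar ones
# (heterophily); the aggregation step is the same either way.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
features = np.array([[1.0, 0.0],
                     [0.9, 0.1],
                     [0.1, 0.9],   # nodes 2 and 3 are dissimilar to 0 and 1
                     [0.0, 1.0]])

def message_passing_step(x, edges):
    """Each node averages its neighbors' features, then mixes in its own."""
    agg = np.zeros_like(x)
    deg = np.zeros(len(x))
    for src, dst in edges:
        agg[dst] += x[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]   # mean over incoming neighbors
    return 0.5 * x + 0.5 * agg           # combine self and neighborhood

updated = message_passing_step(features, edges)
print(updated[0])   # node 0 pulled toward its neighbor's features
```

Stacking several such steps lets information propagate across multi-hop paths in the graph, which is how GNNs capture the longer-range relationships described above.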

In Active Solids, Connectivity Is as Important as Activity

A robotic metamaterial shows that the odd mechanics of active solids depend on how the active constituents connect across the system.

Active materials, composed of microscopic constituents that continuously inject motional energy into the system, can exhibit odd mechanical responses, such as stretching vertically when sheared horizontally. Such properties can be used to make materials that can spontaneously crawl or roll over difficult terrain [1]. One might naively think that these desirable odd responses could be increased by making the components more active. Jack Binysh of the University of Amsterdam and his colleagues now find that this doesn’t always work [2]. The researchers show that in active solids a collective response only emerges when system-spanning connective networks are formed among the individual constituents of the system. Without such networks, the effects of microscopic activity remain confined locally and the macroscopic response disappears.

An active solid is, fundamentally, an elastic lattice made up of self-driving constituents. Examples include robotic lattices composed of motorized units [1, 2], magnetic colloidal crystals [3], and chiral living embryos [4]. The active solids that Binysh and his colleagues examined are examples of nonreciprocal active solids, meaning that the interactions between elements are directional. Interactions may become directional when individual constituents process information about their neighbors. Such nonreciprocal interactions arise in a wide range of settings. In robotic metamaterials, local control loops impose directional responses on adjacent mechanical units [1]. And in living chiral collectives, hydrodynamic flows allow rotating embryos to exchange momentum with the surrounding media [4].
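The "odd" response described above has a simple numerical signature: for an ordinary (reciprocal) elastic material the stiffness matrix is symmetric and no net work can be extracted over a closed deformation cycle, whereas an asymmetric (nonreciprocal) stiffness matrix does net work per cycle. Here is a toy two-component illustration of that distinction; the matrices are invented for the sketch and are not the authors' model.

```python
import numpy as np

def cycle_work(K, n=20000):
    """Numerically integrate W = closed-loop integral of sigma . d_eps,
    with linear stress response sigma = K @ eps, around a unit strain loop."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    eps = np.stack([np.cos(t), np.sin(t)])      # closed loop in strain space
    sigma = K @ eps                             # stress at each loop point
    d_eps = np.diff(eps, axis=1)
    sigma_mid = 0.5 * (sigma[:, 1:] + sigma[:, :-1])
    return float(np.sum(sigma_mid * d_eps))     # trapezoid rule

K_sym = np.array([[2.0, 0.5], [0.5, 2.0]])  # reciprocal: symmetric stiffness
K_odd = np.array([[2.0, 0.5], [1.5, 2.0]])  # nonreciprocal: asymmetric

print(cycle_work(K_sym))  # ~ 0: no net work per cycle
print(cycle_work(K_odd))  # ~ pi * (1.5 - 0.5): activity does net work
```

The nonzero cycle work for the asymmetric matrix is what lets an active solid pump energy into macroscopic motion; the article's point is that this microscopic property only produces a macroscopic response when the active units form system-spanning networks.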

Retinal Vessel Dysfunction in Cerebral Autosomal Dominant Arteriopathy With Subcortical Infarcts and Leukoencephalopathy

An Ultra-Widefield Fluorescein Angiography Study.


Toward a policy for machine-learning tools in kernel development

The first topic of discussion at the 2025 Maintainers Summit has been in the air for a while: what role — if any — should machine-learning-based tools have in the kernel development process? While there has been a fair amount of controversy around these tools, and concerns remain, it seems that the kernel community, or at least its high-level maintainership, is comfortable with these tools becoming a significant part of the development process.

Sasha Levin began the discussion by pointing to a summary he had sent to the mailing lists a few days before. There is some consensus, he said, that human accountability for patches is critical, and that use of a large language model in the creation of a patch does not change that. Purely machine-generated patches, without human involvement, are not welcome. Maintainers must retain the authority to accept or reject machine-generated contributions as they see fit. And, he said, there is agreement that the use of tools should be disclosed in some manner.

But, he asked the group: is there agreement in general that these tools are, in the end, just more tools? Steve Rostedt said that LLM-generated code may bring legal concerns that other tools do not raise, but Greg Kroah-Hartman answered that the current Developer's Certificate of Origin (“Signed-off-by”) process should cover the legal side of things. Rostedt agreed that the submitter is ultimately on the hook for the code they contribute, but he wondered about the possibility of some court ruling that a given model violates copyright years after the kernel had accepted code it generated. That would create the need for a significant cleanup effort.

AI Decoder Could Cut Quantum Errors by Up to 17×, Study Finds

Don’t listen to TLC. When it comes to error correction, in fact, do go chasing waterfalls.

A new study shows that artificial intelligence can unlock a “waterfall” effect in error correction, sharply reducing error rates and processing time.

Researchers from Harvard University reported in the pre-print server arXiv that they developed a neural-network-based decoder that outperforms existing methods by wide margins, while revealing a previously hidden regime of error suppression that challenges long-standing assumptions about how quantum systems scale.
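The study concerns a learned, neural-network decoder, but the basic job of any quantum error-correction decoder can be illustrated with the simplest classical analogue: the three-bit repetition code, where parity checks (syndromes) locate an error without reading the protected data. The sketch below is purely illustrative; the Harvard work replaces this kind of lookup table with a trained neural network on far larger codes.

```python
# Toy decoder illustration (not the paper's method): a 3-bit repetition
# code protects one logical bit against any single bit flip. Two parity
# checks ("syndromes") locate the flipped bit without reading the data.

def syndrome(bits):
    """Parity checks: (b0 xor b1, b1 xor b2)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# The decoder maps each syndrome to the most likely single-bit error.
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    """Apply the correction the syndrome points to."""
    flip = DECODE[syndrome(bits)]
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1
    return out

print(correct([0, 1, 0]))  # flip on bit 1 detected -> [0, 0, 0]
```

For realistic quantum codes, the syndrome-to-error mapping is far too large and noise-dependent to tabulate, which is why learned decoders like the one in the study are attractive.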

Dr David Sinclair: Can Aging Be Reversed? After 8 Weeks, Cells Appeared 75% Younger In Tests!

Progress is accelerating but clarity isn’t always keeping up.

Check out our new sponsor, NADclinic at nadclinic.com. They are the one-stop-shop marketplace for longevity, and pioneers in NAD+ solutions.
From longevity and AI to the future of healthcare, innovation is moving fast but understanding is still catching up. The result is a growing tension between what’s being promised and what’s actually proven.

Today, David Ewing Duncan brings a grounded, big-picture perspective on these shifts. Drawing from his work at the intersection of science, technology, and human behavior, he explores why skepticism is rising, how hype can distort progress, and what it really means to live in an era of rapid innovation.

The conversation goes beyond longevity, touching on self-awareness, the limits of current science, the role of AI, and how we can think more critically about the future we’re building.

Are we asking better questions or just chasing better tools?

David Ewing Duncan is an award-winning science journalist, bestselling author, and speaker known for exploring the intersection of health, technology, and the future of human life.

What You’ll Learn

A nanoscale robotic cleaner can hunt, capture and remove bacteria

Tiny robots—around one-fiftieth the diameter of a human hair—open up fascinating possibilities: they enable the controlled manipulation of objects far too small for human hands. This brings us closer to a long-standing dream—direct interaction with the microscopic world.

Particularly relevant are biological objects in aqueous environments, such as single cells or bacteria. Handling such objects in a controlled and targeted way has remained a major challenge.

A team of researchers has now shown how such microscopic cleaners can be deployed and precisely controlled. The study is published in the journal Nature Communications. The nanorobots presented demonstrate that controlled manipulation of bacteria, including their collection and relocation, is already achievable.

Physics-Informed LSTM for Fatigue Life Prediction of Rubber Isolators under Thermo-Mechanical Coupling

Full article authored by Shen Liu and Fei Meng, University of Shanghai for Science and Technology, China.

Rubber supports are essential in automotive, heavy machinery, and aerospace engineering. They offer excellent hyperelasticity, viscoelastic dissipation, and noise reduction. However, their fatigue evolution under coupled thermo-mechanical loading is exceptionally complex. This study develops an LSTM-Physics-Informed Neural Network (PINN) framework that integrates prior physical knowledge transfer with Partial Differential Equation (PDE) constraints to address the challenge of predicting the fatigue life of rubber isolators under thermo-mechanical-damage coupling.


Abstract

Rubber supports are ubiquitous in modern vibration isolation systems. Their fatigue evolution under coupled thermo-mechanical loading is exceptionally complex. Traditional life prediction methods rely heavily on empirical formulas. These methods often lack accuracy and extrapolation capabilities under varying temperatures. To address this, we propose a novel LSTM-PINN architecture. This framework integrates physical constitutive relations and temperature effects into a neural network. We used transfer learning to extract baseline physical data across wide temperature ranges. Long Short-Term Memory (LSTM) layers capture sequential loading features. We embedded partial differential equations (PDEs) into the loss function. These PDEs are based on strain energy density (SED) and Arrhenius thermodynamics. This approach ensures strict adherence to physical laws. Results demonstrate that LSTM-PINN achieves high precision even with small datasets. It also exhibits superior out-of-distribution (OOD) generalization. This framework provides a new paradigm for evaluating the reliability of rubber components.
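The abstract's central idea, combining a data-fit objective with a physics-residual penalty in one loss, can be sketched schematically. The snippet below is a minimal illustration of that loss structure, assuming a simple Arrhenius-type damage-rate law; the law, the parameter values, and the finite-difference residual are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

# Assumed (illustrative) Arrhenius-type damage-evolution law:
#   dD/dN = A * exp(-Ea / (R * T)) * W_sed
# D: damage, N: load cycles, T: temperature (K), W_sed: strain energy density.

R = 8.314             # gas constant, J/(mol K)
A, Ea = 1.0e3, 3.0e4  # illustrative kinetic parameters (assumptions)

def physics_residual(D_pred, N, T, W_sed):
    """Finite-difference residual of the assumed damage-rate law."""
    dD_dN = np.gradient(D_pred, N)
    return dD_dN - A * np.exp(-Ea / (R * T)) * W_sed

def pinn_loss(D_pred, D_obs, N, T, W_sed, lam=1.0):
    """Data-fit term plus weighted physics-residual penalty (PINN-style)."""
    data_loss = np.mean((D_pred - D_obs) ** 2)
    pde_loss = np.mean(physics_residual(D_pred, N, T, W_sed) ** 2)
    return data_loss + lam * pde_loss

# A prediction that follows the assumed law incurs (near-)zero loss:
N = np.linspace(0.0, 100.0, 101)
rate = A * np.exp(-Ea / (R * 350.0)) * 1.0   # constant T = 350 K, W_sed = 1
D_exact = rate * N
print(pinn_loss(D_exact, D_exact, N, 350.0, 1.0))   # ~ 0
```

Penalizing the residual forces the network's predictions to respect the physical law even in regions with little training data, which is what gives PINNs their out-of-distribution advantage over purely empirical fits.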

Rubber Isolator, Fatigue Life, PINN, LSTM, Thermo–Mechanical Coupling

Artificial intelligence in cardiovascular imaging: risks, mitigations and the path to safe implementation

Artificial intelligence (AI) is rapidly transforming cardiovascular imaging by automating tasks such as image segmentation, feature extraction, and risk prediction — leading to significant improvements in diagnostic precision and efficiency. However, the integration of AI into clinical workflows comes with critical risks that must be addressed to ensure safe and reliable patient care.

This review explores the technical, clinical, and ethical challenges of AI in cardiovascular imaging, particularly highlighting the risks of model errors, data drift and inappropriate usage. We also examine concerns about explainability, the potential for deskilling of healthcare professionals, generalisability across diverse populations, and accountability in AI implementation.

We present real-world examples of where these risks have been realised, along with attempts at mitigations, including the adoption of explainable AI techniques, rigorous validation frameworks to ensure fairness and broad applicability, continuous performance monitoring, and transparency at every stage of model development and deployment.
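As one concrete form of the "continuous performance monitoring" mentioned above (an illustrative choice, not a method from the review), a deployment might track distribution shift of an input feature with the Population Stability Index, a common drift metric in model monitoring.

```python
import numpy as np

# Population Stability Index (PSI): compares a production feature
# distribution against the training reference. A frequently cited rule
# of thumb: PSI < 0.1 stable, PSI > 0.25 significant drift.

def psi(reference, production, bins=10):
    """PSI between two 1-D samples, using reference quantile bins."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # cover out-of-range values
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    prod_frac = np.histogram(production, edges)[0] / len(production)
    ref_frac = np.clip(ref_frac, 1e-6, None)     # avoid log(0)
    prod_frac = np.clip(prod_frac, 1e-6, None)
    return float(np.sum((prod_frac - ref_frac) * np.log(prod_frac / ref_frac)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)     # e.g. a training-set imaging feature
drifted = rng.normal(0.5, 1.0, 5000)   # a scanner change shifts the mean

print(psi(train, train[:2500]))   # low: same population
print(psi(train, drifted))        # elevated: flag the model for review
```

Tracking such a metric per feature gives an early, model-agnostic warning of the data drift the review identifies as a key risk, before diagnostic accuracy visibly degrades.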
