
Ava Community Energy just rolled out a new program in California that pays EV and plug-in hybrid drivers for charging their cars when electricity on the grid is cleaner and cheaper.

The new Ava SmartHome Charging program, launched in partnership with home energy analytics platform Optiwatt, offers up to $100 in incentives in the first year. And because the program helps shift home charging to lower-cost hours, Ava says drivers could save around $140 a year on their energy bills.

EV and PHEV owners who are Ava customers can download the Optiwatt app for free, connect their vehicle, and let the app handle the rest. The app uses an algorithm to automatically schedule charging when demand is low and more renewable energy is available, typically overnight or during off-peak hours.
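To make the idea concrete, here is a minimal sketch of the kind of scheduling logic such an app might use. The off-peak windows, function names, and greedy slot selection are all illustrative assumptions, not Optiwatt's actual algorithm or Ava's actual tariff.

```python
from datetime import time

# Assumed off-peak windows for illustration only: overnight hours when
# grid demand is low and the renewable share is typically higher.
OFF_PEAK_WINDOWS = [(time(0, 0), time(6, 0)), (time(22, 0), time(23, 59))]

def is_off_peak(t: time) -> bool:
    """Return True if a clock time falls inside an assumed off-peak window."""
    return any(start <= t <= end for start, end in OFF_PEAK_WINDOWS)

def plan_charging(hours_needed: int, hourly_slots: list[time]) -> list[time]:
    """Greedily pick off-peak hourly slots until the charging need is met."""
    plan = []
    for slot in hourly_slots:
        if len(plan) >= hours_needed:
            break
        if is_off_peak(slot):
            plan.append(slot)
    return plan

# Example: schedule 4 hours of charging across a day of hourly slots.
slots = [time(h, 0) for h in range(24)]
print(plan_charging(4, slots))  # four overnight slots, 0:00 through 3:00
```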

A UNSW Sydney mathematician has developed an algebraic solution to equations long thought to be unsolvable. The groundbreaking discovery may finally offer an answer to one of algebra's toughest problems: how to solve high-degree polynomial equations.

A new study proposes that quantum information, encoded in entanglement entropy, directly shapes the fabric of spacetime, offering a fresh path toward unifying gravity and quantum mechanics.

Published in Annals of Physics, the paper presents a reformulation of Einstein’s field equations, arguing that gravity is not just a response to mass and energy, but also to the information structure of quantum fields. This shift, if validated, would mark a fundamental transformation in how physicists understand both gravity and quantum computing.

The study, authored by Florian Neukart of the Leiden Institute of Advanced Computer Science at Leiden University, who is also Chief Product Officer of Terra Quantum, introduces the concept of an “informational stress-energy tensor” derived from quantum entanglement entropy.
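In schematic form (my notation and a guess at the structure, not the paper's exact equations), the reformulation amounts to adding an informational source term alongside ordinary stress-energy in Einstein's field equations:

```latex
% Schematic only: G_{\mu\nu} is the Einstein tensor, \Lambda the
% cosmological constant, T_{\mu\nu} the ordinary stress-energy tensor,
% and T^{(\mathrm{info})}_{\mu\nu} a stand-in label for the proposed
% informational term derived from entanglement entropy.
G_{\mu\nu} + \Lambda g_{\mu\nu}
  = \frac{8\pi G}{c^4}\left( T_{\mu\nu} + T^{(\mathrm{info})}_{\mu\nu} \right)
```

On this reading, the informational term plays the same formal role as ordinary matter: regions of quantum fields with more entanglement entropy would curve spacetime accordingly.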

In the domain of artificial intelligence, human ingenuity has birthed entities capable of feats once relegated to science fiction. Yet within this triumph of creation resides a profound paradox: we have designed systems whose inner workings often elude our understanding. Like medieval alchemists who could transform substances without grasping the underlying chemistry, we stand before our algorithmic progeny with a similar mixture of wonder and bewilderment. This is the essence of the “black box” problem in AI — a philosophical and technical conundrum that cuts to the heart of our relationship with the machines we’ve created.

The term “black box” originates from systems theory, where it describes a device or system analyzed solely in terms of its inputs and outputs, with no knowledge of its internal workings. When applied to artificial intelligence, particularly to modern deep learning systems, the metaphor becomes startlingly apt. We feed these systems data, they produce results, but the transformative processes occurring in between remain largely opaque. As Pedro Domingos (2015) eloquently states in his seminal work The Master Algorithm: “Machine learning is like farming. The machine learning expert is like a farmer who plants the seeds (the algorithm and the data), harvests the crop (the classifier), and sells it to consumers, without necessarily understanding the biological mechanisms of growth” (p. 78).

This agricultural metaphor points to a radical reconceptualization in how we create computational systems. Traditionally, software engineering has followed a constructivist approach — architects design systems by explicitly coding rules and behaviors. Yet modern AI systems, particularly neural networks, operate differently. Rather than being built piece by piece with predetermined functions, they develop their capabilities through exposure to data and feedback mechanisms. This observation led AI researcher Andrej Karpathy (2017) to assert that “neural networks are not ‘programmed’ in the traditional sense, but grown, trained, and evolved.”
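To make Karpathy's point concrete, here is a minimal, self-contained sketch: nowhere below is the AND rule written down, yet a perceptron acquires it purely from labeled examples and an error-driven update. This is the sense in which such systems are “grown” rather than programmed.

```python
# No AND rule is coded anywhere below; the weights learn it from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table

w0, w1, b = 0.0, 0.0, 0.0  # start with no knowledge at all

for _ in range(20):  # a few passes of the classic perceptron update
    for (x0, x1), target in data:
        pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
        err = target - pred          # "feedback mechanism"
        w0 += 0.1 * err * x0
        w1 += 0.1 * err * x1
        b += 0.1 * err

print([(x, 1 if w0 * x[0] + w1 * x[1] + b > 0 else 0) for x, _ in data])
# The learned weights now implement AND, though we never wrote that rule.
```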

Tuochao Chen, a University of Washington doctoral student, recently toured a museum in Mexico. Chen doesn’t speak Spanish, so he ran a translation app on his phone and pointed the microphone at the tour guide. But even in a museum’s relative quiet, the surrounding noise was too much. The resulting text was useless.

Various technologies promising fluent translation have emerged lately, but none of them solved Chen's problem of translating amid background noise and multiple speakers. Meta's new glasses, for instance, function only with an isolated speaker; they play an automated voice translation after the speaker finishes.

Now, Chen and a team of UW researchers have designed a headphone system that translates several speakers at once, while preserving the direction and qualities of people’s voices. The team built the system, called Spatial Speech Translation, with off-the-shelf noise-canceling headphones fitted with microphones. The team’s algorithms separate out the different speakers in a space and follow them as they move, translate their speech and play it back with a 2–4 second delay.
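As a rough sketch of that pipeline, the skeleton below mirrors the stages the article names: separation, direction tracking, translation, and spatial playback. Every function here is a hypothetical stub invented for illustration; the UW team's actual models and interfaces are described in their paper, not here.

```python
# Highly simplified skeleton of the described pipeline. All stages are
# illustrative stubs, not the UW system's real components.

def separate_speakers(mixed_audio):
    """Split a binaural mixture into per-speaker streams (stub)."""
    return [("speaker_1", mixed_audio), ("speaker_2", mixed_audio)]

def estimate_direction(stream):
    """Estimate a speaker's direction of arrival in degrees (stub)."""
    return 30.0

def translate(stream, target_lang="en"):
    """Translate one speaker's speech to the target language (stub)."""
    return f"[{target_lang} translation of {stream[0]}]"

def render_spatial(text, direction_deg):
    """Re-synthesize translated speech, preserving apparent direction (stub)."""
    return f"{text} rendered at {direction_deg:.0f} degrees"

def spatial_speech_translation(mixed_audio):
    outputs = []
    for stream in separate_speakers(mixed_audio):
        direction = estimate_direction(stream)   # follow speakers as they move
        text = translate(stream)
        outputs.append(render_spatial(text, direction))
    return outputs  # played back with the article's 2-4 second latency

print(spatial_speech_translation(mixed_audio=b"raw headphone capture"))
```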

Computer simulations help materials scientists and biochemists study the motion of macromolecules, advancing the development of new drugs and sustainable materials. However, these simulations pose a challenge for even the most powerful supercomputers.

A University of Oregon graduate student has developed a new mathematical equation that significantly improves the accuracy of the simplified computer models used to study the motion and behavior of large molecules such as proteins, and synthetic materials such as plastics.
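The article does not reproduce the new equation, but for context, simplified (coarse-grained) models of this kind are commonly built on the generalized Langevin equation, in which a memory kernel stands in for the fast degrees of freedom that were averaged away. A standard form, given here only as background and not as the paper's result:

```latex
% Generalized Langevin equation for a coarse-grained particle of mass m:
% K(t) is the memory kernel encoding eliminated degrees of freedom and
% F^{R}(t) is a correlated random force.
m \, \dot{v}(t) = -\int_{0}^{t} K(t - s)\, v(s)\, ds + F^{R}(t)
```

Improving how such kernels are constructed is what typically determines how faithfully the cheap model tracks the full simulation.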

The breakthrough, published last month in Physical Review Letters, enhances researchers’ ability to investigate the motion of large molecules in complex biological processes, such as DNA replication. It could aid in understanding diseases linked to errors in such replication, potentially leading to new diagnostic and therapeutic strategies.

Eyes may be the window to the soul, but a person’s biological age could be reflected in their facial characteristics. Investigators from Mass General Brigham developed a deep learning algorithm called “FaceAge” that uses a photo of a person’s face to predict biological age and survival outcomes for patients with cancer.
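For readers curious what such a model typically looks like, below is a generic sketch of the pattern only: a convolutional backbone with a single regression output for age. This is not the published FaceAge architecture; the backbone choice, input size, and training loss are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic face-to-age regression sketch (NOT the Mass General Brigham model).
backbone = models.resnet18(weights=None)             # any image backbone works
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # one output: age in years

def predict_face_age(face_batch: torch.Tensor) -> torch.Tensor:
    """Map a batch of face crops (N, 3, 224, 224) to predicted ages."""
    backbone.eval()
    with torch.no_grad():
        return backbone(face_batch).squeeze(1)

# Training would minimize e.g. nn.MSELoss() against chronological ages;
# the gap between predicted and chronological age is then studied as a
# marker of biological aging.
ages = predict_face_age(torch.randn(4, 3, 224, 224))
print(ages.shape)  # torch.Size([4])
```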

They found that patients with cancer, on average, had a higher FaceAge than those without and appeared about five years older than their chronological age.

Older FaceAge predictions were associated with worse overall survival across multiple cancer types. They also found that FaceAge outperformed clinicians in predicting short-term life expectancies of patients receiving palliative radiotherapy.

What happens when AI starts improving itself without human input? Self-improving AI agents are evolving faster than anyone predicted—rewriting their own code, learning from mistakes, and inching closer to surpassing giants like OpenAI. This isn’t science fiction; it’s the AI singularity’s opening act, and the stakes couldn’t be higher.

How do self-improving agents work? Unlike static models such as GPT-4, these systems use recursive self-improvement—analyzing their flaws, generating smarter algorithms, and iterating endlessly. Projects like AutoGPT and BabyAGI already demonstrate eerie autonomy, from debugging code to launching micro-businesses. We’ll dissect their architecture and compare them to OpenAI’s human-dependent models. Spoiler: The gap is narrowing fast.
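Stripped of the hype, the recursive loop itself is easy to illustrate. The toy below is a plain hill-climber, not AutoGPT or BabyAGI: it "evaluates" itself, proposes a modification, and keeps the change only if the score improves. That is the skeleton of the analyze, modify, iterate cycle described above.

```python
import random

def fitness(param: float) -> float:
    """A stand-in 'self-evaluation': higher is better, peaking at param = 3."""
    return -(param - 3.0) ** 2

# Start from an arbitrary state and iterate: propose a self-modification,
# keep it only if the self-evaluation improves. Real agents orchestrate
# LLM calls in this loop; here it is a single numeric parameter.
param, score = 0.0, fitness(0.0)
for step in range(100):
    candidate = param + random.uniform(-0.5, 0.5)  # proposed modification
    if fitness(candidate) > score:                 # keep only improvements
        param, score = candidate, fitness(candidate)

print(f"after 100 iterations: param={param:.2f}, score={score:.3f}")
```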

Why is OpenAI sweating? While OpenAI focuses on safety and scalability, self-improving agents prioritize raw, exponential growth. Imagine an AI that optimizes itself 24/7, mastering quantum computing over a weekend or cracking protein folding in hours. But there’s a dark side: no “off switch,” biased self-modifications, and the risk of uncontrolled superintelligence.

Who will dominate the AI race? We’ll explore leaked research, ethical debates, and the critical question: Can OpenAI’s cautious approach outpace agents that learn to outthink their creators? Like, subscribe, and hit the bell—the future of AI is rewriting itself.

Can self-improving AI surpass OpenAI? What are autonomous AI agents? How dangerous is recursive AI? Will AI become uncontrollable? Can we stop self-improving AI? This video exposes the truth. Watch now—before the machines outpace us.


It’s easy to take joint mobility for granted. Without thinking, it’s simple enough to turn the pages of a book or bend to stretch out a sore muscle. Designers don’t have the same luxury. When building a joint, be it for a robot or a wrist brace, designers want customizability across all degrees of freedom, but existing designs are often not versatile enough to adapt to different use contexts.

Researchers at Carnegie Mellon University’s College of Engineering have developed an algorithm to design metastructures that are reconfigurable across six degrees of freedom and allow for stiffness tunability. The algorithm can interpret the kinematic motions that are needed for multiple configurations of a device and assist designers in creating such reconfigurability. This advancement gives designers more control over the functionality of joints for various applications.
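As a purely hypothetical illustration of what "six degrees of freedom with tunable stiffness" means as a design specification (this is not the CMU team's representation), a designer might describe a target joint configuration like so:

```python
from dataclasses import dataclass

# Hypothetical specification for illustration: a rigid body has three
# translational and three rotational degrees of freedom, and each can be
# allowed or locked, with its own stiffness when engaged.

@dataclass
class DOFSpec:
    free: bool        # is motion allowed along/about this axis?
    stiffness: float  # resistance when engaged (N/m or N*m/rad)

@dataclass
class JointConfig:
    tx: DOFSpec; ty: DOFSpec; tz: DOFSpec  # translations along x, y, z
    rx: DOFSpec; ry: DOFSpec; rz: DOFSpec  # rotations about x, y, z

# Example: a wrist-brace-like mode permitting only stiff flexion/extension.
wrist_mode = JointConfig(
    tx=DOFSpec(False, 0.0), ty=DOFSpec(False, 0.0), tz=DOFSpec(False, 0.0),
    rx=DOFSpec(True, 12.0), ry=DOFSpec(False, 0.0), rz=DOFSpec(False, 0.0),
)
print([name for name, d in vars(wrist_mode).items() if d.free])  # ['rx']
```

The design problem the algorithm addresses is then finding one physical metastructure that can realize several such configurations, rather than one structure per configuration.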

The team demonstrated the structure’s versatile capabilities via multiple wearable devices tailored for unique movement functions, body areas, and uses.