As demand surges for batteries that store more energy and last longer—powering electric vehicles, drones, and energy storage systems—a team of South Korean researchers has introduced an approach to overcome a major limitation of conventional lithium-ion batteries (LIBs): unstable interfaces between electrodes and electrolytes.
Most of today’s consumer electronics—such as smartphones and laptops—rely on lithium-ion batteries with graphite anodes. While graphite offers long-term stability, it falls short in energy capacity.
Silicon, by contrast, can store nearly 10 times more lithium ions, making it a promising next-generation anode material. However, silicon’s main drawback is that it expands and contracts dramatically during charge and discharge, swelling to as much as three times its original size.
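For context, the “nearly 10 times” figure follows from the two anode materials’ theoretical specific capacities. The values below are standard figures from the battery literature, not numbers given in the article:

\[
\frac{C_{\mathrm{Si}}}{C_{\mathrm{graphite}}} \approx \frac{3579\ \mathrm{mAh\,g^{-1}}}{372\ \mathrm{mAh\,g^{-1}}} \approx 9.6
\]

Packing that much lithium into the silicon lattice is also what drives the large volume change noted above.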
These AIs won’t just respond to prompts – although they will do that too – they will work in the background, acting on your behalf and pursuing your goals with independence and competence.
Your main interface to the world will, as it is today, be a device: a smartphone or whatever replaces it. It will host your personal AI agent, not a stripped-down assistant with limited knowledge or capabilities but a sophisticated model more capable than GPT-4 is today. It will run locally and privately, so all your core interactions remain yours and yours alone. It will be a digital Chief of Staff: an extension of your will, with initiative of its own.
In Apple’s visionary Knowledge Navigator concept video from 1987, we saw an early, eerily prescient depiction of an AI-powered assistant: a personable digital agent helping a university professor manage his day. It converses naturally, juggles scheduling conflicts, surfaces relevant academic research, and even initiates a video call with a colleague — all through a touchscreen interface with a calm, competent virtual presence.
Apple is making progress on a standard for brain implants that can help people with disabilities control devices such as iPhones with their thoughts. As reported in The Wall Street Journal, Apple plans to release that standard to other developers later this year.
The company has partnered with Synchron, which has been working with other companies, including Amazon, on ways to make devices more accessible. Synchron makes an implant called the Stentrode, which is placed in a vein on the brain’s motor cortex. Once implanted, the Stentrode can read brain signals and translate them into actions on devices including iPhones, iPads and Apple’s Vision Pro VR headset.
As we saw last year, a patient with ALS testing the Synchron technology was able to navigate menus in the Vision Pro device and use it to experience the Swiss Alps in VR. The technology could become more widely available to people with paralysis. The company has a community portal for those interested in learning about future tests.
A new AI model from Tokyo called the Continuous Thought Machine mimics how the human brain works by thinking in real-time “ticks” instead of layers. Built by Sakana, this brain-inspired AI allows each neuron to decide when it’s done thinking, showing signs of what experts call proximate consciousness. With no fixed depth and a flexible thinking process, it marks a major shift away from traditional Transformer models in artificial intelligence.
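The article gives no implementation details, but the core idea it describes — per-neuron "ticks" with no fixed depth, where each neuron stops once it has settled — can be sketched in a few lines. The halting rule, sizes, and update below are illustrative assumptions, not Sakana's actual design:

```python
import numpy as np

# Toy illustration (not Sakana's code): each "neuron" keeps its own state and
# decides, tick by tick, whether it has finished thinking. The loop runs for a
# variable number of ticks instead of a fixed stack of layers.
rng = np.random.default_rng(0)
n_neurons = 8
state = rng.normal(size=n_neurons)                           # per-neuron internal state
weights = rng.normal(size=(n_neurons, n_neurons)) / np.sqrt(n_neurons)
done = np.zeros(n_neurons, dtype=bool)                       # per-neuron "finished" flag
halt_threshold = 0.05                                        # assumed halting rule

for tick in range(1, 101):                                   # upper bound, not a fixed depth
    update = np.tanh(weights @ state)
    newly_settled = ~done & (np.abs(update - state) < halt_threshold)
    state = np.where(done, state, update)                    # settled neurons stop updating
    done |= newly_settled
    if done.all():
        break

print(f"{int(done.sum())}/{n_neurons} neurons settled after {tick} ticks")
```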
🎥 What You’ll See:
How Sakana’s AI mimics human neurons and rewrites how machines process thought
Why Abacus’ Deep Agent now acts like a fully autonomous digital worker
How Alibaba trains top AI models without using live search engines
Why Google let Honor debut its video AI before its own users
What Tencent’s face-swapping tech means for the future of video generation
How iPhones will soon think ahead to save power
Why Saudi Arabia’s GPU superpower plan could shake the entire AI industry

📊 Why It Matters: AI is breaking out of the lab—thinking like brains, automating your work, and reshaping global power. From self-regulating neurons to trillion-dollar GPU wars, this is where the future starts.

Sources: Honor phones run Google’s new Veo 2 model before Pixel even gets access (https://shorturl.at/Ki0YP). Tencent drops a deepfake engine with shocking face accuracy (https://github.com/Tencent/HunyuanCustom). Apple uses on-device AI in iOS 19 to predict and extend battery life (https://www.theverge.com/news/665249/…). Saudi Arabia launches a $940B AI empire with support from Musk and Altman (https://techcrunch.com/2025/05/12/sau…).
Just 10 to 15 minutes of mindfulness practice a day led to reduced stress and anxiety for autistic adults who participated in a study led by scientists at MIT’s McGovern Institute for Brain Research. Participants in the study used a free smartphone app to guide their practice, giving them the flexibility to practice when and where they chose.
Mindfulness is a state in which the mind is focused only on the present moment. It is a way of thinking that can be cultivated with practice, often through meditation or breathing exercises—and evidence is accumulating that practicing mindfulness has positive effects on mental health. The open-access study, reported April 8 in the journal Mindfulness, adds to that evidence, demonstrating clear benefits for autistic adults.
“Everything you want from this on behalf of somebody you care about happened: reduced reports of anxiety, reduced reports of stress, reduced reports of negative emotions, and increased reports of positive emotions,” says McGovern investigator and MIT Professor John Gabrieli, who led the research with Liron Rozenkrantz, an investigator at the Azrieli Faculty of Medicine at Bar-Ilan University in Israel and a research affiliate in Gabrieli’s lab.
AI is a computing tool. It can process and interrogate huge amounts of data, expand human creativity, generate new insights faster and help guide important decisions. It’s trained on human expertise, and in conservation that’s informed by interactions with local communities or governments—people whose needs must be taken into account in the solutions. How do we ensure this happens?
Last year, Reynolds joined 26 other conservation scientists and AI experts in a “Horizon Scan”—an approach pioneered by Professor Bill Sutherland in the Department of Zoology—to think about the ways AI could revolutionize global biodiversity conservation. The international panel agreed on the top 21 ideas, chosen from a longlist of 104, which are published in the journal Trends in Ecology and Evolution.
Some of the ideas extrapolate from AI tools many of us are familiar with, like phone apps that identify plants from photos, or birds from sound recordings. Being able to identify all the species in an ecosystem in real time, over long timescales, would enable a huge advance in understanding ecosystems and species distributions.
In a new Nature Communications study, researchers have developed an in-memory ferroelectric differentiator capable of performing calculations directly in the memory without requiring a separate processor.
The proposed differentiator promises energy efficiency, especially for edge devices like smartphones, autonomous vehicles, and security cameras.
Traditional approaches to tasks like image processing and motion detection involve multi-step, energy-intensive pipelines: data is first recorded, then transmitted to a memory unit, and finally passed to a microcontroller unit that performs the differential operations.
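The study computes such differences inside the memory array itself; for contrast, here is a minimal sketch of the conventional, processor-side version of the same differential operation (frame differencing for motion detection). The array sizes and threshold are illustrative, not taken from the paper:

```python
import numpy as np

def detect_motion(prev_frame: np.ndarray, curr_frame: np.ndarray,
                  threshold: float = 0.1) -> np.ndarray:
    """Return a boolean mask of pixels whose intensity changed noticeably."""
    # The differential operation: subtract consecutive frames pixel by pixel.
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return diff > threshold * 255  # threshold chosen for illustration only

# Example with two synthetic 64x64 grayscale frames.
rng = np.random.default_rng(1)
frame_a = rng.integers(0, 200, size=(64, 64)).astype(np.float32)
frame_b = frame_a.copy()
frame_b[20:30, 20:30] += 40.0  # simulate a small moving object
print("changed pixels:", int(detect_motion(frame_a, frame_b).sum()))
```

In the conventional pipeline this subtraction happens only after the frames have been shuttled from sensor to memory to processor; the in-memory differentiator removes that shuttling step.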
Tuochao Chen, a University of Washington doctoral student, recently toured a museum in Mexico. Chen doesn’t speak Spanish, so he ran a translation app on his phone and pointed the microphone at the tour guide. But even in a museum’s relative quiet, the surrounding noise was too much. The resulting text was useless.
Various technologies have emerged lately promising fluent translation, but none of them solved Chen’s problem in noisy public spaces. Meta’s new glasses, for instance, work only with an isolated speaker; they play an automated voice translation after the speaker finishes.
Now, Chen and a team of UW researchers have designed a headphone system that translates several speakers at once, while preserving the direction and qualities of people’s voices. The team built the system, called Spatial Speech Translation, with off-the-shelf noise-canceling headphones fitted with microphones. The team’s algorithms separate out the different speakers in a space and follow them as they move, translate their speech and play it back with a 2–4 second delay.
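The article describes the pipeline’s stages but not its code, so the sketch below simply names those stages (speaker separation, direction tracking, translation, spatial playback) as placeholders. Every function and type here is hypothetical, not part of the UW system:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the stages the article describes: separate speakers,
# keep their direction, translate each stream, and play the translation back
# from the same direction. All names and signatures are placeholders.

@dataclass
class SpeakerStream:
    audio: list           # isolated samples for one speaker (dummy type here)
    direction_deg: float  # estimated direction of arrival
    lang: str             # detected source language

def separate_speakers(mixture: list) -> List[SpeakerStream]:
    """Placeholder source separation: pretend the mix holds one Spanish speaker."""
    return [SpeakerStream(audio=mixture, direction_deg=30.0, lang="es")]

def translate_speech(stream: SpeakerStream, target_lang: str) -> list:
    """Placeholder speech-to-speech translation; the real system trails live speech by 2-4 s."""
    return stream.audio  # identity stand-in

def render_spatial(audio: list, direction_deg: float) -> list:
    """Placeholder spatial rendering so the voice still seems to come from its speaker."""
    return audio

def spatial_speech_translation(mixture: list, target_lang: str = "en") -> List[list]:
    outputs = []
    for stream in separate_speakers(mixture):
        translated = translate_speech(stream, target_lang)
        outputs.append(render_spatial(translated, stream.direction_deg))
    return outputs

if __name__ == "__main__":
    print(len(spatial_speech_translation([0.0] * 16000)), "translated stream(s)")
```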