Biomimetic multimodal tactile sensing enables human-like robotic perception

Robots That Feel: A New Multimodal Touch System Closes the Gap with Human Perception.

In a major advance for robotic sensing, researchers have engineered a biomimetic tactile system that brings robots closer than ever to human-like touch. Unlike traditional tactile sensors that detect only force or pressure, this new platform integrates multiple sensing modalities into a single ultra-thin skin and combines it with large-scale AI for data interpretation.

At the heart of the system is SuperTac, a 1-millimeter-thick multimodal tactile layer inspired by the multispectral structure of pigeon vision. SuperTac compresses several physical sensing modalities — including multispectral optical imaging (from ultraviolet to mid-infrared), triboelectric contact sensing, and inertial measurements — into a compact, flexible skin. This enables simultaneous detection of force, contact position, texture, material, temperature, proximity and vibration with micrometer-level spatial precision. The sensor achieves better than 94% accuracy in classifying complex tactile features such as texture, material type, and slip dynamics.

However, the hardware alone isn’t enough: rich, multimodal tactile data need interpretation. To address this, the team developed DOVE, an 8.5-billion-parameter tactile language model that functions as a computational interpreter of touch. By learning patterns in the high-dimensional sensor outputs, DOVE provides semantic understanding of tactile interactions — a form of “touch reasoning” that goes beyond raw signal acquisition.
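
Neither SuperTac's signal-processing pipeline nor DOVE's architecture is spelled out here, so the sketch below is only a minimal illustration of the general pattern the article describes: fuse readings from several modalities into one feature vector, classify it, and hand a compact semantic summary to a language-model interpreter. All type names, functions, and data layouts are hypothetical.

```typescript
// Illustrative only: these structures are assumptions, not the published design.

// One synchronized reading from several sensing channels of the kind described.
interface TactileFrame {
  opticalSpectrum: number[]; // multispectral intensities, UV through mid-IR
  triboelectric: number[];   // contact-electrification signal samples
  inertial: number[];        // accelerometer / vibration samples
}

// Fuse the channels into a single feature vector (plain concatenation here).
function fuseFeatures(frame: TactileFrame): number[] {
  return [...frame.opticalSpectrum, ...frame.triboelectric, ...frame.inertial];
}

// Nearest-centroid stand-in for the texture/material classifier.
function classify(features: number[], centroids: Map<string, number[]>): string {
  let bestLabel = 'unknown';
  let bestDist = Infinity;
  for (const [label, centroid] of centroids) {
    const dist = Math.hypot(...features.map((v, i) => v - (centroid[i] ?? 0)));
    if (dist < bestDist) {
      bestDist = dist;
      bestLabel = label;
    }
  }
  return bestLabel;
}

// Compress a frame into a short textual description that a tactile language
// model (DOVE, in the article) could take as input for higher-level reasoning.
function describe(frame: TactileFrame, centroids: Map<string, number[]>): string {
  const label = classify(fuseFeatures(frame), centroids);
  const peakVibration = Math.max(...frame.inertial.map(Math.abs));
  return `contact with a ${label}-like surface, peak vibration ${peakVibration.toFixed(2)}`;
}
```

In the real system the classifier and the language model are learned at far larger scale; the point here is only the shape of the hardware-to-semantics pipeline.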

From a neurotech-inspired perspective, this work mirrors principles of biological somatosensation: multiple receptor types working in parallel, dense spatial encoding, and higher-order processing for perceptual meaning. Integrating rich physical sensing with model-based interpretation is akin to how the somatosensory cortex integrates mechanoreceptor inputs into coherent percepts of texture, shape and motion. Such hardware-software co-design — where advanced materials, optics, electronics and AI converge — offers a pathway toward embodied intelligence in machines that feel and interpret touch much like biological organisms do.


Analog hardware may solve Internet of Things’ speed bumps and bottlenecks

The ubiquity of smart devices—not just phones and watches, but lights, refrigerators, doorbells and more, all constantly recording and transmitting data—is creating massive volumes of digital information that drain energy and slow data transmission speeds. With the rising use of artificial intelligence in industries ranging from health care and finance to transportation and manufacturing, addressing the issue is becoming more pressing.

A research team led by the University of Massachusetts Amherst aims to address the problem with new technology that uses old-school analog computing: an electrical component known as a memristor.

“Certainly, our society is more and more connected, and the number of those devices is increasing exponentially,” says Qiangfei Xia, the Dev and Linda Gupta professor in the Riccio College of Engineering at UMass Amherst. “If everyone is collecting and processing data the old way, the amount of data is going to be exploding. We cannot handle that anymore.”

To explain or not? Online dating experiment shows need for AI transparency depends on user expectation

Artificial intelligence (AI) is said to be a “black box,” with its logic obscured from human understanding—but how much does the average user actually care to know how AI works?

It depends on the extent to which a system meets users’ expectations, according to a new study by a team that includes Penn State researchers. Using a fabricated algorithm-driven dating website, the team found that whether the system met, exceeded or fell short of user expectations directly corresponded to how much the user trusted the AI and wanted to know about how it worked.

The findings are published in the journal Computers in Human Behavior.

Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs

Security vulnerabilities uncovered in the popular open-source artificial intelligence (AI) framework Chainlit could allow attackers to steal sensitive data and potentially move laterally within a susceptible organization.

Zafran Security said the high-severity flaws, collectively dubbed ChainLeak, could be abused to leak cloud environment API keys and steal sensitive files, or perform server-side request forgery (SSRF) attacks against servers hosting AI applications.

Chainlit is a framework for creating conversational chatbots. According to statistics shared by the Python Software Foundation, the package has been downloaded more than 220,000 times in the past week and has attracted 7.3 million downloads to date.

New Android malware uses AI to click on hidden browser ads

A new family of Android click-fraud trojans leverages TensorFlow machine learning models to automatically detect and interact with specific advertisement elements.

Instead of the predefined JavaScript click routines and script-driven DOM manipulation that classic click-fraud trojans rely on, the mechanism uses machine-learning-based visual analysis to detect and interact with ad elements on screen.

The threat actor is using TensorFlow.js, an open-source library developed by Google for training and deploying machine learning models in JavaScript. It permits running AI models in browsers or on servers using Node.js.
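
The article does not publish the trojan's code or model, so the snippet below is only a generic sketch of the TensorFlow.js usage pattern it refers to: loading a saved model in Node.js with @tensorflow/tfjs-node and scoring an image. The model path, input size, and function name are placeholder assumptions.

```typescript
// Generic TensorFlow.js inference in Node.js; the model path and input shape
// are placeholders, not anything recovered from the malware.
import * as tf from '@tensorflow/tfjs-node';
import { readFileSync } from 'fs';

async function scoreImage(modelDir: string, imagePath: string): Promise<Float32Array> {
  // Load a saved graph model from local disk.
  const model = await tf.loadGraphModel(`file://${modelDir}/model.json`);

  // Decode the image into a tensor and match the model's expected input.
  const decoded = tf.node.decodeImage(readFileSync(imagePath), 3) as tf.Tensor3D;
  const input = tf.image.resizeBilinear(decoded, [224, 224])
    .div(255)        // scale pixel values to [0, 1]
    .expandDims(0);  // add a batch dimension

  // Run inference and return one score per class the model was trained on.
  const scores = (model.predict(input) as tf.Tensor).dataSync() as Float32Array;
  tf.dispose([decoded, input]);
  return scores;
}
```

Scoring rendered pixels locally in this way, rather than querying the page's DOM, is what the researchers say distinguishes the new trojans from classic click-fraud malware.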

Chinese military says it is developing over 10 quantum warfare weapons

China’s military says it is using quantum technology to gather high-value military intelligence from public cyberspace.

The People’s Liberation Army said more than 10 experimental quantum cyber warfare tools were “under development”, many of which were being “tested in front-line missions”, according to the official newspaper Science and Technology Daily.

The project is being led by a supercomputing laboratory at the National University of Defence Technology, according to the report, with a focus on cloud computing, artificial intelligence and quantum technology.

‘Largest Infrastructure Buildout in Human History’: Jensen Huang on AI’s ‘Five-Layer Cake’ at Davos

From skilled trades to startups, AI’s rapid expansion is the beginning of the next massive computing platform shift, and for the world’s workforce, a move from tasks to purpose.

At a packed mainstage session at the annual meeting of the World Economic Forum in Davos, Switzerland, NVIDIA founder and CEO Jensen Huang described artificial intelligence as the foundation of what he called “the largest infrastructure buildout in human history,” driving job creation across the global economy.

Speaking with BlackRock CEO Larry Fink, Huang framed AI not as a single technology but as “a five-layer cake,” spanning energy, chips and computing infrastructure, cloud data centers, AI models and, ultimately, the application layer.

Meet the new biologists treating LLMs like aliens

How large is a large language model? Think about it this way.

In the center of San Francisco there’s a hill called Twin Peaks from which you can view nearly the entire city. Picture all of it—every block and intersection, every neighborhood and park, as far as you can see—covered in sheets of paper. Now picture that paper filled with numbers.

That’s one way to visualize a large language model, or at least a medium-size one: Printed out in 14-point type, a 200-billion-parameter model, such as GPT-4o (released by OpenAI in 2024), could fill 46 square miles of paper—roughly enough to cover San Francisco. The largest models would cover the city of Los Angeles.

We now coexist with machines so vast and so complicated that nobody quite understands what they are, how they work, or what they can really do—not even the people who help build them. “You can never really fully grasp it in a human brain,” says Dan Mossing, a research scientist at OpenAI.

That’s a problem. Even though nobody fully understands how it works—and thus exactly what its limitations might be—hundreds of millions of people now use this technology every day. If nobody knows how or why models spit out what they do, it’s hard to get a grip on their hallucinations or set up effective guardrails to keep them in check. It’s hard to know when (and when not) to trust them.

Whether you think the risks are existential—as many of the researchers driven to understand this technology do—or more mundane, such as the immediate danger that these models might push misinformation or seduce vulnerable people into harmful relationships, understanding how large language models work is more essential than ever.


Using AI to understand how emotions are formed

Emotions are a fundamental part of human psychology—a complex process that has long distinguished us from machines. Even advanced artificial intelligence (AI) lacks the capacity to feel. However, researchers are now exploring whether the formation of emotions can be computationally modeled, providing machines with a deeper, more human-like understanding of emotional states.

In this vein, Assistant Professor Chie Hieida from the Nara Institute of Science and Technology (NAIST), Japan, in collaboration with Assistant Professor Kazuki Miyazawa and then-master’s student Kazuki Tsurumaki from Osaka University, Japan, explores computational approaches to model the formation of emotions.

The team built a computational model that aims to explain how humans may form the concept of emotion. The study was published in the journal IEEE Transactions on Affective Computing.
