
New AI attack hides data-theft prompts in downscaled images

Researchers have developed a novel attack that steals user data by injecting malicious prompts into images that AI systems process before delivering them to a large language model.

The method relies on full-resolution images that carry instructions invisible to the human eye; the hidden instructions become apparent once the image's resolution is lowered by a resampling algorithm.

Developed by Trail of Bits researchers Kikimora Morozova and Suha Sabi Hussain, the attack builds upon a theory presented in a 2020 USENIX paper by researchers at TU Braunschweig, a German university, exploring the possibility of image-scaling attacks in machine learning.
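To make the mechanism concrete, here is a minimal sketch of the image-scaling principle in Python, not Trail of Bits' actual tooling: a small payload pattern (standing in for rendered prompt text) is written only to the pixels that a nearest-neighbour downscaler samples, so the full-resolution image looks almost untouched while the downscaled copy reproduces the payload exactly. The function names, image sizes, and the choice of nearest-neighbour resampling are all illustrative assumptions.

    import numpy as np

    # Illustrative sketch of an image-scaling attack (assumed setup, not the
    # published attack code): only the pixels sampled by the downscaler are
    # overwritten, so the payload dominates the small image while touching a
    # tiny fraction of the full-resolution one.

    def sample_grid(in_h, in_w, out_h, out_w):
        # Indices a simple nearest-neighbour downscaler would read.
        rows = np.arange(out_h) * in_h // out_h
        cols = np.arange(out_w) * in_w // out_w
        return rows, cols

    def nearest_downscale(img, out_h, out_w):
        rows, cols = sample_grid(img.shape[0], img.shape[1], out_h, out_w)
        return img[rows[:, None], cols[None, :]]

    def embed_payload(cover, payload):
        # Overwrite only the sampled pixels of the cover with payload values.
        rows, cols = sample_grid(cover.shape[0], cover.shape[1],
                                 payload.shape[0], payload.shape[1])
        stego = cover.copy()
        stego[rows[:, None], cols[None, :]] = payload
        return stego

    cover = np.full((512, 512), 255, dtype=np.uint8)   # what a human reviews
    payload = np.zeros((64, 64), dtype=np.uint8)       # stands in for dark prompt text

    stego = embed_payload(cover, payload)
    recovered = nearest_downscale(stego, 64, 64)

    print("pixels altered:", np.count_nonzero(stego != 255), "of", stego.size)  # 4096 of 262144
    print("payload recovered:", np.array_equal(recovered, payload))             # True

Real pipelines typically downscale with bilinear or bicubic interpolation rather than this toy nearest-neighbour sampler, so a practical attack has to shape pixel values around the specific resampling kernel in use and keep the cover image looking natural; the sketch only shows why content that is negligible at full resolution can dominate the downscaled copy the model actually sees.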

The pharma industry from Paul Janssen to today: why drugs got harder to develop and what we can do about it

Personal site for posts about my interests: the biotech industry, medicine, molecular biology, neuroscience, biorisk, science, consciousness, AI, innovation, decision making, philosophy, games, sci-fi, probability, and forecasting (among other things). I write to learn, mostly about biotech.

Is the AI boom finally starting to slow down?

“There’s a widening schism between the technologists who ‘feel the A.G.I.’ – a mantra for believers who see themselves on the cusp of the technology – and members of the general public who are skeptical about the hype and see A.I. as a nuisance in their daily lives,” they wrote.

It’s unclear if the industry will take heed of these warnings. Investors look to every quarterly earnings report for signs that each company’s billions in capex spending is somehow being justified, and executives are eager to give them hope. Boosting, boasting about, and hyping the supposed promise and inevitability of AI is a big part of keeping investor concerns at bay over the extra $10bn each company adds to its spending projections every quarter. Mark Zuckerberg, for instance, recently said that in the future people who don’t use AI glasses will be at a cognitive disadvantage, much like not wearing corrective lenses. That means tech firms such as Meta and Google will probably continue making today’s AI features an almost inescapable part of using their products, in a play to boost their training data and user numbers.

That said, the first big test of this AI reality check will come on Wednesday, when chipmaker Nvidia – whose chips are the building blocks of most LLMs – reports its latest earnings. Analysts seem pretty optimistic, but after a shaky week for its stock, investor reaction to Nvidia’s earnings and any updates on spending will be a strong signal of whether the appetite for the AI hype machine endures.

For the Singularity to Truly Arrive, We’d Need a Machine That Eats the Sun

However, if you’re rich and you don’t like the idea of a limit on computing, you can turn to futurism, longtermism, or “AI optimism,” depending on your favorite flavor. People in these camps believe in developing AI as fast as possible so we can (they claim) keep guardrails in place that will prevent AI from going rogue or becoming evil. (Today, people can’t seem to—or don’t want to—control whether or not their chatbots become racist, are “sensual” with children, or induce psychosis in the general population, but sure.)

The goal of these AI boosters is known as artificial general intelligence, or AGI. They theorize, or even hope for, an AI so powerful that it thinks like… well… a human mind whose ability is enhanced by a billion computers. If someone ever does develop an AGI that surpasses human intelligence, that moment is known as the AI singularity. (There are other, unrelated singularities in physics.) AI optimists want to accelerate the singularity and usher in this “godlike” AGI.

One of the key facts of computer logic is that, if you slow the processes down enough and look at them in enough detail, you can track and predict every single thing that a program will do. Algorithms (and not the opaque AI kind) guide everything within a computer. Over the decades, experts have specified the exact ways information can be sent, one bit—one minuscule electrical zap—at a time through a central processing unit (CPU).
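A minimal sketch of that traceability, using Python's standard sys.settrace hook (the example function and values are illustrative, not taken from the article): the interpreter reports every line it executes inside the traced call, in the same deterministic order on every run.

    import sys

    def gcd(a, b):
        # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
        while b:
            a, b = b, a % b
        return a

    def tracer(frame, event, arg):
        # Called by the interpreter for each event; log each line executed
        # inside traced calls.
        if event == "line":
            print(f"executing {frame.f_code.co_name}, line {frame.f_lineno}")
        return tracer

    sys.settrace(tracer)
    result = gcd(48, 18)
    sys.settrace(None)
    print("gcd(48, 18) =", result)   # same trace, same answer, every run

Run it twice and the trace is identical, which is the predictability the passage describes.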

“A Neural Cellular Automaton Model of Memory Transfer” by Etienne Guichard and Stefano Nichele

This is a ~1 hour 25 minute talk and Q&A discussion at our Center by Etienne Guichard (https://scholar.google.com/citations?user=FWNXN98AAAAJ&hl=en) and Stefano Nichele (https://www.nichele.eu/), titled “A Neural Cellular Automaton Model of Memory Transfer, with application to the ARC-AGI dataset”. Their preprint is here: https://arxiv.org/abs/2504.
