Novel AI tool opens 3D modeling to blind and low-vision programmers

Blind and low-vision programmers have long been locked out of three-dimensional modeling software, which depends on sighted users dragging, rotating and inspecting shapes on screen.

Now, a multiuniversity research team has developed A11yShape, a new tool designed to help blind and low-vision programmers independently create, inspect and refine three-dimensional models. The study is published on the arXiv preprint server.

The team consists of Anhong Guo, assistant professor of electrical engineering and computer science at the University of Michigan, and researchers from the University of Texas at Dallas, University of Washington, Purdue University and several partner institutions—including Gene S-H Kim of Stanford University, a member of the blind and low-vision community.

“The Embodied Mind of a New Robot Scientist” by Michael Levin

This is a ~58-minute talk titled “The Embodied Mind of a New Robot Scientist: symmetries between AI and bioengineering the agential material of life and their impact on technology and on our future,” which I gave as the closing keynote at the ALIFE conference in Japan (https://2025.alife.org/). It is different from any talk I’ve given before: besides covering the remarkable capacities of living material, I discuss 1) the symmetries between how all agents navigate their world and how scientific discoveries are made, and 2) a new robot scientist platform that we have created. With respect to the latter, I discuss how the body and mind of this new embodied AI can serve as a translation and integration layer between human scientists and living matter, such as the cells that make up Xenobots.

Google is powering Belgium’s digital future with a two-year €5 billion investment in AI infrastructure

Google is investing an additional €5 billion in Belgium over the next two years to expand its cloud and AI infrastructure. This includes expanding our data center campuses in Saint-Ghislain and will add another 300 full-time jobs. We’ve also announced new agreements with Eneco, Luminus and Renner, which will support the development of new onshore wind farms and supply the grid with clean energy.

Our commitment goes beyond infrastructure. We’re also equipping Belgians, at no cost, with the skills needed to thrive in an AI-driven economy, and we will fund non-profits to provide free, practical AI training for low-skilled workers.

This is an extraordinary time for European innovation and for Europe’s digital and economic future. Google is deepening its roots in Belgium and investing in its residents to unlock significant economic opportunities for the country, helping to ensure it remains a leader in technology and AI.

The Limits of AI: Generative AI, NLP, AGI, & What’s Next?

Ready to become a certified watsonx AI Assistant Engineer v1? Register now and use code IBMTechYT20 for 20% off your exam → https://ibm.biz/BdeNSk.

Learn more about Limits of Generative AI here → https://ibm.biz/BdeNSt.

🤖 How far can AI go? Jeff Crume examines generative AI, NLP, and AGI, unpacking solved milestones like reasoning and creativity while tackling ongoing challenges like hallucinations and sustainability. Learn about the limits of AI and its role alongside humans in shaping the future.

AI news moves fast. Sign up for a monthly newsletter for AI updates from IBM → https://ibm.biz/Bde7Sz.

#ailimit #futureofai #aievolution

Hardware vulnerability allows attackers to hack AI training data

Researchers from NC State University have identified the first hardware vulnerability that allows attackers to compromise the data privacy of artificial intelligence (AI) users by exploiting the physical hardware on which AI is run.

The paper, “GATEBLEED: A Timing-Only Membership Inference Attack, MoE-Routing Inference, and a Stealthy, Generic Magnifier Via Hardware Power Gating in AI Accelerators,” will be presented at the IEEE/ACM International Symposium on Microarchitecture (MICRO 2025), being held Oct. 18–22 in Seoul, South Korea. The paper is currently available on the arXiv preprint server.

“What we’ve discovered is an AI privacy attack,” says Joshua Kalyanapu, first author of a paper on the work and a Ph.D. student at North Carolina State University. “Security attacks refer to stealing things actually stored somewhere in a system’s memory—such as stealing an AI model itself or stealing the hyperparameters of the model. That’s not what we found. Privacy attacks steal stuff not actually stored on the system, such as the data used to train the model and attributes of the data input to the model. These facts are leaked through the behavior of the AI model. What we found is the first vulnerability that allows successfully attacking AI privacy via hardware.”
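
The membership-inference idea behind such privacy attacks can be illustrated with a short sketch. The Python below is a hypothetical, simplified illustration only, not the GATEBLEED attack itself: the generic `model` callable, the repeat count, and the pre-calibrated latency threshold are all assumptions introduced here to show how timing alone, rather than model outputs, could be used to guess whether a sample was in the training data.

```python
# Hypothetical sketch of a timing-only membership-inference test.
# The model, threshold, and measurement loop are illustrative
# assumptions; the actual GATEBLEED attack exploits power-gating
# behavior in AI accelerators, which is not reproduced here.
import time
import statistics

def median_latency(model, sample, repeats=100):
    """Median wall-clock latency of running `model` on `sample`."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        model(sample)  # forward pass only; the output is discarded
        times.append(time.perf_counter() - start)
    return statistics.median(times)

def guess_membership(model, sample, threshold):
    """
    Guess whether `sample` was part of the model's training data,
    using latency alone. `threshold` would be calibrated beforehand
    on inputs whose membership status is known; the direction of the
    comparison depends on how the hardware's timing differs for
    familiar versus unfamiliar inputs.
    """
    return median_latency(model, sample) < threshold
```

The point of the sketch is that the attacker never reads memory or model outputs; any consistent, measurable timing difference between training-set and non-training-set inputs is enough to leak membership.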

Theoretical Foundations of Artificial General Intelligence

This book is a collection of writings by active researchers in the field of Artificial General Intelligence, on topics of central importance in the field. Each chapter focuses on one theoretical problem, proposes a novel solution, and is written in sufficiently non-technical language to be understandable by advanced undergraduates or scientists in allied fields.

This book is the first collection in the field of Artificial General Intelligence (AGI) to focus on theoretical, conceptual, and philosophical issues in the creation of thinking machines. All of the authors are researchers actively developing AGI projects, distinguishing the book from much of the theoretical cognitive science and AI literature, which is generally quite divorced from practical AGI system-building concerns.