Trying to spot contraband is a tricky business. Not only is identifying items like narcotics and counterfeit merchandise difficult, but the most commonly used technology—X-rays—gives only a 2D view, and often a muddy one at that.

“It’s not like X-raying a tooth, where you just have a tooth,” said Eric Miller, professor of electrical and computer engineering at Tufts. Instead, it’s like X-raying a tooth and getting the entire dental exam room.

But Miller and his research team have now found a possible solution that uses AI to spot items that shouldn’t be there and is accurate 98% of the time. Their findings were published in Engineering Applications of Artificial Intelligence.

A team of math and AI researchers at Microsoft Asia has designed and developed a small language model (SLM) that can be used to solve math problems. The group has posted a paper on the arXiv preprint server outlining the technology and math behind the new tool and how well it has performed on standard benchmarks.

Over the past several years, multiple companies have been working hard to steadily improve their large language models (LLMs), resulting in AI products that have in a very short time become mainstream. Unfortunately, such tools require massive amounts of computing power, which means they consume a lot of electricity, making them expensive to maintain.

Because of that, some in the field have been turning to SLMs, which, as their name implies, are smaller and thus far less resource intensive. Some are small enough to run on a local device. One of the main ways AI researchers make the best use of SLMs is by narrowing their focus—instead of trying to answer any question about anything, they are designed to answer questions about something much more specific—like math. In this new effort, Microsoft has focused not just on solving math problems, but also on teaching an SLM how to reason its way through a problem.
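The paper does not publish code here, but the basic idea of asking a compact model to work through a problem step by step can be illustrated with a minimal, hypothetical sketch using the Hugging Face transformers library; the model name below is a placeholder, not the model described in the paper.

```python
# Illustrative sketch only: prompting a small language model to reason
# step by step through a math problem before giving a final answer.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "your-small-math-model"  # hypothetical placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

problem = (
    "A train travels 120 km in 2 hours. "
    "How far does it travel in 5 hours at the same speed?"
)

# Ask the model to show its intermediate reasoning, then the final answer.
prompt = (
    "Solve the following problem step by step, then state the final answer.\n"
    f"Problem: {problem}\nSolution:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```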

Join aerospace engineer Mike DiVerde for the latest updates from NASA’s Mars rovers! Get an insider’s look at Curiosity’s challenging journey through Gale Crater’s rocky terrain and Perseverance’s exciting expedition toward Witch Hazel Hill in Jezero Crater. This episode features exclusive Mars photos, current Martian weather readings, and fascinating details about Mars surface conditions that space enthusiasts won’t want to miss. Learn about the latest Mars discoveries as we explore real-time rover updates and the cutting-edge space technology that makes robotic exploration possible. Whether you’re interested in planetary science or simply curious about what’s happening on the Red Planet, this comprehensive Mars exploration update delivers the most recent findings from our mechanical explorers on Mars.

That’s the word from a new set of predictions for the decade ahead issued by Accenture, which highlights how our future is being shaped by AI-powered autonomy. By 2030, agents — not people — will be the “primary users of most enterprises’ internal digital systems,” the study’s co-authors state. By 2032, “interacting with agents surpasses apps in average consumer time spent on smart devices.”

This heralds a moment of transition, what the report’s primary author, Accenture CTO Karthik Narain, calls the Binary Big Bang. “When foundation models cracked the natural language barrier,” writes Narain, “they kickstarted a shift in our technology systems: how we design them, use them, and how they operate.”

🐤 Follow Me on Twitter https://twitter.com/TheAiGrid.
🌐 Check out My website — https://theaigrid.com/

Links From Today’s Video:
https://blog.samaltman.com/reflections.

00:00 — Why 2025 is crucial.
00:21 — AI Agents.
01:10 — Digital Employees.
02:20 — Klarna’s AI
03:39 — 11 Labs AI
05:00 — Workflow Agents.
06:08 — AI Deployment.
06:48 — Google’s 2026 Vision.
07:30 — Agent Reliability.
08:48 — Anthropic’s Agents.
09:44 — Current Benchmarks.
10:24 — GPT Improvements.
11:23 — Microsoft AI Teams.
12:01 — Sam Altman’s 2025 Vision.
12:31 — Superintelligence.
13:24 — OpenAI’s Tools.
14:00 — Upcoming AI Agent.

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries) [email protected].

The mention of gravity and quantum in the same sentence often elicits discomfort from theoretical physicists, yet the effects of gravity on quantum information systems cannot be ignored. In a recently announced collaboration between the University of Connecticut, Google Quantum AI, and the Nordic Institute for Theoretical Physics (NORDITA), researchers explored the interplay of these two domains, quantifying the nontrivial effects of gravity on transmon qubits.

Led by Alexander Balatsky of UConn’s Quantum Initiative, along with Google’s Pedram Roushan and NORDITA researchers Patrick Wong and Joris Schaltegger, the study focuses on the gravitational redshift. This phenomenon slightly detunes the energy levels of qubits based on their position in a gravitational field. While negligible for a single qubit, this effect becomes measurable when scaled.

While quantum computers can be effectively protected from electromagnetic radiation, quantum technology cannot at this point be shielded from the effects of gravity, barring some innovative antigravitic device expansive enough to hold an entire quantum computer. The team demonstrated that gravitational interactions create a universal dephasing channel, disrupting the coherence required for quantum operations. However, these same interactions could also be used to develop highly sensitive gravitational sensors.

“Our research reveals that the same finely tuned qubits engineered to process information can serve as precise sensors—so sensitive, in fact, that future quantum chips may double as practical gravity sensors. This approach is opening a new frontier in quantum technology.”

To explore these effects, the researchers modeled the gravitational redshift’s impact on energy-level splitting in transmon qubits. Gravitational redshift, a phenomenon predicted by Einstein’s general theory of relativity, occurs when light or electromagnetic waves traveling away from a massive object lose energy and shift to longer wavelengths. This happens because gravity alters the flow of time, causing clocks closer to a massive object to tick more slowly than those farther away.

Historically, gravitational redshift has played a pivotal role in confirming general relativity and is critical to technologies like GPS, where precise timing accounts for gravitational differences between satellites and the Earth’s surface. In this study, the researchers applied the concept to transmon qubits, modeling how gravitational effects subtly shift their energy states depending on their height in a gravitational field.

Using computational simulations and theoretical models, the team was able to quantify these energy-level shifts. While the effects are negligible for individual qubits, they become significant when scaled to arrays of qubits positioned at varying heights on vertically aligned chips, such as Google’s Sycamore chip.
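As a rough back-of-the-envelope illustration (these numbers are not taken from the study), the fractional detuning of a qubit raised by a height $\Delta h$ in Earth’s gravity follows the standard gravitational-redshift relation:

\[
\frac{\Delta f}{f} \;\approx\; \frac{g\,\Delta h}{c^{2}} \;\approx\; \frac{(9.8\ \mathrm{m/s^{2}})(10^{-2}\ \mathrm{m})}{(3\times 10^{8}\ \mathrm{m/s})^{2}} \;\sim\; 10^{-18},
\]

so a centimeter of height difference shifts a few-gigahertz transmon by only nanohertz-level amounts, consistent with the point that the effect is negligible for a single qubit yet becomes relevant when accumulated across arrays of qubits at different heights.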

The aurora borealis, or northern lights, is best known as a stunning spectacle of light in the night sky, but this near-Earth manifestation, which is driven by explosive activity on the sun and carried by the solar wind, can also interrupt vital communications and security infrastructure on Earth. Using artificial intelligence, researchers at the University of New Hampshire have categorized and labeled the largest-ever database of aurora images, a resource that could help scientists better understand and forecast disruptive geomagnetic storms.

The research, recently published in the Journal of Geophysical Research: Machine Learning and Computation, developed artificial intelligence and machine learning tools that successfully identified and classified over 706 million images of auroral phenomena in NASA’s Time History of Events and Macroscale Interactions during Substorms (THEMIS) data set, collected by twin spacecraft studying the space environment around Earth. THEMIS provides images of the night sky every three seconds, from sunset to sunrise, from 23 different stations across North America.

“The massive dataset is a valuable resource that can help researchers understand how the solar wind interacts with the Earth’s magnetosphere, the protective bubble that shields us from charged particles streaming from the sun,” said Jeremiah Johnson, associate professor of applied engineering and sciences and the study’s lead author. “But until now, its huge size limited how effectively we can use that data.”
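The team’s exact pipeline is not reproduced here, but the general approach—training a supervised image classifier to sort all-sky frames into auroral categories—can be sketched with a minimal, hypothetical PyTorch model; the class labels, image size, and architecture below are illustrative assumptions, not the UNH team’s published method.

```python
# Illustrative sketch only: a minimal convolutional classifier of the kind
# commonly used to label all-sky aurora images. NOT the published pipeline;
# class names and input size are assumptions for demonstration.
import torch
import torch.nn as nn

CLASSES = ["arc", "diffuse", "discrete", "cloudy", "clear"]  # hypothetical labels

class AuroraClassifier(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)             # (batch, 64, 1, 1)
        return self.head(x.flatten(1))   # (batch, num_classes)

# Classify a single grayscale 128x128 all-sky frame (random data as a stand-in).
model = AuroraClassifier()
frame = torch.rand(1, 1, 128, 128)
probs = model(frame).softmax(dim=1)
print({c: round(p.item(), 3) for c, p in zip(CLASSES, probs[0])})
```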