
Scientists create a “time crystal” using giant atoms, a concept long thought to be impossible

Recent studies have already used Rydberg vapors to detect radio‑frequency fields with extreme sensitivity.

Persistent, phase‑locked oscillations promise low‑phase‑noise signals useful for clock recovery, precision spectroscopy, and perhaps gravitational‑wave detection, where any self‑referencing oscillator could serve as a phase tag.

On the theory side, researchers now have a platform for mapping phase diagrams that include stationary, bistable, and time‑crystalline regimes.
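To make "mapping phase diagrams" concrete, here is a minimal numerical sketch, assuming a generic driven-dissipative mean-field model rather than the experiment's actual equations: integrate the dynamics at each drive strength, discard the transient, and classify the tail as stationary or persistently oscillating. Every parameter (Ω, Δ, V, Γ, thresholds) is an illustrative assumption.

```python
# Toy sketch: classify late-time dynamics of a driven-dissipative
# mean-field model as stationary vs. oscillatory. The model and all
# parameters are illustrative assumptions, not the experiment's
# actual equations.
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 1.0  # decay rate (sets the unit of time)

def bloch(t, s, omega, delta, v):
    """Mean-field optical Bloch equations with a density-dependent
    detuning delta_eff = delta - v * rho, a standard toy model for
    interacting driven atoms. s = (u, v_coh, w); rho = (w + 1) / 2."""
    u, vc, w = s
    rho = (w + 1.0) / 2.0
    d_eff = delta - v * rho
    du = d_eff * vc - 0.5 * GAMMA * u
    dv = -d_eff * u - 0.5 * GAMMA * vc - omega * w
    dw = omega * vc - GAMMA * (w + 1.0)
    return [du, dv, dw]

def classify(omega, delta, v, t_end=200.0):
    """Integrate, discard the transient, and inspect the tail of rho."""
    sol = solve_ivp(bloch, (0.0, t_end), [0.0, 0.0, -1.0],
                    args=(omega, delta, v), dense_output=True, rtol=1e-8)
    t_tail = np.linspace(0.75 * t_end, t_end, 2000)
    rho = (sol.sol(t_tail)[2] + 1.0) / 2.0
    # Illustrative threshold: persistent tail oscillations -> limit cycle.
    return "oscillatory" if rho.std() > 1e-4 else "stationary"

# Scan the drive strength at fixed (assumed) detuning and interaction.
for omega in (0.5, 2.0, 4.0, 8.0):
    print(f"Omega/Gamma = {omega:.1f}: {classify(omega, delta=-4.0, v=10.0)}")
```

In this single-component toy model the scan mainly separates stationary from bistable branches; genuine time-crystalline limit cycles show up in richer multi-component models, but the classify-the-tail procedure carries over unchanged.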

Tesla FSD Competitors Admit DEFEAT: “Elon Was Right”

Questions to inspire discussion.

Safety and Performance.

🛡️ Q: How does Tesla’s full self-driving system compare to human driving in terms of safety? A: According to Elon Musk, Tesla’s end-to-end neural networks, trained on massive video datasets, have proven dramatically safer than the average human driver.

⚡ Q: What recent hardware upgrade has improved Tesla’s full self-driving capabilities? A: Tesla’s AI4 hardware has been upgraded to run at 150–200 watts, enabling more complex neural networks and faster decision-making, with processing at 36 frames per second.

Scalability and Efficiency.

📈 Q: Why is Tesla’s vision-only approach considered more scalable than competitors’ methods? A: Competitors depend on multiple sensor types, sensor fusion, and high-definition maps; Tesla’s vision-only approach avoids those dependencies and is therefore easier to scale, as stated by BU’s Robin Lee.

MIT’s new AI can teach itself to control robots by watching the world through their eyes — it only needs a single camera

This framework is made up of two key components, sketched in code below. The first is a deep-learning model that allows the robot to determine where it and its appendages are in three-dimensional space, and to predict how its position will change as specific movement commands are executed. The second is a machine-learning program that translates generic movement commands into code a robot can understand and execute.
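As a concrete (and heavily simplified) sketch of that two-part split: suppose the learned model exposes a function that returns, for each tracked 3D point, a Jacobian relating actuator commands to that point's motion; turning a generic motion goal into actuator values then reduces to a least-squares solve. `jacobian_field`, the array shapes, and the four-actuator setup are all hypothetical stand-ins, not the paper's API.

```python
# Minimal sketch of Jacobian-field-style control (hypothetical API).
# A learned model predicts, for each tracked 3D point, how that point
# moves per unit change of each actuator command; we then solve a
# least-squares problem for the command that best realizes a desired
# motion. `jacobian_field` is a stand-in for the learned network.
import numpy as np

def jacobian_field(points: np.ndarray) -> np.ndarray:
    """Stand-in for the learned model: (N, 3) points -> (N, 3, M)
    Jacobians, where M is the number of actuator commands. A real
    system would evaluate a trained network on the camera view."""
    n, m = len(points), 4                    # M = 4 actuators (assumed)
    rng = np.random.default_rng(0)           # placeholder weights
    return rng.normal(size=(n, 3, m))

def solve_command(points, desired_motion):
    """Find the command u minimizing sum_i ||J_i u - v_i||^2."""
    J = jacobian_field(points)               # (N, 3, M)
    A = J.reshape(-1, J.shape[-1])           # stack point rows: (3N, M)
    b = desired_motion.reshape(-1)           # (3N,)
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

points = np.array([[0.1, 0.0, 0.5], [0.2, 0.1, 0.4]])    # tracked points
target = np.array([[0.0, 0.0, 0.01], [0.0, 0.0, 0.01]])  # move up 1 cm
print("command:", solve_command(points, target))
```

The least-squares step is what lets the same controller drive very different robot bodies: only the learned Jacobians change, not the solver.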

The team tested the new training and control paradigm by benchmarking its effectiveness against traditional camera-based control methods. The Jacobian field solution surpassed those existing 2D control systems in accuracy — especially when the team introduced visual occlusion that caused the older methods to enter a fail state. Machines using the team’s method, however, successfully created navigable 3D maps even when scenes were partially occluded with random clutter.

Once the scientists had developed the framework, they applied it to robots with widely varying architectures. The end result was a control program that can train and operate a robot using only a single video camera, with no further human intervention.

Can AI really code? Study maps the roadblocks to autonomous software engineering

Imagine a future where artificial intelligence quietly shoulders the drudgery of software development: refactoring tangled code, migrating legacy systems, and hunting down race conditions, so that human engineers can devote themselves to architecture, design, and the genuinely novel problems still beyond a machine’s reach.

Recent advances appear to have nudged that future tantalizingly close, but a new paper by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and several collaborating institutions argues that reaching it demands a hard look at present-day challenges.

Titled “Challenges and Paths Towards AI for Software Engineering,” the work maps the many software-engineering tasks beyond code generation, identifies current bottlenecks, and highlights research directions to overcome them, aiming to let humans focus on high-level design while routine work is automated. The paper is available on the arXiv preprint server, and the researchers are presenting their work at the International Conference on Machine Learning (ICML 2025) in Vancouver.

Researchers demonstrate room-temperature lasing in photonic-crystal surface-emitting laser

In a first for the field, researchers from The Grainger College of Engineering at the University of Illinois Urbana-Champaign have reported photopumped lasing from a buried-dielectric photonic-crystal surface-emitting laser operating at room temperature and emitting at an eye-safe wavelength. Their findings, published in IEEE Photonics Journal, improve upon current laser design and open new avenues for defense applications.

For decades, the lab of Kent Choquette, professor of electrical and computer engineering, has explored vertical-cavity surface-emitting lasers (VCSELs), a type of surface-emitting laser used in common technology like smartphones, laser printers, barcode scanners, and even vehicles. But in early 2020, the Choquette lab became interested in groundbreaking research from a Japanese group that introduced a new type of laser called photonic-crystal surface-emitting lasers, or PCSELs.

PCSELs are a newer class of semiconductor lasers that use a photonic crystal layer to produce a beam with highly desirable characteristics such as high brightness and a narrow, round spot size. This type of laser is useful for defense applications such as LiDAR, a remote sensing technology used in battlefield mapping, navigation, and target tracking. With funding from the Air Force Research Laboratory, Choquette’s group wanted to examine this new technology and make their own advancements in the growing field.

Stoichiometric crystal shows promise in quantum memory

For over two decades, physicists have been working toward implementing quantum light storage—also known as quantum memory—in various matter systems. These techniques allow for the controlled and reversible mapping of light particles called photons onto long-lived states of matter. But storing light for long periods without compromising its retrieval efficiency is a difficult task.
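To see why that trade-off bites, here is a back-of-the-envelope sketch, assuming retrieval efficiency decays exponentially with storage time, η(t) = η₀·e^(−t/T₂); real memories follow protocol-specific decay laws, and the η₀ and T₂ below are made-up values.

```python
# Toy model of the storage-time / efficiency trade-off in a quantum
# memory: retrieval efficiency decaying exponentially with storage
# time, eta(t) = eta0 * exp(-t / T2). eta0 and T2 are assumed values;
# real memories follow protocol-specific (not always exponential) laws.
import math

ETA0 = 0.56   # assumed initial retrieval efficiency
T2 = 1.0e-3   # assumed coherence time: 1 ms

def retrieval_efficiency(t_storage: float) -> float:
    return ETA0 * math.exp(-t_storage / T2)

for t in (1e-6, 1e-4, 1e-3, 1e-2):
    print(f"store {t:8.0e} s -> eta = {retrieval_efficiency(t):.3f}")
```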

NASA’s SPHEREx Is Mapping the Infrared Universe in 102 Colors — And It’s All Public

SPHEREx is scanning the entire sky in 102 infrared colors and beaming weekly data to a public archive, so scientists and citizen stargazers alike can trace water, organics, and the universe’s first moments while NASA’s open-science philosophy turbo-charges discovery.
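Since the weekly releases land in NASA's public IRSA archive, they can be queried programmatically. A minimal sketch with astroquery's generic IRSA interface follows; the catalog name is a placeholder, since the real SPHEREx table identifiers depend on the data release.

```python
# Hedged sketch: cone-search a public IRSA catalog near a sky position
# with astroquery. "spherex_catalog" is a placeholder name; substitute
# the actual SPHEREx table identifier from the IRSA release notes.
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.ipac.irsa import Irsa

pos = SkyCoord(ra=150.1 * u.deg, dec=2.2 * u.deg, frame="icrs")
table = Irsa.query_region(pos, catalog="spherex_catalog",  # placeholder
                          spatial="Cone", radius=30 * u.arcsec)
print(table)
```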

AI Maps the Mood of Your City — And It’s Surprisingly Accurate

What if a city’s mood could be mapped like weather? Researchers at the University of Missouri are using AI to do exactly that—by analyzing geotagged Instagram posts and pairing them with Google Street View images, they’re building emotional maps of urban spaces.

These “sentiment maps” reveal how people feel in specific locations, helping city planners design areas that not only function better but also feel better. With potential applications ranging from safety to disaster response, this human-centered tech could soon become part of the city’s real-time dashboard.
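A minimal sketch of the aggregation step behind such a map, assuming per-post sentiment scores are already in hand (the posts, scores, scoring model, and grid resolution below are all illustrative assumptions):

```python
# Minimal sketch: aggregate geotagged sentiment scores onto a spatial
# grid to form a "sentiment map". The posts, their scores, and the
# grid resolution are illustrative assumptions, not the study's data.
import numpy as np

# (lat, lon, sentiment in [-1, 1]) -- e.g., from a text classifier
posts = np.array([
    [38.951, -92.334, 0.8],
    [38.952, -92.330, 0.4],
    [38.940, -92.320, -0.6],
    [38.941, -92.321, -0.2],
])

def sentiment_grid(posts, cell_deg=0.005):
    """Average sentiment per grid cell; returns dict cell -> mean."""
    cells = {}
    for lat, lon, s in posts:
        key = (round(lat / cell_deg), round(lon / cell_deg))
        cells.setdefault(key, []).append(s)
    return {k: float(np.mean(v)) for k, v in cells.items()}

for cell, mood in sentiment_grid(posts).items():
    print(f"cell {cell}: mean sentiment {mood:+.2f}")
```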


Growing evidence for evolving dark energy could inspire a new model of the universe

The birth, growth and future of our universe are eternally fascinating.

In recent decades, telescopes have been able to observe the skies with unprecedented precision and sensitivity.

Our research team on the South Pole Telescope is studying how the universe has evolved over time. We have just released two years’ worth of maps of the infant universe covering 1/25th of the sky.

A machine-learning–powered spectral-dominant multimodal soft wearable system for long-term and early-stage diagnosis of plant stresses

MapS-Wear, a soft plant wearable, enables precise, in situ, and early-stage stress diagnosis to boost crop yield and quality.