Waymo has expanded its robotaxi service to the general public in Los Angeles, allowing anyone with the Waymo One app to request a ride. This marks a significant step in autonomous vehicle technology, as Waymo continues to lead the industry with over 50,000 weekly passengers and a strong safety record.


Waymo on Tuesday opened its robotaxi service to anyone who wants a ride around Los Angeles, marking another milestone in the evolution of self-driving car technology since the company began as a secret project at Google 15 years ago.

The expansion comes eight months after Waymo began offering rides in Los Angeles to a limited group of passengers chosen from a waiting list that had ballooned to more than 300,000 people. Now, anyone with the Waymo One smartphone app will be able to request a ride around an 80-square-mile (207-square-kilometer) territory spanning the second largest U.S. city.

After Waymo received approval from California regulators to charge for rides 15 months ago, the company initially chose to launch its operations in San Francisco before offering a limited service in Los Angeles.

Researchers at Penn Engineering have developed PanoRadar, a system that uses radio waves and AI to provide robots with detailed 3D environmental views, even in challenging conditions like smoke and fog. This innovation offers a cost-effective alternative to LiDAR, enhancing robotic navigation and perception capabilities.


In the race to develop robust perception systems for robots, one persistent challenge has been operating in bad weather and harsh conditions. For example, traditional light-based vision sensors such as cameras or LiDAR (Light Detection And Ranging) fail in heavy smoke and fog.

However, nature has shown that vision doesn’t have to be constrained by light’s limitations—many organisms have evolved ways to perceive their environment without relying on light. Bats navigate using the echoes of sound waves, while sharks hunt by sensing electrical fields from their prey’s movements.

Though its designers label it a “robot wolf,” the platform presents itself as a powerful tactical tool likely aimed at military or security applications, where its design and capabilities stand to offer significant operational value.

The robot’s four-legged design is an immediate indicator of its…


At China’s Zhuhai Air Show, a new robotic quadruped known as the “robot wolf” stole the spotlight by demonstrating its ability to respond to real-time voice commands.

Author(s): Jesus Rodriguez. Originally published on Towards AI.

Google DeepMind has unexpectedly released the source code and model weights of AlphaFold 3 for academic use, marking a significant advance that could accelerate scientific discovery and drug development. The surprise announcement comes just weeks after the system’s creators, Demis Hassabis and John Jumper, were awarded the 2024 Nobel Prize in Chemistry for their work on protein structure prediction.

AlphaFold 3 represents a major leap beyond its predecessors. While AlphaFold 2 could predict protein structures, version 3 can model the complex interactions between proteins, DNA, RNA, and small molecules — the fundamental processes of life. This matters because understanding these molecular interactions drives modern drug discovery and disease treatment. Traditional methods of studying these interactions often require months of laboratory work and millions in research funding, with no guarantee of success.

The system’s ability to predict how proteins interact with DNA, RNA, and small molecules transforms it from a specialized tool into a comprehensive solution for studying molecular biology. This broader capability opens new paths for understanding cellular processes, from gene regulation to drug metabolism, at a scale previously out of reach.

Nvidia CEO Jensen Huang highlights the transformative impact of AI and accelerated computing across industries, emphasizing rapid growth, enhanced productivity, and the evolution of software development through innovations like the Omniverse and advanced GPUs.