
Modern imaging systems, such as those used in smartphones, virtual reality (VR), and augmented reality (AR) devices, are constantly evolving to become more compact, efficient, and high-performing. Traditional optical systems rely on bulky glass lenses, which have limitations like chromatic aberrations, low efficiency at multiple wavelengths, and large physical sizes. These drawbacks present challenges when designing smaller, lighter systems that still produce high-quality images.

MIT CSAIL researchers have developed a generative AI system, LucidSim, that trains robots in virtual environments for real-world navigation. By combining ChatGPT with physics simulators, the system teaches robots to traverse complex terrain, and it outperforms traditional training methods, suggesting a new direction for robot training.


A team of roboticists and engineers at MIT CSAIL and the Institute for AI and Fundamental Interactions has developed a generative AI approach to teaching robots how to traverse terrain and move around objects in the real world.

The group has published a paper on the arXiv preprint server describing their work and its possible applications. They also presented their ideas at the recent Conference on Robot Learning (CoRL 2024), held in Munich Nov. 6–9.

Getting robots to navigate the real world generally involves either teaching them to learn on the fly or training them with videos of similar robots operating in real-world environments. While such training has proven effective in limited settings, it tends to fail when a robot encounters something novel. In this new effort, the team at MIT developed virtual training that translates better to the real world.
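
For readers curious what such a pipeline might look like in code, here is a minimal, purely illustrative Python sketch of the general "generate varied virtual scenes, then train in simulation" idea described above. Every function and name in it (generate_scene_prompt, render_in_simulator, update_policy) is a hypothetical stand-in, not the actual LucidSim code or API.

```python
# Illustrative sketch only: a "generate scenes, then train in simulation" loop.
# All functions below are hypothetical placeholders, not the LucidSim API.
import random

def generate_scene_prompt(seed: int) -> str:
    # Stand-in for a language model producing a varied terrain description.
    terrains = ["mossy stairs", "gravel slope", "cluttered hallway", "wet rocks"]
    return f"quadruped traversing {random.Random(seed).choice(terrains)}"

def render_in_simulator(prompt: str) -> dict:
    # Stand-in for a physics simulator plus generative imagery producing
    # a simulated rollout for the described scene.
    return {"observation": f"image of {prompt}", "terrain_difficulty": random.random()}

def update_policy(policy: dict, episode: dict) -> dict:
    # Stand-in for one learning update on the simulated episode.
    policy["episodes_seen"] += 1
    policy["avg_difficulty"] += (
        episode["terrain_difficulty"] - policy["avg_difficulty"]
    ) / policy["episodes_seen"]
    return policy

policy = {"episodes_seen": 0, "avg_difficulty": 0.0}
for seed in range(1000):                      # many diverse virtual scenes
    prompt = generate_scene_prompt(seed)      # text description of a new scene
    episode = render_in_simulator(prompt)     # simulated rollout in that scene
    policy = update_policy(policy, episode)   # learn before real-world deployment
```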

Artificial Intelligence is everywhere in Europe.

While some are worried about its long-term impact, a team of researchers at the University of Technology in Vienna is working on responsible ways to use AI.



From industry to healthcare to the media and even the creative arts, artificial intelligence is already having an impact on our daily lives. It’s hailed by advocates as a gift to humanity, but others worry about the long-term effects on society.

Developed and fiercely defended by some, criticised if not openly feared by others, artificial intelligence is the phrase on everyone’s lips, generating passionate hopes but also widespread concerns throughout the European Union. Who are the potential winners, and who are the potential losers of this new digital revolution in the making? We travelled to Austria and Estonia to try to find out.

Around three-quarters of European employees have already had practical experience with AI. Artificial intelligence is already being used to develop new virtual reality tools, to transcribe medieval manuscripts, and to help design autonomous vehicles and futuristic buildings. But its use is also raising concerns in schools and universities, while workers and trade unions fear its effect on certain job categories.

Shaking hands with a character from the Fortnite video game. Visualizing a patient’s heart in 3D—and “feeling” it beat. Touching the walls of the Roman Colosseum—from your sofa in Los Angeles. What if we could touch and interact with things that aren’t physically in front of us? This reality might be closer than we think, thanks to an emerging technology: the holodeck.

The name might sound familiar. In Star Trek: The Next Generation, the holodeck was an advanced 3D virtual reality world that created the illusion of solid objects. Now, immersive technology researchers at USC and beyond are taking us one step closer to making this science fiction concept a science fact.

On Dec. 15, USC hosted the first International Conference on Holodecks. Organized by Shahram Ghandeharizadeh, a USC associate professor of computer science, the conference featured keynotes, papers and presentations from researchers at USC, Brown University, UCLA, University of Colorado, Stanford University, New Jersey Institute of Technology, UC-Riverside, and haptic technology company UltraLeap.

Wetware computing and organoid intelligence form an emerging research field at the intersection of electrophysiology and artificial intelligence. The core concept involves using living neurons to perform computations, similar to how Artificial Neural Networks (ANNs) are used today. However, unlike ANNs, where updating digital tensors (weights) can instantly modify network responses, entirely new methods must be developed for neural networks using biological neurons. Discovering these methods is challenging and requires a system capable of conducting numerous experiments, ideally accessible to researchers worldwide. For this reason, we developed a hardware and software system that allows for electrophysiological experiments on an unmatched scale. The Neuroplatform enables researchers to run experiments on neural organoids with lifetimes exceeding 100 days. To do so, we streamlined the experimental process to quickly produce new organoids, monitor action potentials 24/7, and provide electrical stimulations. We also designed a microfluidic system that allows for fully automated medium flow and change, thus reducing disruptions from physical interventions in the incubator and ensuring stable environmental conditions. Over the past three years, the Neuroplatform has been used with over 1,000 brain organoids, enabling the collection of more than 18 terabytes of data. A dedicated Application Programming Interface (API) has been developed to conduct remote research directly via our Python library or using interactive computing environments such as Jupyter Notebooks. In addition to electrophysiological operations, our API also controls pumps, digital cameras and UV lights for molecule uncaging. This allows for the execution of complex 24/7 experiments, including closed-loop strategies and processing using the latest deep learning or reinforcement learning libraries. Furthermore, the infrastructure supports entirely remote use. As of 2024, the system is freely available for research purposes, and numerous research groups have begun using it for their experiments. This article outlines the system’s architecture and provides specific examples of experiments and results.
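
To give a sense of how such remote, closed-loop experiments might be scripted, here is a minimal Python sketch. The client class and its methods (NeuroClient, read_spikes, stimulate, set_pump_rate) are illustrative placeholders and do not correspond to the platform's actual Python library.

```python
# Hypothetical sketch of a closed-loop organoid experiment driven over a remote
# API. Names and thresholds are invented for illustration only.
import time

class NeuroClient:
    """Stand-in for a remote electrophysiology client."""
    def __init__(self, organoid_id: str):
        self.organoid_id = organoid_id
    def read_spikes(self, window_s: float) -> int:
        # Would return the number of action potentials detected in the window.
        return 0
    def stimulate(self, electrode: int, amplitude_ua: float) -> None:
        # Would deliver a brief electrical stimulation on one electrode.
        pass
    def set_pump_rate(self, ul_per_min: float) -> None:
        # Would adjust the microfluidic medium flow.
        pass

client = NeuroClient(organoid_id="demo-organoid")
client.set_pump_rate(5.0)                     # keep medium flowing during the run

# Simple closed loop: stimulate only when spontaneous activity drops.
for step in range(60):
    spikes = client.read_spikes(window_s=1.0)
    if spikes < 10:                           # arbitrary illustrative threshold
        client.stimulate(electrode=3, amplitude_ua=2.0)
    time.sleep(1.0)
```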

The recent rise of wetware computing and, consequently, of artificial biological neural networks (BNNs) comes at a time when Artificial Neural Networks (ANNs) are more sophisticated than ever.

The latest generation of Large Language Models (LLMs), such as Meta’s Llama 2 or OpenAI’s GPT-4, fundamentally rely on ANNs.

Adeno-associated virus (AAV) is a well-known gene delivery tool with a wide range of applications, including as a vector for gene therapies. However, the molecular mechanism of its cell entry remains unknown. Here, we performed coarse-grained molecular dynamics simulations of the AAV serotype 2 (AAV2) capsid and the universal AAV receptor (AAVR) in a model plasma membrane environment. Our simulations show that binding of the AAV2 capsid to the membrane induces membrane curvature, along with the recruitment and clustering of GM3 lipids around the AAV2 capsid. We also found that the AAVR binds to the AAV2 capsid at the VR-I loops using its PKD2 and PKD3 domains, whose binding poses differ from those reported in previous structural studies. These first molecular-level insights into AAV2 membrane interactions suggest a complex process during the initial phase of AAV2 capsid internalization.
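
As an illustration of the kind of post-processing such simulations involve, the sketch below counts GM3 lipid beads found near the capsid in each trajectory frame using the MDAnalysis library. It is not the authors' analysis code; the file names and selection strings ("system.tpr", "traj.xtc", "protein", "resname GM3") are assumptions that would need to match the actual coarse-grained topology.

```python
# Illustrative analysis sketch (not the authors' code): per-frame count of GM3
# lipid beads within a cutoff of any capsid bead, as a rough clustering metric.
import MDAnalysis as mda
from MDAnalysis.analysis import distances

u = mda.Universe("system.tpr", "traj.xtc")      # hypothetical input files
capsid = u.select_atoms("protein")              # AAV2 capsid beads
gm3 = u.select_atoms("resname GM3")             # GM3 lipid beads (name may differ)

CUTOFF = 6.0  # angstroms; illustrative contact cutoff for coarse-grained beads
contacts_per_frame = []
for ts in u.trajectory:
    d = distances.distance_array(gm3.positions, capsid.positions, box=ts.dimensions)
    # A GM3 bead counts as "in contact" if it lies within CUTOFF of any capsid bead.
    contacts_per_frame.append(int((d.min(axis=1) < CUTOFF).sum()))

print("Mean GM3 beads near the capsid:",
      sum(contacts_per_frame) / len(contacts_per_frame))
```

A rising trend in this count over the trajectory would be consistent with the GM3 recruitment and clustering the abstract describes.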

Disney is adding another layer to its AI and extended reality strategies. As first reported by Reuters, the company recently formed a dedicated emerging technologies unit. Dubbed the Office of Technology Enablement, the group will coordinate the company’s exploration, adoption and use of artificial intelligence, AR and VR tech.

It has tapped Jamie Voris, previously the CTO of its Studios Technology division, to oversee the effort. Before joining Disney in 2010, Voris was the chief technology officer at the National Football League. More recently, he led the development of the company’s Apple Vision Pro app. Voris will report to Alan Bergman, the co-chairman of Disney Entertainment. Reuters reports the company eventually plans to grow the group to about 100 employees.

“The pace and scope of advances in AI and XR are profound and will continue to impact consumer experiences, creative endeavors, and our business for years to come — making it critical that Disney explore the exciting opportunities and navigate the potential risks,” Bergman wrote in an email Disney shared with Engadget. “The creation of this new group underscores our dedication to doing that and to being a positive force in shaping responsible use and best practices.”

Researchers have developed a new type of bifocal lens that offers a simple way to achieve two foci (or spots) with intensities that can be adjusted by applying external voltage. The lenses, which use two layers of liquid crystal structures, could be useful for various applications such as optical interconnections, biological imaging, augmented/virtual reality devices and optical computing.

A virtual haptic implementation technology that allows all users to experience the same tactile sensation has been developed. A research team led by Professor Park Jang-Ung from the Center for Nanomedicine within the Institute for Basic Science (IBS) and Professor Jung Hyun Ho from Severance Hospital’s Department of Neurosurgery has developed a technology that provides consistent tactile sensations on displays.

This research was conducted in collaboration with colleagues from Yonsei University Severance Hospital. It was published in Nature Communications on August 21, 2024.

Virtual haptic implementation technology, also known as tactile rendering technology, refers to the methods and systems that simulate the sense of touch in a virtual environment. This technology aims to create the sensation of physical contact with virtual objects, enabling users to feel textures, shapes, and forces as if they were interacting with real-world items, even though the objects are digital.
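
One common, generic way tactile rendering computes feedback is a penalty (spring) model, in which the displayed force grows with how far the fingertip penetrates a virtual surface. The Python sketch below illustrates that general idea only; it is not the specific method developed by this team, and the class, parameters, and values are invented for illustration.

```python
# Generic penalty-based tactile rendering sketch: force proportional to how far
# the fingertip has penetrated a virtual surface. Purely illustrative.
from dataclasses import dataclass

@dataclass
class VirtualSurface:
    height: float      # z-position of the surface (metres)
    stiffness: float   # spring constant (N/m)

def rendered_force(finger_z: float, surface: VirtualSurface) -> float:
    """Return the upward force (N) to display for a fingertip at height finger_z."""
    penetration = surface.height - finger_z
    if penetration <= 0.0:
        return 0.0                          # no contact, no force
    return surface.stiffness * penetration  # Hooke's-law-style penalty force

wall = VirtualSurface(height=0.0, stiffness=800.0)
print(rendered_force(finger_z=-0.002, surface=wall))  # fingertip 2 mm inside -> 1.6 N
```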

Researchers at the University of Toronto have found that using virtual and augmented reality (VR and AR) can temporarily change the way people perceive and interact with the real world—with potential implications for the growing number of industries that use these technologies for training purposes.

The study, published recently in the journal Scientific Reports, not only found that people moved differently in VR and AR, but that these changes led to temporary errors in movement in the real world. In particular, participants who used VR tended to undershoot their targets by not reaching far enough, while those who used AR tended to overshoot their targets by reaching too far.

This effect was noticeable immediately after using VR or AR, but gradually disappeared as participants readjusted to the real world.