
Chinese scientists confirm AI capable of spontaneously forming human-level cognition

Can artificial intelligence (AI) recognize and understand things the way human beings do? By combining behavioral experiments with neuroimaging, Chinese research teams have confirmed for the first time that multimodal large language models (LLMs) can spontaneously form an object-concept representation system highly similar to that of humans. Put simply, the scientists say, AI can spontaneously develop human-level cognition.
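As an illustration of the kind of analysis such studies perform, the sketch below uses representational similarity analysis to correlate a model’s object-similarity structure with human similarity judgments. Everything here is a random placeholder, not the study’s data or methods.

```python
# A minimal sketch: correlate an AI model's object-similarity structure
# with human similarity ratings (representational similarity analysis).
# All data below are random stand-ins, not the study's materials.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, dim = 20, 64

# Hypothetical stand-ins: model embeddings and human similarity ratings.
model_emb = rng.normal(size=(n_objects, dim))
human_sim = rng.uniform(size=(n_objects, n_objects))
human_sim = (human_sim + human_sim.T) / 2  # symmetrize the ratings

# Cosine similarity between every pair of object embeddings.
norm = model_emb / np.linalg.norm(model_emb, axis=1, keepdims=True)
model_sim = norm @ norm.T

# Compare the two similarity structures on the off-diagonal entries.
iu = np.triu_indices(n_objects, k=1)
rho, p = spearmanr(model_sim[iu], human_sim[iu])
print(f"model-human representational alignment: rho={rho:.3f} (p={p:.3g})")
```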

The study was conducted by research teams from the Institute of Automation, Chinese Academy of Sciences (CAS); the Institute of Neuroscience, CAS; and other collaborators.

The research paper was published online in Nature Machine Intelligence on June 9. The paper states that the findings advance the understanding of machine intelligence and inform the development of more human-like artificial cognitive systems.

Introducing the V-JEPA 2 world model and new benchmarks for physical reasoning

Today, we’re excited to share V-JEPA 2, the first world model trained on video that enables state-of-the-art understanding and prediction, as well as zero-shot planning and robot control in new environments. As we work toward our goal of achieving advanced machine intelligence (AMI), it will be important that we have AI systems that can learn about the world as humans do, plan how to execute unfamiliar tasks, and efficiently adapt to the ever-changing world around us.

V-JEPA 2 is a 1.2-billion-parameter model built with Meta’s Joint Embedding Predictive Architecture (JEPA), which we first shared in 2022. Our previous work has shown that JEPA performs well on modalities like images and 3D point clouds. Building on V-JEPA, our first video-trained model released last year, V-JEPA 2 improves the action-prediction and world-modeling capabilities that let robots interact with unfamiliar objects and environments to complete a task. We’re also sharing three new benchmarks to help the research community evaluate how well their existing models learn and reason about the world from video. By sharing this work, we aim to give researchers and developers access to the best models and benchmarks to accelerate research and progress, ultimately leading to better and more capable AI systems that will help enhance people’s lives.
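As publicly described, the core JEPA idea is to predict in representation space rather than pixel space: a predictor maps the embedding of visible context to the embedding of masked targets. The sketch below illustrates that loss in a few lines; the modules and shapes are invented stand-ins, not V-JEPA 2’s actual architecture.

```python
# A minimal sketch of the JEPA training objective: predict the embedding
# of masked (target) patches from the embedding of visible (context)
# patches, with the loss computed in representation space, not pixels.
# All modules and shapes are illustrative placeholders.
import torch
import torch.nn as nn

dim = 128
context_encoder = nn.Linear(dim, dim)   # stands in for a video transformer
target_encoder = nn.Linear(dim, dim)    # typically an EMA copy of the above
predictor = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

video_patches = torch.randn(8, dim)          # 8 fake patch features
context, target = video_patches[:6], video_patches[6:]

pred = predictor(context_encoder(context).mean(0, keepdim=True))
with torch.no_grad():                        # targets are not backpropagated
    tgt = target_encoder(target).mean(0, keepdim=True)

loss = nn.functional.mse_loss(pred, tgt)     # distance in embedding space
loss.backward()
print(f"embedding-space prediction loss: {loss.item():.4f}")
```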

AI-Driven Robots Are Rewriting The Factory Rulebook

Planning for a future of intelligent robots means thinking about how they might transform your industry, what it means for the future of work, and how it may change the relationship between humans and technology.

Leaders must consider the ethical issues of cognitive manufacturing, such as job disruption and displacement, accountability when things go wrong, and the use of surveillance technology when, for example, camera-equipped robots work alongside humans.

The cognitive industrial revolution, like the industrial revolutions before it, will transform almost every aspect of our world, and the change will happen faster and sooner than most expect. Consider for a moment: what will it take for each of us, and our organizations, to be ready for this future?

Engineers introduce human-like driving technology for autonomous vehicles

Self-driving cars will soon be able to “think” like human drivers under complex traffic environments, thanks to a cognitive encoding framework built by a multidisciplinary research team from the School of Engineering at the Hong Kong University of Science and Technology (HKUST).

This innovation significantly enhances the safety of autonomous vehicles (AVs), reducing overall traffic risk by 26.3% and cutting potential harm to high-risk road users, such as pedestrians and cyclists, by an impressive 51.7%. Even the AVs themselves benefited, with their risk levels lowered by 8.3%, paving the way for a new framework to advance the automation of vehicle safety.

Existing AVs have one common limitation: their decision-making systems can only make pairwise risk assessments, failing to holistically consider interactions among multiple road users. This contrasts with a proficient driver who, for example, can skillfully navigate an intersection by prioritizing pedestrian protection while slightly compromising the safety of nearby vehicles. Once pedestrians are confirmed to be safe, the driver can then shift focus to nearby vehicles. Such risk management ability exhibited by humans is known as “social sensitivity.”
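A toy sketch of what such holistic scoring might look like: aggregate the risk a candidate maneuver poses to every road user, weighted by vulnerability, instead of judging each pair in isolation. The weights and numbers below are invented for illustration and do not represent the HKUST framework’s actual encoding.

```python
# A minimal sketch of "social sensitivity": rather than scoring each
# AV-agent pair in isolation, sum risk across all road users with
# vulnerability weights, so pedestrians outweigh nearby cars.
# Weights and risk values are invented placeholders.

def weighted_risk(agents, weights):
    """Sum risk over all road users, scaled by how vulnerable each is."""
    return sum(weights[kind] * risk for kind, risk in agents)

VULNERABILITY = {"pedestrian": 3.0, "cyclist": 2.0, "car": 1.0}

# (agent type, estimated collision risk) for each candidate maneuver
maneuvers = {
    "yield_to_pedestrian": [("pedestrian", 0.01), ("car", 0.20)],
    "maintain_speed":      [("pedestrian", 0.30), ("car", 0.05)],
}

best = min(maneuvers, key=lambda m: weighted_risk(maneuvers[m], VULNERABILITY))
print(f"chosen maneuver: {best}")  # prioritizes pedestrian protection
```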

‘Optical neural engine’ can solve partial differential equations

Partial differential equations (PDEs) are a class of mathematical problems that represent the interplay of multiple variables, and therefore have predictive power when it comes to complex physical systems. Solving these equations is a perpetual challenge, however, and current computational techniques for doing so are time-consuming and expensive.

Now, research from the University of Utah’s John and Marcia Price College of Engineering is showing a way to speed up this process: encoding those equations in light and feeding them into their newly designed “optical neural engine,” or ONE.

The researchers’ ONE combines diffractive optical neural networks and optical matrix multipliers. Rather than representing PDEs digitally, the researchers encoded them optically, mapping variables onto properties of a light wave such as its intensity and phase. As the wave passes through the ONE’s series of optical components, those properties gradually shift and change until they ultimately represent the solution to the given PDE.
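The sketch below numerically mimics that principle: a complex field carries the problem variables in its amplitude and phase, and a few phase masks interleaved with a toy propagation step transform it layer by layer. The random masks stand in for the trained optical elements of the actual ONE.

```python
# A toy numerical sketch of the optical principle: encode an input as a
# complex field, then alternate phase masks with a Fresnel-like
# propagation step (modeled with FFTs). Random masks stand in for the
# trained diffractive elements of the real device.
import numpy as np

rng = np.random.default_rng(1)
n = 64
# Encode a 1D input (e.g., a PDE's initial condition) as a complex field.
amplitude = np.exp(-np.linspace(-3, 3, n) ** 2)   # a Gaussian bump
field = amplitude.astype(complex)

freqs = np.fft.fftfreq(n)
H = np.exp(-1j * np.pi * freqs ** 2 * 50)         # toy transfer function

for _ in range(3):                                # three "diffractive layers"
    mask = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # phase-only element
    field = np.fft.ifft(np.fft.fft(field * mask) * H)  # mask, then propagate

intensity, phase = np.abs(field) ** 2, np.angle(field)
print(intensity[:5])   # in the real ONE, this would encode the solution
```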

‘Link-bots’ can move, explore, cooperate without sensing or computation

Coordinated behaviors like swarming—from ant colonies to schools of fish—are found everywhere in nature. Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have given a nod to nature with a next-generation robot system that’s capable of movement, exploration, transport and cooperation.

A study in Science Advances describing the new soft robotic system was co-led by L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, Physics, and Organismic and Evolutionary Biology in SEAS and the Faculty of Arts and Sciences, in collaboration with Professor Ho-Young Kim at Seoul National University. Their work paves new directions for future, low-power swarm robotics.

The new robots, called link-bots, are composed of centimeter-scale, 3D-printed particles strung into V-shaped chains via notched links, and are capable of coordinated, lifelike movements without any embedded power or control systems. Each particle’s legs are tilted so that the bot self-propels when placed on a uniformly vibrating surface.
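A toy simulation of that mechanism, with invented parameters: each vibration cycle is rectified by the tilted legs into a small biased step, and link constraints keep neighboring particles together so the chain advances as a unit. This is an illustration only, not the SEAS team’s model.

```python
# A toy model of link-bot propulsion: tilted legs turn vertical vibration
# into a small forward step each cycle; notched links constrain neighbor
# spacing so the chain moves coherently. All parameters are invented.
import numpy as np

rng = np.random.default_rng(2)
n_bots, link_len, step = 7, 1.0, 0.02

x = np.arange(n_bots) * link_len      # particle positions along a line
start = x.mean()

for _ in range(500):                  # vibration cycles
    x += step + rng.normal(0, 0.005, n_bots)   # biased step per cycle
    # enforce link constraints: neighbors cannot stretch past link_len
    for i in range(1, n_bots):
        gap = x[i] - x[i - 1]
        if abs(gap) > link_len:
            x[i] = x[i - 1] + np.sign(gap) * link_len

print(f"chain advanced {x.mean() - start:.2f} body lengths over 500 cycles")
```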

Machine learning helps ease the jitters of high-power lasers

Researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have made a breakthrough in laser technology by using machine learning (ML) to help stabilize a high-power laser.

This advancement, spearheaded by Berkeley Lab’s Accelerator Technology & Applied Physics (ATAP) and Engineering Divisions, promises to accelerate progress in physics, medicine, and energy. The researchers report their work in the journal High Power Laser Science and Engineering.
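In the abstract, such feedback stabilization amounts to predicting the next shot’s drift from recent diagnostics and applying the negated prediction as a correction. The sketch below uses a simple least-squares trend fit as a stand-in for the lab’s ML model; the data and window size are invented.

```python
# A minimal sketch of learned jitter correction: fit a model on a window
# of recent diagnostics, predict the next shot's drift, and subtract it.
# A least-squares trend fit stands in for the actual ML model; the data
# are synthetic placeholders, not Berkeley Lab measurements.
import numpy as np

rng = np.random.default_rng(3)
drift = np.cumsum(rng.normal(0.01, 0.05, 200))   # slow wander + noise

window = 20
corrected = []
for t in range(window, len(drift)):
    hist = drift[t - window:t]
    coeffs = np.polyfit(np.arange(window), hist, deg=1)  # learn local trend
    predicted = np.polyval(coeffs, window)               # next-shot estimate
    corrected.append(drift[t] - predicted)               # apply correction

print(f"raw jitter std:       {np.std(drift):.3f}")
print(f"corrected jitter std: {np.std(corrected):.3f}")
```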
