AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.

Remember DeepMind’s AlphaGo Zero? It represented a fundamental breakthrough in AI development, because it was one of the first game-playing AIs that took no human instruction and studied no human games.

Instead, it used a technique called self-play reinforcement learning to build up its own understanding of the game. Pure trial and error across millions, even billions of virtual games, starting out more or less randomly pulling whatever levers were available, and attempting to learn from the results.
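The core loop described above can be sketched in miniature. The following is an illustrative toy, not DeepMind's method: a tabular Q-learning agent that plays both sides of the tiny game Nim (take 1–3 stones; whoever takes the last stone wins), learning purely from win/loss outcomes with no strategy hints. All names here are hypothetical.

```python
import random

ACTIONS = (1, 2, 3)  # how many stones a player may take per turn

def train(pile=8, episodes=30000, alpha=0.5, eps=0.2, seed=0):
    """Self-play Q-learning on Nim: one shared value table is updated
    from both sides of the board, using only the win/loss signal."""
    rng = random.Random(seed)
    Q = {}  # Q[(stones_left, action)] = value for the player to move
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        s = pile
        while s > 0:
            legal = [a for a in ACTIONS if a <= s]
            # epsilon-greedy: mostly exploit current knowledge, sometimes
            # pull a random lever, exactly the trial-and-error in the text
            if rng.random() < eps:
                a = rng.choice(legal)
            else:
                a = max(legal, key=lambda x: q(s, x))
            s2 = s - a
            # negamax target: taking the last stone wins (+1); otherwise our
            # value is minus the opponent's best value in the resulting state
            target = 1.0 if s2 == 0 else -max(q(s2, b) for b in ACTIONS if b <= s2)
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = s2
    return Q

def best_move(Q, s):
    return max((a for a in ACTIONS if a <= s), key=lambda a: Q.get((s, a), 0.0))
```

After training, the table recovers Nim's known optimal play from raw trial and error: any pile that is a multiple of 4 is lost for the mover, so the best move otherwise leaves a multiple of 4 (take 1 from 5, 2 from 6, 3 from 7). AlphaGo Zero applies the same self-play idea at vastly larger scale, with a neural network in place of the table.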

We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality. GameNGen can interactively simulate the classic game DOOM at over 20 frames per second on a single TPU. Next frame prediction achieves a PSNR of 29.4, comparable to lossy JPEG compression. Human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation. GameNGen is trained in two phases: an RL-agent learns to play the game and the training sessions are recorded, and a diffusion model is trained to produce the next frame, conditioned on the sequence of past frames and actions. Conditioning augmentations enable stable auto-regressive generation over long trajectories.
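The auto-regressive generation that the abstract describes can be sketched as a simple feedback loop; this is a schematic assumption about the structure, not GameNGen's code. `dummy_next_frame` is a hypothetical stand-in for the trained diffusion model, and a "frame" here is just an integer so the sketch runs.

```python
from collections import deque

def dummy_next_frame(frames, actions):
    # Stand-in for the conditioned diffusion model: any function mapping
    # (recent frames, recent actions) -> next frame. Real frames would be
    # image tensors produced by the denoiser.
    return frames[-1] + actions[-1]

def rollout(model, first_frame, player_actions, context=4):
    """Auto-regressive simulation: each predicted frame is appended to a
    sliding window and fed back as conditioning for the next prediction,
    together with the player's latest action."""
    frames = deque([first_frame], maxlen=context)   # bounded conditioning window
    actions = deque(maxlen=context)
    trajectory = [first_frame]
    for a in player_actions:
        actions.append(a)
        nxt = model(list(frames), list(actions))
        frames.append(nxt)      # the model's own output becomes future input
        trajectory.append(nxt)
    return trajectory
```

Because the model consumes its own outputs, small errors can compound over a long trajectory; the "conditioning augmentations" the abstract mentions (e.g. perturbing context frames during training) are what keep this feedback loop stable, and the sketch omits them.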

In a study published in Cell Reports Physical Science (“Electro-Active Polymer Hydrogels Exhibit Emergent Memory When Embodied in a Simulated Game-Environment”), a team led by Dr Yoshikatsu Hayashi demonstrated that a simple hydrogel — a type of soft, flexible material — can learn to play the 1970s computer game ‘Pong’. The hydrogel, interfaced with a computer simulation of the classic game via a custom-built multi-electrode array, showed improved performance over time.

Dr Hayashi, a biomedical engineer at the University of Reading’s School of Biological Sciences, said: “Our research shows that even very simple materials can exhibit complex, adaptive behaviours typically associated with living systems or sophisticated AI.

“This opens up exciting possibilities for developing new types of ‘smart’ materials that can learn and adapt to their environment.”

While large language models (LLMs) have demonstrated remarkable capabilities in extracting data and generating connected responses, there are real questions about how these artificial intelligence (AI) models reach their answers. At stake is the potential for unwanted bias and for nonsensical or inaccurate “hallucinations,” both of which can lead to false data.

That’s why SMU researchers Corey Clark and Steph Buongiorno are presenting a paper at the upcoming IEEE Conference on Games, scheduled for August 5–8 in Milan, Italy. There they will present their GAME-KG framework, short for “Gaming for Augmenting Metadata and Enhancing Knowledge Graphs.”

The research is published on the arXiv preprint server.