
Artificial intelligence researchers claim to have made the world’s first genuine scientific discovery using a large language model (LLM), the technology behind ChatGPT and similar programs – a potentially major breakthrough.

The discovery was made by Google DeepMind, an AI research laboratory where scientists are investigating whether LLMs can do more than just repackage information learned in training and actually generate new insights.

It turns out that they can, and the implications are potentially huge. DeepMind said in a blog post that its FunSearch, a method to search for new solutions in mathematics and computer science, made “the first discoveries in open problems in mathematical sciences using LLMs.”

No one knows who will get to a working quantum computer first. The United States and China are considered the leaders in the field; many experts believe America still holds an edge.

As the race to master quantum computing continues, a scramble is on to protect critical data. Washington and its allies are working on new encryption standards known as post-quantum cryptography – essentially codes that are much harder to crack, even for a quantum computer. Beijing is trying to pioneer quantum communications networks, a technology theoretically impossible to hack, according to researchers. The scientist spearheading Beijing’s efforts has become a minor celebrity in China.

Quantum computing is radically different. Conventional computers process information as bits – each either 1 or 0, one value at a time. Quantum computers use quantum bits, or “qubits,” which can be 1, 0 or a blend of both at the same time – a superposition, which physicists stress is only an approximate way of describing a complex mathematical concept.
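The bit-versus-qubit contrast can be made concrete with a few lines of Python that track a single qubit’s two complex amplitudes. This is an illustrative classical simulation, not quantum hardware, and the function names are mine:

```python
import math

# A qubit's state is a pair of complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, and |b|^2 of measuring 1.

def hadamard(state):
    """Apply a Hadamard gate, which puts a basis state into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1 + 0j, 0 + 0j)        # the definite state |0>, like a classical bit
superposed = hadamard(zero)    # an equal mix of |0> and |1>
p0, p1 = probabilities(superposed)
# Both outcomes are now equally likely: p0 = p1 = 0.5
```

A classical bit would always yield one definite answer here; the superposed qubit only resolves to 0 or 1 when measured, each with probability one half.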

The teacher shortage crisis is a major concern, casting a shadow on educational quality across the globe. In this academic climate, the rise of AI in the classroom sparks both hope and skepticism. Alpha school is leading the way, devoid of traditional teachers and reliant on its AI-powered curriculum and “guide” system. This innovative approach offers a glimpse of a promising future where technology and human ingenuity merge to redefine education.

AI has become a game-changer in education by customizing learning experiences according to students’ individual learning styles and paces. Alpha’s app-based tutoring system is a prime example of this. It is personalized for each student’s strengths and weaknesses, a significant departure from the traditional “one-size-fits-all” classroom approach. For instance, consider a child who struggles with math concepts. AI can modify the exercises and explanations to suit their learning style, enabling them to understand the material better.
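The adaptive idea behind such systems can be sketched in a few lines. This is not Alpha’s actual algorithm, which is not described in detail here; it only illustrates the general feedback loop of stepping difficulty up after correct answers and down after mistakes:

```python
# A hypothetical sketch of adaptive difficulty selection, not Alpha's actual
# system: raise difficulty after a correct answer, lower it after a mistake,
# keeping the student near the edge of their current ability.

def next_difficulty(current, was_correct, step=1, lo=1, hi=10):
    """Move difficulty up on success and down on failure, clamped to [lo, hi]."""
    current += step if was_correct else -step
    return max(lo, min(hi, current))

level = 5
for correct in [True, True, False, True]:   # a short answer history
    level = next_difficulty(level, correct)
# level is now 7: up twice, down once, then up again from 5
```

Real tutoring systems estimate ability statistically rather than with a single counter, but the closed loop of exercise, response, and adjustment is the same.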

Moreover, this AI-driven education system offers instant and detailed feedback, which may be lacking in some schools. Such immediate response fosters a deeper understanding and encourages a more engaged learning process. This level of individualized attention is a powerful tool for enhancing knowledge and engagement.

DeepMind’s FunSearch discovers new mathematical knowledge and algorithms.


Google DeepMind has made striking progress on a long-standing mathematical puzzle using a method called FunSearch.

The problem FunSearch tackled is the famous cap set problem in pure mathematics, an open question in combinatorics that has resisted leading human mathematicians; in certain dimensions, FunSearch discovered cap sets larger than any previously known.
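For readers unfamiliar with the problem: a cap set is a collection of points in n-dimensional space over the three-element field with no three distinct points on a line, which is equivalent to no three points summing to the zero vector mod 3. A short Python checker makes the definition concrete (the helper names are illustrative):

```python
from itertools import combinations

def is_cap_set(vectors, n):
    """Check that no three distinct vectors in F_3^n sum to zero mod 3.

    In F_3^n, three distinct points lie on a line exactly when their
    coordinate-wise sum is the zero vector mod 3, so a cap set is a set
    containing no such triple.
    """
    vecs = [tuple(v) for v in vectors]
    assert all(len(v) == n and all(x in (0, 1, 2) for x in v) for v in vecs)
    for a, b, c in combinations(vecs, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

# These four points form a cap set in F_3^2 (a maximum one there has size 4).
good = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Adding (2, 2) creates the line (0, 0), (1, 1), (2, 2), so the set fails.
bad = good + [(2, 2)]
```

Verifying a candidate set is easy; the hard part, where FunSearch made its contribution, is constructing large cap sets in high dimensions.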

For the first time ever, researchers show how a large language model can help discover novel solutions to long-standing problems in math and computer science.


The card game Set has long inspired mathematicians to create interesting problems.

Now, a technique based on large language models (LLMs) is showing that artificial intelligence (AI) can help mathematicians to generate new solutions.

The AI system, called FunSearch, made progress on Set-inspired problems in combinatorics, a field of mathematics that studies how to count the possible arrangements of sets containing finitely many objects. But its inventors say that the method, described in Nature on 14 December, could be applied to a variety of questions in maths and computer science.
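At a high level, FunSearch pairs an LLM, which proposes modified programs, with an automated evaluator that scores them, keeping the best candidates and iterating. That loop can be sketched on a toy problem. In the sketch below a random text mutation stands in for the LLM’s proposal step, and the evaluator scores candidate priority functions for a greedy knapsack heuristic; everything here is an illustrative stand-in, not DeepMind’s code:

```python
import random

# Schematic of a FunSearch-style loop: candidate programs (here, one-line
# priority functions kept as source text) are scored by an evaluator, the
# best are kept, and new variants are proposed. A random perturbation of the
# program text stands in for the LLM's "propose a modified program" step.

ITEMS = [(60, 10), (100, 20), (120, 30), (30, 5)]   # (value, weight) pairs
CAPACITY = 50

def evaluate(program_src):
    """Score a candidate priority function by running greedy knapsack with it."""
    priority = eval("lambda value, weight: " + program_src)
    order = sorted(ITEMS, key=lambda it: priority(*it), reverse=True)
    total_value = total_weight = 0
    for value, weight in order:
        if total_weight + weight <= CAPACITY:
            total_value += value
            total_weight += weight
    return total_value

def mutate(program_src, rng):
    """Stand-in for the LLM: perturb the constant inside the program text."""
    c = float(program_src.rsplit(" ", 1)[-1].rstrip(")"))
    return f"value / (weight + {c + rng.uniform(-2, 2):.3f})"

rng = random.Random(0)
best = "value / (weight + 5.000)"          # initial hand-written program
best_score = evaluate(best)
for _ in range(200):                       # propose, evaluate, keep the best
    candidate = mutate(best, rng)
    score = evaluate(candidate)
    if score >= best_score:                # >= lets the search drift across plateaus
        best, best_score = candidate, score
```

The real system searches over whole program bodies with an LLM and a database of diverse candidates, which is what lets it escape the tiny search space this toy explores.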

There is no computer even remotely as powerful and complex as the human brain. The lumps of tissue ensconced in our skulls can process information at quantities and speeds that computing technology can barely touch.

Key to the brain’s success is the neuron’s efficiency in serving as both a processor and memory device, in contrast to the physically separated units in most modern computing devices.

There have been many attempts to make computing more brain-like, but a new effort takes things a step further – by integrating real human brain tissue with electronics.

The mini-brain functioned like both the central processing unit and memory storage of a supercomputer. It received input in the form of electrical zaps and outputted its calculations through neural activity, which was subsequently decoded by an AI tool.

When trained on soundbites from a pool of people—transformed into electrical zaps—Brainoware eventually learned to pick out the “sounds” of specific people. In another test, the system successfully tackled a complex math problem that’s challenging for AI.
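The setup described resembles what computer scientists call a reservoir computer: a fixed, complex dynamical system transforms inputs into rich internal activity, and only a simple external readout is trained. A rough sketch of that idea, with a small random recurrent network standing in for the organoid and two artificial “speakers” distinguished by pulse timing (illustrative only, not the actual Brainoware pipeline):

```python
import math
import random

# Reservoir-computing sketch: a fixed random recurrent network (standing in
# for the brain organoid) turns input pulse trains into internal states, and
# only a simple nearest-centroid readout is trained on those states.

rng = random.Random(42)
N = 20                                           # "neurons" in the reservoir
W = [[rng.uniform(-0.4, 0.4) for _ in range(N)] for _ in range(N)]
W_in = [rng.uniform(-1, 1) for _ in range(N)]

def run_reservoir(signal):
    """Drive the fixed reservoir with an input signal; return its final state."""
    state = [0.0] * N
    for u in signal:
        state = [math.tanh(W_in[i] * u + sum(W[i][j] * state[j] for j in range(N)))
                 for i in range(N)]
    return state

def centroid(states):
    return [sum(s[i] for s in states) / len(states) for i in range(N)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sample(speaker):
    """Two 'speakers': slow vs fast pulse trains, with slight amplitude jitter."""
    period = 4 if speaker == 0 else 2
    return [(1.0 + rng.uniform(-0.1, 0.1)) * (1 if t % period == 0 else 0)
            for t in range(24)]

train = {s: [run_reservoir(sample(s)) for _ in range(5)] for s in (0, 1)}
centroids = {s: centroid(train[s]) for s in (0, 1)}

def classify(signal):
    """Trained readout: assign the reservoir's final state to the nearest centroid."""
    state = run_reservoir(signal)
    return min(centroids, key=lambda s: distance(state, centroids[s]))
```

In Brainoware the reservoir is living tissue whose connections themselves adapt, which is precisely what a fixed random network like this one cannot capture.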

The system’s ability to learn stemmed from changes to neural network connections in the mini-brain—which is similar to how our brains learn every day. Although just a first step, Brainoware paves the way for increasingly sophisticated hybrid biocomputers that could lower energy costs and speed up computation.

“In the study, we demonstrate how artificial intelligence can be used to carry out fundamental theoretical physics that addresses the behavior of fluids and other complex soft matter systems,” says Prof. Dr. Matthias Schmidt, chair of Theoretical Physics II at the University of Bayreuth.


Scientists from Bayreuth have developed a new method for studying liquid and soft matter using artificial intelligence. In a study now published in the Proceedings of the National Academy of Sciences, they open up a new chapter in density functional theory.
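At its core, the approach trains a machine-learning model on simulation data to represent a functional map from a fluid’s density profile to a local correlation quantity. A minimal stand-in sketch of that idea, using a linear model fit by gradient descent on synthetic data generated from a known rule (illustrative only; not the published method, network architecture, or data):

```python
import random

# Sketch of "learning a functional from data": the model maps a local window
# of a density profile to a local correlation value. The ground truth here is
# a made-up smoothing rule, and the model is a plain linear fit trained by
# gradient descent; both are stand-ins for the paper's neural network.

rng = random.Random(1)
WINDOW = 5                                    # local density window fed to the model

def true_functional(window):
    """Synthetic ground truth: a fixed weighted average of the local density."""
    weights = [0.1, 0.2, 0.4, 0.2, 0.1]
    return sum(w * d for w, d in zip(weights, window))

# Training pairs: (density window) -> (local correlation value).
data = []
for _ in range(200):
    window = [rng.uniform(0.0, 1.0) for _ in range(WINDOW)]
    data.append((window, true_functional(window)))

params = [0.0] * WINDOW                       # learned weights over the window

def loss():
    return sum((sum(p * d for p, d in zip(params, w)) - y) ** 2
               for w, y in data) / len(data)

initial = loss()
lr = 0.05
for _ in range(500):                          # plain gradient descent
    grads = [0.0] * WINDOW
    for w, y in data:
        err = sum(p * d for p, d in zip(params, w)) - y
        for i in range(WINDOW):
            grads[i] += 2 * err * w[i] / len(data)
    params = [p - lr * g for p, g in zip(params, grads)]
final = loss()                                # far below the starting loss
```

Once such a map is learned, evaluating it is much cheaper than rerunning a full simulation, which is where the promised speed-ups for studying complex substances come from.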

We live in a highly technologized world, a dense and complex web of interrelationships and interdependencies in which basic research is the engine of innovation. The published research provides new methods that could influence widespread simulation techniques, allowing complex substances to be investigated on computers more quickly, more precisely and in greater depth.

In the future, this could influence product and process design. That the structure of liquids can be represented so well by the newly formulated neural-network relationships is a major advance, opening up a range of possibilities for gaining deep physical insights.