
DeepMind’s FunSearch discovers new mathematical knowledge and algorithms.


Google DeepMind has cracked an age-old mathematical mystery using a method called FunSearch.

The math problem that FunSearch has solved is the famous cap set problem in pure mathematics, which has stumped even the brightest human mathematicians.
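For context, a cap set is a collection of points in the n-dimensional space over the three-element field with no three distinct points on a line; in that space, three distinct points are collinear exactly when they sum to zero coordinate-wise mod 3. A minimal sketch of checking that property in Python (the helper name and example points are illustrative, not taken from FunSearch):

```python
from itertools import combinations

def is_cap_set(points):
    """Return True if no three distinct points are collinear in F_3^n.

    In F_3^n, three distinct points a, b, c lie on a line exactly
    when a + b + c == 0 componentwise mod 3.
    """
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

# In F_3^2 these four points form a cap set (4 is the maximum for n = 2):
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))  # True
# Three points on a diagonal line fail the check:
print(is_cap_set([(0, 0), (1, 1), (2, 2)]))          # False
```

FunSearch's contribution was searching for programs that construct unusually large cap sets in high dimensions; a brute-force checker like this only verifies candidates.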

US Customs and Border Protection (CBP) recently awarded Pangiam, a leading trade and travel technology company, a prime contract for developing and implementing Anomaly Detection Algorithms (ADA).

Pangiam, in collaboration with West Virginia University, aims to bring cutting-edge artificial intelligence (AI), computer vision, and machine learning expertise to enhance CBP’s border and national security missions, the company announced in a press release.

Tiny robots made from human windpipe cells encouraged damaged neural tissue to repair itself in a lab experiment — potentially foreshadowing a future in which creations like this patrol our bodies, healing damage, delivering drugs, and more.

The background: In a study published in 2020, researchers at Tufts University and the University of Vermont (UVM) harvested and incubated skin cells from frog embryos until they were tiny balls.

They then sculpted the spheres into specific shapes — dictated by an algorithm — and added layers of cardiac stem cells to them in precise locations.

Is it possible to invent a computer that computes anything in a flash? Or could some problems stump even the most powerful of computers? How complex is too complex for computation? The question of how hard a problem is to solve lies at the heart of an important field of computer science called computational complexity. Computational complexity theorists want to know which problems are practically solvable using clever algorithms and which problems are truly difficult, maybe even virtually impossible, for computers to crack. This hardness is central to what’s called the P versus NP problem, one of the most difficult and important questions in all of math and science.
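The asymmetry at the heart of P versus NP is that checking a proposed solution can be fast even when finding one seems to require exhaustive search. A small illustration using subset sum (the function names and numbers are my own, chosen for illustration):

```python
from itertools import combinations

def verify(nums, target, indices):
    """Checking a certificate (a set of indices) takes linear time."""
    return sum(nums[i] for i in indices) == target

def solve_brute_force(nums, target):
    """Finding a certificate by exhaustive search may try all 2^n subsets."""
    n = len(nums)
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            if verify(nums, target, idx):
                return idx
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve_brute_force(nums, 9))  # (2, 4): nums[2] + nums[4] = 4 + 5 = 9
```

Whether every problem whose solutions can be verified this quickly can also be *solved* quickly is precisely the open P versus NP question.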

This video covers a wide range of topics including: the history of computer science, how transistor-based electronic computers solve problems using Boolean logical operations and algorithms, what is a Turing Machine, the different classes of problems, circuit complexity, and the emerging field of meta-complexity, where researchers study the self-referential nature of complexity questions.

Featuring computer scientist Scott Aaronson (full disclosure, he is also a member of the Quanta Magazine Board). Check out his blog: https://scottaaronson.blog/

Read the companion article about meta-complexity at Quanta Magazine: https://www.quantamagazine.org/complexity-theorys-50-year-jo…-20230817/



In the realm of computing technology, there is nothing quite as powerful and complex as the human brain. With its 86 billion neurons and up to a quadrillion synapses, the brain has unparalleled capabilities for processing information. Unlike traditional computing devices with physically separated units, the brain’s efficiency lies in its ability to serve as both a processor and memory device. Recognizing the potential of harnessing the brain’s power, researchers have been striving to create more brain-like computing systems.

Efforts to mimic the brain’s activity in artificial systems have been ongoing, but progress has been limited. Even one of the most powerful supercomputers in the world, Riken’s K Computer, struggled to simulate just a fraction of the brain’s activity. With its 82,944 processors and a petabyte of main memory, it took 40 minutes to simulate just one second of the activity of 1.73 billion neurons connected by 10.4 trillion synapses. This represented only one to two percent of the brain’s capacity.

In recent years, scientists and engineers have delved into the realm of neuromorphic computing, which aims to replicate the brain’s structure and functionality. By designing hardware and algorithms that mimic the brain, researchers hope to overcome the limitations of traditional computing and improve energy efficiency. However, despite significant progress, neuromorphic computing still poses challenges, such as high energy consumption and time-consuming training of artificial neural networks.

In recent years, roboticists and computer scientists have introduced various new computational tools that could improve interactions between robots and humans in real-world settings. The overarching goal of these tools is to make robots more responsive and attuned to the users they are assisting, which could in turn facilitate their widespread adoption.

Researchers at Leonardo Labs and the Italian Institute of Technology (IIT) in Italy recently introduced a new computational framework that allows robots to recognize specific users and follow them around within a given environment. This framework, introduced in a paper published as part of the 2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), allows robots to re-identify users in their surroundings, while also performing specific actions in response to actions performed by the users.

“We aimed to create a ground-breaking demonstration to attract stakeholders to our laboratories,” Federico Rollo, one of the researchers who carried out the study, told Tech Xplore. “The Person-Following robot is a prevalent application found in many commercial mobile robots, especially in industrial environments or for assisting individuals. Typically, such algorithms use external Bluetooth or Wi-Fi emitters, which can interfere with other sensors and which the user is required to carry.”

In the ever-evolving landscape of artificial intelligence, a seismic shift is unfolding at OpenAI, and it involves more than just lines of code. The reported ‘superintelligence’ breakthrough has sent shockwaves through the company, pushing the boundaries of what we thought was possible and raising questions that extend far beyond the realm of algorithms.

Imagine a breakthrough so monumental that it threatens to dismantle the very fabric of the company that achieved it. OpenAI, the trailblazer in artificial intelligence, finds itself at a crossroads, dealing not only with technological advancement but also with the profound ethical and existential implications of its own creation – ‘superintelligence.’

The Breakthrough that Nearly Broke OpenAI: The Information’s revelation about a Generative AI breakthrough, capable of unleashing ‘superintelligence’ within this decade, sheds light on the internal disruption at OpenAI. Spearheaded by Chief Scientist Ilya Sutskever, the breakthrough challenges conventional AI training, allowing machines to solve problems they’ve never encountered by reasoning with cleaner and computer-generated data.

EPFL researchers have developed an algorithm to train an analog neural network just as accurately as a digital one, enabling the development of more efficient alternatives to power-hungry deep learning hardware.

With their ability to process vast amounts of data through algorithmic ‘learning’ rather than traditional programming, it often seems like the potential of deep neural networks like ChatGPT is limitless. But as the scope and impact of these systems have grown, so have their size, complexity, and energy demands, the latter of which is significant enough to raise concerns about contributions to global carbon emissions.

While we often think of technological progress in terms of shifting from analog to digital, researchers are now looking for answers to this problem in physical alternatives to digital deep neural networks. One such researcher is Romain Fleury of EPFL’s Laboratory of Wave Engineering in the School of Engineering.