The operation of a quantum computer relies on encoding and processing information in the form of quantum bits—defined by two states of quantum systems such as electrons and photons. Unlike binary bits used in classical computers, quantum bits can exist in a combination of zero and one simultaneously—in principle allowing them to perform certain calculations exponentially faster than today’s largest supercomputers.
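To make the superposition idea a bit more concrete, here is a minimal sketch in Python (using NumPy; the gate and state-vector conventions are standard textbook ones, not tied to any particular machine): a qubit starts in the zero state, a Hadamard gate puts it into an equal mix of zero and one, and repeated simulated measurements come out roughly half-and-half.

```python
import numpy as np

# A single qubit is a normalized 2-component complex vector: alpha|0> + beta|1>.
ket0 = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0              # state after the gate
probs = np.abs(psi) ** 2    # Born rule: probabilities of measuring 0 or 1

print("amplitudes:", psi)        # roughly [0.707, 0.707]
print("P(0), P(1):", probs)      # [0.5, 0.5]

# Sampling repeated measurements gives roughly a 50/50 split of 0s and 1s.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print("measured 1s out of 1000:", samples.sum())
```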
Even the best AI large language models (LLMs) fail dramatically when it comes to simple logical questions. This is the conclusion of researchers from the Jülich Supercomputing Center (JSC), the School of Electrical and Electronic Engineering at the University of Bristol and the LAION AI laboratory.
In their paper posted to the arXiv preprint server, titled “Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models,” the scientists attest to a “dramatic breakdown of function and reasoning capabilities” in the tested state-of-the-art LLMs and suggest that although language models have the latent ability to perform basic reasoning, they cannot access it robustly and consistently.
The authors of the study—Marianna Nezhurina, Lucia Cipolina-Kun, Mehdi Cherti and Jenia Jitsev—call on “the scientific and technological community to stimulate urgent re-assessment of the claimed capabilities of the current generation of LLMs.” They also call for the development of standardized benchmarks to uncover weaknesses in the basic reasoning capabilities of language models, since current tests have apparently failed to expose them.
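To give a sense of how simple the failing tasks are: the paper’s central example is a family-relations puzzle of the form “Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?” The toy generator below (plain Python, no model calls; the helper name and the number ranges are invented here for illustration) produces such questions together with their ground-truth answer, which is what an evaluation harness would compare a model’s reply against.

```python
import random

def aiw_question(n_brothers: int, n_sisters: int) -> tuple[str, int]:
    """Build one 'Alice in Wonderland'-style question and its correct answer.

    Each of Alice's brothers has n_sisters + 1 sisters: Alice's sisters plus
    Alice herself. Getting this right takes only one extra common-sense step.
    """
    question = (
        f"Alice has {n_brothers} brothers and she also has {n_sisters} sisters. "
        "How many sisters does Alice's brother have?"
    )
    return question, n_sisters + 1

rng = random.Random(0)
for _ in range(3):
    q, answer = aiw_question(rng.randint(1, 6), rng.randint(1, 6))
    print(q, "->", answer)
```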
Researchers from the RIKEN Center for Computational Science (Japan) and the Max Planck Institute for Evolutionary Biology (Germany) have published new findings on how social norms evolve over time. They simulated how norms promote different social behavior, and how the norms themselves come and go. Because of the enormous number of possible norms, these simulations were run on RIKEN’s Fugaku, one of the fastest supercomputers worldwide.
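The study’s actual models, and the enormous space of norms it explores on Fugaku, are far richer than anything that fits here, but as a purely illustrative sketch of what “simulating a social norm” can mean, the Python snippet below runs a toy donation game under a simple image-scoring-style rule (helping is judged good, refusing to help is judged bad) and reports how often cooperation occurs. All names and parameters are invented for illustration and are not taken from the paper.

```python
import random

def simulate_norm(n_agents=100, rounds=20_000, error=0.05, seed=0):
    """Toy donation game under an image-scoring-like reputation norm.

    Illustrative only: donors intend to help only recipients in good standing,
    occasionally slip up, and the norm then judges the donor by what they did.
    """
    rng = random.Random(seed)
    good = [True] * n_agents          # everyone starts with a good reputation
    cooperations = 0

    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        intends_to_help = good[recipient]
        helps = intends_to_help and rng.random() > error  # occasional execution errors
        if helps:
            cooperations += 1
        good[donor] = helps           # the norm: helping is good, not helping is bad

    return cooperations / rounds

print("cooperation rate:", simulate_norm())
```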
Scientists hope to accelerate the development of human-level AI using a network of powerful supercomputers — with the first of these machines fully operational by 2025.
Using supercomputers and satellite imagery, researchers have shown our planet “breathing.”
Researchers develop energy-efficient supercomputing with neural networks and charge density waves.
Researchers are creating efficient systems using neural networks and charge density waves to reduce supercomputing’s massive energy use.
As we have alluded to numerous times when talking about the next “AI” trade, data centers will be the “factories of the future” in the age of AI.
That’s the contention of Chris Miller, the author of Chip War, who penned a recent opinion column for the Financial Times noting that ‘chip wars’ could very soon become ‘cloud wars.’
He points out that the strategic use of high-powered computing dates back to the Cold War, when the US allowed the USSR limited access to supercomputers for weather forecasting but not for nuclear simulations.
Today’s supercomputers consume vast amounts of energy, equivalent to the power usage of thousands of homes. In response, researchers are developing a more energy-efficient form of next-generation supercomputing that leverages artificial neural networks.
Neuromorphic computers are devices that try to achieve reasoning capability by emulating the human brain. They use a different type of computer architecture, one that copies the physical characteristics and design principles of biological nervous systems. Although neuromorphic computations can be emulated on classical computers, doing so is very inefficient, so new, purpose-built hardware is typically required.
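To make “copying the design principles of biological nervous systems” more concrete, here is a minimal sketch in Python of the leaky integrate-and-fire neuron model that spiking, neuromorphic-style systems are typically built around (the parameters and units below are arbitrary, chosen only for illustration): a membrane voltage leaks toward rest, integrates incoming current, and emits a discrete spike when it crosses a threshold. Stepping many such neurons forward like this on a conventional CPU is exactly the kind of workload that dedicated neuromorphic hardware handles far more efficiently.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (illustrative parameters, arbitrary units).

    The membrane potential decays toward v_rest with time constant tau,
    integrates the input current, and fires a spike (then resets) whenever
    it crosses v_threshold.
    """
    v = v_rest
    spikes = []
    for i in input_current:
        dv = (-(v - v_rest) + i) * (dt / tau)   # leak plus integration of input
        v += dv
        if v >= v_threshold:
            spikes.append(True)
            v = v_reset                         # reset after firing
        else:
            spikes.append(False)
    return np.array(spikes)

# Constant drive above threshold produces a regular spike train.
current = np.full(200, 1.5)
spike_train = lif_neuron(current)
print("spikes emitted:", int(spike_train.sum()))
```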
The first neuromorphic computer at the scale of a full human brain is about to come online. It’s called DeepSouth, and it is due to be completed at Western Sydney University in April 2024. This computer should enable new research into how our brain actually functions, potentially leading to breakthroughs in how AI is created.
One important characteristic of this neuromorphic computer is that it’s constructed out of commodity hardware, specifically FPGAs. That makes it much easier for other organizations to copy the design, and it means that once AI starts self-improving, it could probably build new iterations of hardware quite easily: rather than having to build factories from the ground up, it could reuse the existing digital-technology infrastructure. This might have implications for how quickly we develop AGI, and how quickly superintelligence arises.