
Today, Lawrence Livermore National Lab (LLNL) and IBM announced the development of a new Scale-up Synaptic Supercomputer (NS16e) that highly integrates 16 TrueNorth chips in a 4×4 array to deliver 16 million neurons and 4 billion synapses. LLNL will also receive an end-to-end software ecosystem that consists of a simulator; a programming language; an integrated programming environment; a library of algorithms and applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.
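For a rough sense of the scale-up arithmetic, here is a minimal sketch that tallies the capacity of the 4×4 board, assuming the commonly cited per-chip TrueNorth figures of roughly 1 million neurons and 256 million synapses (the per-chip numbers are assumptions drawn from IBM's published chip specs, not from this announcement).

```python
# Back-of-envelope tally of the NS16e scale-up. Per-chip figures are the
# commonly cited TrueNorth specs (assumed here, not stated in the
# announcement above): ~1 million neurons and ~256 million synapses.

NEURONS_PER_CHIP = 1_000_000      # assumed per-chip neuron count
SYNAPSES_PER_CHIP = 256_000_000   # assumed per-chip synapse count

rows, cols = 4, 4                 # the NS16e arranges its chips in a 4x4 array
chips = rows * cols

total_neurons = chips * NEURONS_PER_CHIP
total_synapses = chips * SYNAPSES_PER_CHIP

print(f"{chips} chips -> {total_neurons:,} neurons, {total_synapses:,} synapses")
# 16 chips -> 16,000,000 neurons, 4,096,000,000 (~4 billion) synapses
```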

The $1 million computer has 16 IBM microprocessors designed to mimic the way the brain works.

IBM says it will be five to seven years before TrueNorth sees widespread commercial use, but the Lawrence Livermore test is a big step in that direction.

Read more

Deep neural networks (DNNs) can be taught nearly anything, including how to beat us at our own games. The problem is that training AI systems ties up big-ticket supercomputers or data centers for days at a time. Scientists from IBM’s T.J. Watson Research Center think they can cut the horsepower and learning times drastically using “resistive processing units,” theoretical chips that combine CPU and non-volatile memory. Those could accelerate data speeds exponentially, resulting in systems that can do tasks like “natural speech recognition and translation between all world languages,” according to the team.

So why does it take so much computing power and time to teach AI? The problem is that modern neural networks like Google’s DeepMind or IBM Watson must perform billions of tasks in parallel. That requires numerous CPU memory calls, which quickly add up over billions of cycles. The researchers debated using new storage tech like resistive RAM, which can permanently store data at DRAM-like speeds. However, they eventually came up with the idea for a new type of chip called a resistive processing unit (RPU) that puts large amounts of resistive RAM directly onto a CPU.
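To get a feel for why that memory traffic matters, here is a purely illustrative estimate of the data shuffled between DRAM and the processor during training; the network size, precision, and step count below are assumptions chosen for the example, not figures from the IBM work.

```python
# Rough, illustrative estimate of off-chip memory traffic during training.
# All sizes are assumptions for the example, not figures from the IBM RPU
# research.

BYTES_PER_WEIGHT = 4          # 32-bit floating-point weights
WEIGHTS = 100_000_000         # assumed 100-million-parameter network
STEPS = 1_000_000             # assumed number of training steps

# Each step at minimum reads every weight for the forward/backward pass
# and writes it back for the update: two full traversals of the array.
bytes_per_step = 2 * WEIGHTS * BYTES_PER_WEIGHT
total_bytes = bytes_per_step * STEPS

print(f"~{bytes_per_step / 1e9:.1f} GB moved per step, "
      f"~{total_bytes / 1e12:.0f} TB over the run")
# ~0.8 GB per step, ~800 TB in total -- traffic an RPU design avoids by
# keeping the weights in resistive memory co-located with the compute.
```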

Read more

Quicker time to discovery. That’s what scientists focused on quantum chemistry are looking for. According to Bert de Jong, who leads the Computational Chemistry, Materials and Climate Group in the Computational Research Division at Lawrence Berkeley National Lab (LBNL), “I’m a computational chemist working extensively with experimentalists doing interdisciplinary research. To shorten time to scientific discovery, I need to be able to run simulations at near-real-time, or at least overnight, to drive or guide the next experiments.” The HPC software used in quantum chemistry research must be updated to take advantage of advanced HPC systems and meet the research needs of scientists both today and in the future.

NWChem is a widely used open-source computational chemistry package that includes both quantum chemical and molecular dynamics functionality. The NWChem project started around the mid-1990s, and the code was designed from the beginning to take advantage of parallel computer systems. NWChem is actively developed by a consortium of developers and maintained by the Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL) in Washington State. NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large computational chemistry problems efficiently and in their use of available parallel computing resources, from high-performance supercomputers to conventional workstation clusters.

“Rapid evolution of the computational hardware also requires significant effort geared toward the modernization of the code to meet current research needs,” states Karol Kowalski, Capability Lead for NWChem Development at PNNL.
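One way to see why code modernization is unavoidable: any serial fraction left in a code caps the speedup that additional processors can deliver, no matter how large the machine. The sketch below applies the standard Amdahl's-law bound; the serial fractions and processor counts are illustrative assumptions, not NWChem measurements.

```python
# Generic Amdahl's-law bound on parallel speedup. The serial fractions
# and processor counts are illustrative assumptions, not NWChem data.

def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Upper bound on speedup when `serial_fraction` of the work
    cannot be spread across `workers` processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

for serial in (0.01, 0.001):               # 1% vs. 0.1% serial code
    for procs in (100, 10_000, 1_000_000):
        print(f"serial={serial:.1%}  procs={procs:>9,}  "
              f"speedup={amdahl_speedup(serial, procs):9,.0f}x")
# Even 1% of serial code limits a million-processor machine to ~100x,
# which is why legacy kernels have to be reworked for modern HPC systems.
```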

Read more

A few years ago, researchers from Germany and Japan were able to simulate one percent of human brain activity for a single second. It took the processing power of one of the world’s most powerful supercomputers to make that happen.
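For context, the widely reported figures from that 2013 run (roughly 1% of the brain's neuronal network, about 40 minutes of machine time per simulated second on the K computer) allow a back-of-envelope extrapolation; the numbers below come from press coverage of the study, not from this article, and the linear-scaling assumption is optimistic.

```python
# Back-of-envelope extrapolation from the widely reported 2013 figures:
# ~1% of the brain's neuronal network, ~40 minutes of wall time per
# simulated second. Treat the result as a rough estimate only.

FRACTION_SIMULATED = 0.01     # ~1% of the brain's network was modeled
WALL_SECONDS = 40 * 60        # ~40 minutes of machine time
SIMULATED_SECONDS = 1         # for 1 second of biological activity

slowdown = WALL_SECONDS / SIMULATED_SECONDS   # ~2,400x slower than real time
scale_up = 1.0 / FRACTION_SIMULATED           # ~100x more neurons and synapses

print(f"Whole-brain, real-time simulation would need roughly "
      f"{slowdown * scale_up:,.0f}x that machine's effective throughput")
# ~240,000x -- assuming, optimistically, that cost scales linearly with size.
```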

Hands down, the human brain is the most powerful, most energy-efficient computer ever created.

So what if we could harness the power of the human brain by using actual brain cells to power the next generation of computers?

As crazy as it sounds, that’s exactly what neuroscientist Osh Agabi is building. Koniku, Agabi’s startup, has developed a prototype 64-neuron silicon chip.

Read more

Solving the plasma turbulence mystery.


Cutting-edge simulations run at Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC) over a two-year period are helping physicists better understand what influences the behavior of the plasma turbulence that is driven by the intense heating necessary to create fusion energy. This research has yielded exciting answers to long-standing questions about plasma heat loss that have previously stymied efforts to predict the performance of fusion reactors and could help pave the way for this alternative energy source.

The key to making fusion work is to maintain a sufficiently high temperature and density to enable the atoms in the reactor to overcome their mutual repulsion and bind to form helium. But one side effect of this process is turbulence, which can increase the rate of plasma heat loss, significantly limiting the resulting energy output. So researchers have been working to pinpoint both what causes the turbulence and how to control or possibly eliminate it.
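The “sufficiently high temperature and density” condition is usually summarized by the fusion triple product n·T·τ_E, where τ_E is the energy confinement time that turbulence erodes. The check below uses the commonly quoted deuterium-tritium threshold of roughly 3×10^21 keV·s/m³ with assumed, illustrative plasma parameters, not values from the NERSC simulations.

```python
# Illustrative check of the D-T fusion triple product n * T * tau_E
# against the commonly quoted ignition threshold of ~3e21 keV*s/m^3.
# The plasma parameters are assumed example values, not NERSC results.

THRESHOLD = 3e21       # keV * s / m^3, approximate D-T ignition criterion

n = 1.0e20             # assumed ion density, particles per m^3
T = 15.0               # assumed plasma temperature, keV
tau_E = 2.0            # assumed energy confinement time, seconds

product = n * T * tau_E
verdict = "meets" if product >= THRESHOLD else "falls short of"
print(f"n*T*tau_E = {product:.1e} keV*s/m^3 ({verdict} ~3e21)")

# If turbulence doubles the rate of heat loss, tau_E is roughly halved
# and the same plasma no longer satisfies the criterion:
print(f"with tau_E halved: {n * T * (tau_E / 2):.1e} keV*s/m^3")
```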

Because fusion reactors are extremely complex and expensive to design and build, supercomputers have been used for more than 40 years to simulate reactor conditions and inform better designs. NERSC is a Department of Energy Office of Science User Facility that has supported fusion research since 1974.

In what at first appears to be a storyline ripped from a sci-fi thriller, a multinational research team spread across two continents and four countries has spent ten years creating a model of a supercomputer that runs on the same substance that living things use as an energy source.

Humans and virtually all living things rely on adenosine triphosphate (ATP) to provide the energy our cells need to perform daily functions. The biological computer created by the team led by Professor Dan Nicolau, Chair of the Department of Bioengineering at McGill, also relies on ATP for power.

The biological computer can process information very quickly and accurately using parallel networks, much like contemporary massive electronic supercomputers. In addition, the model is much smaller, uses relatively little energy, and functions using proteins that are present in all living cells.
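Press coverage of the work describes the demonstration device as encoding a small instance of the subset sum problem, with ATP-driven protein filaments splitting at each junction of the channel network so that every candidate answer is explored at once. For comparison, here is a conventional-software sketch of that same combinatorial search; the specific set {2, 5, 9} is taken from that coverage and is illustrative only.

```python
# Conventional-software sketch of the combinatorial search that the
# biological computer reportedly explores in parallel: enumerating the
# subset sums of a small set. The instance {2, 5, 9} is illustrative.

from itertools import combinations

def subset_sums(values):
    """Map every achievable sum to the subsets that produce it."""
    sums = {}
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            sums.setdefault(sum(combo), []).append(combo)
    return sums

for total, subsets in sorted(subset_sums([2, 5, 9]).items()):
    print(f"{total:>2}: {subsets}")

# A sequential CPU works through the 2^n subsets one after another; the
# ATP-powered device sends agents down every branch of the network at
# once, so all candidate sums are computed in parallel.
```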

Read more