
Archive for the ‘supercomputing’ category: Page 81

Apr 28, 2016

Exploring phosphorene, a promising new material

Posted in categories: particle physics, supercomputing

RPI’s new material takes semiconducting transistors to new levels.


Two-dimensional phosphorus, a material known as phosphorene, has potential application as a material for semiconducting transistors in ever faster and more powerful computers. But there’s a hitch. Many of the useful properties of this material, like its ability to conduct electrons, are anisotropic, meaning they vary depending on the orientation of the crystal. Now, a team including researchers at Rensselaer Polytechnic Institute (RPI) has developed a new method to quickly and accurately determine that orientation using the interactions between light and electrons within phosphorene and other atoms-thick crystals of black phosphorus.

Phosphorene—a single layer of phosphorus atoms—was isolated for the first time in 2014, allowing physicists to begin exploring its properties experimentally and theoretically. Vincent Meunier, head of the Rensselaer Department of Physics, Applied Physics, and Astronomy and a leader of the team that developed the new method, published his first paper on the material—confirming the structure of phosphorene—that same year.

“This is a really interesting material because, depending on which direction you do things, you have completely different properties,” said Meunier, a member of the Rensselaer Center for Materials, Devices, and Integrated Systems (cMDIS). “But because it’s such a new material, it’s essential that we begin to understand and predict its intrinsic properties.”

Continue reading “Exploring phosphorene, a promising new material” »

Apr 27, 2016

Troubled Times Ahead for Supercomputers

Posted in categories: biotech/medical, information science, military, supercomputing

Supercomputer facing problems?


In the world of High Performance Computing (HPC), supercomputers represent the peak of capability, with performance measured in petaFLOPS (10¹⁵ floating-point operations per second). They play a key role in climate research, drug research, oil and gas exploration, cryptanalysis, and nuclear weapons development. But after decades of steady improvement, changes are coming as old technologies start to run into fundamental problems.

When you’re talking about supercomputers, a good place to start is the TOP500 list. Published twice a year, it ranks the world’s fastest machines based on their performance on the Linpack benchmark, which solves a dense system of linear equations using double-precision (64-bit) arithmetic.
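
To make that concrete, here is a minimal single-node sketch in Python of what Linpack measures: time the solution of a dense 64-bit linear system, then convert the standard operation count into FLOPS. The real HPL benchmark is a distributed C/MPI code, so the matrix size and setup here are illustrative assumptions only:

```python
# Single-node sketch of the Linpack measurement: solve a dense double-precision
# system Ax = b, then convert the standard operation count into FLOPS.
# (The real HPL benchmark is a distributed C/MPI code; n is an arbitrary choice.)
import time
import numpy as np

n = 4000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))   # dense matrix, float64 (64-bit) by default
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)         # LU factorization plus triangular solves
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # conventional Linpack flop count
print(f"{flops / elapsed / 1e9:.1f} GFLOPS, "
      f"residual = {np.linalg.norm(A @ x - b):.2e}")
```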

Looking down the list, you soon run into some numbers that boggle the mind. The Tianhe-2 (Milky Way-2), a system deployed at the National Supercomputer Center in Guangzhou, China, is the number one system as of November 2015, a position it’s held since 2013. Running Linpack, it clocks in at 33.86 × 10¹⁵ floating point operations per second (33.86 PFLOPS).

Read more

Apr 26, 2016

Rave Computer to Offer NVIDIA DGX-1 Deep Learning System

Posted in categories: robotics/AI, supercomputing

Now, I have been hearing that folks are planning to experiment with blockchain technology on the new Nvidia DGX-1. I do know Nvidia’s CEO mentioned that the DGX-1 could be used in conjunction with blockchains as an interim step toward quantum computing to help secure information. We’ll see.


Sterling Heights, MI (PRWEB) April 24, 2016.

Rave Computer, an Elite Solution Provider in the NVIDIA Partner Network program, today announced that it has been selected to offer the new NVIDIA® DGX-1™ deep learning system, the world’s first deep learning supercomputer designed to meet the unlimited computing demands of artificial intelligence.

Continue reading “Rave Computer to Offer NVIDIA DGX-1 Deep Learning System” »

Apr 26, 2016

Can Proteins From Living Cells Solve Problems That Vex Supercomputers?

Posted in categories: innovation, supercomputing

When nature knows best.

Read more

Apr 24, 2016

Molecular mechanical computer design 100 billion times more energy efficient than best conventional computer

Posted in categories: energy, supercomputing

Ralph Merkle, Robert Freitas, and others have a theoretical design for a molecular mechanical computer that would be 100 billion times more energy efficient than the most energy-efficient conventional green supercomputer. Removing the need for gears, clutches, switches, and springs makes the design easier to build.

Existing designs for mechanical computing can be vastly improved upon in terms of the number of parts required to implement a complete computational system. Only two types of parts are required: links and rotary joints. Links are simply stiff, beam-like structures. Rotary joints are joints that allow rotational movement in a single plane.

Simple logic and conditional routing can be accomplished using only links and rotary joints, which are solidly connected at all times. No gears, clutches, switches, springs, or any other mechanisms are required. An actual system does not require linear slides.
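
The claim that such always-connected parts suffice for logic can be illustrated with a toy model. The Python sketch below is not Merkle and Freitas’s actual mechanism; it is an abstraction in which a “lock” lets one link transmit its displacement only while a crossing link is retracted. Composing locks yields NAND, which is universal for Boolean logic. The function names and 0/1 displacement encoding are illustrative assumptions:

```python
# Toy abstraction of link-and-rotary-joint logic (illustrative only, not the
# authors' actual design). A "lock" lets one link transmit its displacement
# (0 or 1) only while a crossing link is retracted (0); otherwise it is held at 0.

def lock(moving: int, blocking: int) -> int:
    return moving if blocking == 0 else 0

def not_gate(a: int) -> int:
    # A constantly driven link (1), blocked by input a.
    return lock(1, a)

def and_gate(a: int, b: int) -> int:
    # a passes through a lock that blocks it whenever b is retracted (b == 0).
    return lock(a, not_gate(b))

def nand_gate(a: int, b: int) -> int:
    return not_gate(and_gate(a, b))

# Verify the truth table; NAND is universal, so locks alone suffice for logic.
for a in (0, 1):
    for b in (0, 1):
        print(f"NAND({a}, {b}) = {nand_gate(a, b)}")
# Prints 1, 1, 1, 0 (the NAND truth table).
```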

Continue reading “Molecular mechanical computer design 100 billion times more energy efficient than best conventional computer” »

Apr 20, 2016

Physicists came up with a simple way you can outperform supercomputers at quantum physics

Posted in categories: quantum physics, supercomputing

VIDEO: 300 gamers helped solve the problem.

Read more

Apr 15, 2016

SLAC researchers recreate the extreme universe in the lab

Posted in categories: nuclear energy, physics, space, supercomputing

Conditions in the vast universe can be quite extreme: Violent collisions scar the surfaces of planets. Nuclear reactions in bright stars generate tremendous amounts of energy. Gigantic explosions catapult matter far out into space. But how exactly do processes like these unfold? What do they tell us about the universe? And could their power be harnessed for the benefit of humankind?

To find out, researchers from the Department of Energy’s SLAC National Accelerator Laboratory perform sophisticated experiments and computer simulations that recreate violent cosmic conditions on a small scale in the lab.

“The field of high energy density science is growing very rapidly, fueled by a number of technological breakthroughs,” says Siegfried Glenzer, head of SLAC’s High Energy Density Science Division. “We now have high-power lasers to create extreme states of matter, cutting-edge X-ray sources to analyze these states at the atomic level, and high-performance supercomputers to run complex simulations that guide and help explain our experiments. With its outstanding capabilities in these areas, SLAC is a particularly fertile ground for this type of research.”

Read more

Apr 13, 2016

Are Humans the New Supercomputer?

Posted in categories: information science, neuroscience, quantum physics, robotics/AI, supercomputing

Newswise — Philosopher René Descartes’ saying about what makes humans unique is beginning to sound hollow. ‘I think — therefore soon I am obsolete’ seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.

Today there is an ongoing battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human’s input. Recently, the world held its breath as Google’s AlphaGo algorithm beat a professional player at the game of Go—an achievement demonstrating the explosive speed of development in machine capabilities.

But we are not beaten yet — human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the prestigious science journal Nature.

Read more

Apr 12, 2016

Supercomputers Aid in Quantum Materials Research

Posted in categories: mathematics, quantum physics, supercomputing, transportation

Lovin’ Quantum ESPRESSO


Researchers use specialized software such as Quantum ESPRESSO and a variety of HPC software in conducting quantum materials research. Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials. Quantum ESPRESSO is coordinated by the Quantum ESPRESSO Foundation and has a growing worldwide user community in academic and industrial research. Its intensive use of dense mathematical routines makes it an ideal candidate for many-core architectures, such as the Intel Xeon Phi coprocessor.
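
For readers who have not seen Quantum ESPRESSO, the sketch below shows roughly what a minimal pw.x self-consistent-field input for bulk silicon looks like, written out from Python. The pseudopotential name, cutoff, and paths are illustrative assumptions modeled on QE’s own tutorial examples, not values from the article:

```python
# Minimal sketch of a Quantum ESPRESSO pw.x SCF input for bulk silicon.
# Pseudopotential, cutoffs, and paths are illustrative assumptions modeled
# on QE's tutorial examples, not values taken from the article.
scf_input = """\
&control
    calculation = 'scf'
    prefix      = 'si'
    pseudo_dir  = './pseudo'
    outdir      = './tmp'
/
&system
    ibrav     = 2        ! fcc lattice
    celldm(1) = 10.26    ! lattice constant in Bohr
    nat       = 2
    ntyp      = 1
    ecutwfc   = 30.0     ! plane-wave cutoff (Ry)
/
&electrons
    conv_thr = 1.0e-8
/
ATOMIC_SPECIES
 Si  28.086  Si.pz-vbc.UPF
ATOMIC_POSITIONS alat
 Si 0.00 0.00 0.00
 Si 0.25 0.25 0.25
K_POINTS automatic
 4 4 4 0 0 0
"""

with open("si.scf.in", "w") as f:
    f.write(scf_input)

# Run serially or MPI-parallel, e.g.:
#   pw.x -in si.scf.in > si.scf.out
#   mpirun -np 4 pw.x -in si.scf.in > si.scf.out
```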

The Intel Parallel Computing Centers at Cineca and Lawrence Berkeley National Lab (LBNL), along with the National Energy Research Scientific Computing Center (NERSC), are at the forefront in using HPC software and modifying Quantum ESPRESSO (QE) code to take advantage of the Intel Xeon processors and Intel Xeon Phi coprocessors used in quantum materials research. In addition to Quantum ESPRESSO, the teams use tools such as Intel compilers, libraries, Intel VTune, and OpenMP in their work. The goal is to incorporate the changes they make to Quantum ESPRESSO into the public version of the code, so that scientists can benefit from the improved optimization and parallelization without having to modify legacy code by hand.

Continue reading “Supercomputers Aid in Quantum Materials Research” »

Apr 12, 2016

Can optical technology solve the high performance computing energy conundrum?

Posted in categories: energy, quantum physics, supercomputing

Another pre-quantum-computing interim solution for supercomputing. So, we have this as well as Nvidia’s GPUs. I wonder who else?


In summer 2015, US president Barack Obama signed an order intended to provide the country with an exascale supercomputer by 2025. The machine would be 30 times more powerful than today’s leading system: China’s Tianhe-2. Based on extrapolations of existing electronic technology, such a machine would draw close to 0.5 GW – the entire output of a typical nuclear plant. This brings into question the sustainability of continuing down the same path for gains in computing.
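
A quick back-of-the-envelope check of that 0.5 GW figure, sketched in Python. The inputs are the November 2015 TOP500 figures for Tianhe-2 (roughly 33.86 PFLOPS at 17.8 MW), and the scaling is deliberately naive, with no efficiency gains: exactly the “same path” extrapolation the article questions.

```python
# Naive linear extrapolation from Tianhe-2 to exascale, assuming no efficiency
# gains (the "same path" the article questions). Inputs are the November 2015
# TOP500 figures for Tianhe-2: ~33.86 PFLOPS at ~17.8 MW.
tianhe2_pflops = 33.86
tianhe2_mw = 17.8
exa_pflops = 1000.0                  # 1 exaFLOPS = 1,000 PFLOPS

scale = exa_pflops / tianhe2_pflops  # roughly 30x
power_mw = tianhe2_mw * scale
print(f"{scale:.1f}x Tianhe-2 -> {power_mw:.0f} MW (~{power_mw / 1000:.2f} GW)")
# ~29.5x and ~526 MW: close to the 0.5 GW the article cites.
```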

One way to reduce the energy cost would be to move to optical interconnect. In his keynote at OFC in March 2016, Professor Yasuhiko Arakawa of the University of Tokyo said high performance computing (HPC) will need optical chip-to-chip communication to provide the data bandwidth for future supercomputers. But digital processing itself presents a problem as designers try to deal with issues such as dark silicon – the need to disable large portions of a multibillion-transistor processor at any one time to prevent it from overheating. Photonics may have an answer there as well.

Continue reading “Can optical technology solve the high performance computing energy conundrum?” »
