


Big Changes are Finally On the Horizon for Supercomputers

Looking back at this week's ISC 17 supercomputing conference, it looks like the supercomputing world will see some big upgrades in the next couple of years, but the update to the twice-yearly Top 500 list of the world's fastest supercomputers wasn't very different from the previous version.

The fastest computers in the world continue to be the two massive Chinese machines that have topped the list for a few years: the Sunway TaihuLight computer from China's National Supercomputing Center in Wuxi, with sustained Linpack performance of more than 93 petaflops (93 thousand trillion floating point operations per second); and the Tianhe-2 computer from China's National Super Computer Center in Guangzhou, with sustained performance of more than 33.8 petaflops. These remain the fastest machines by a huge margin.

The new number three is the Piz Daint system from the Swiss National Supercomputing Centre, a Cray system that uses Intel Xeons and Nvidia Tesla P100s, which was recently upgraded to give it a sustained Linpack performance of 19.6 petaflops, twice its previous total. That moved it up from number eight on the list.

This drops the top US system, the Titan system at the Oak Ridge National Laboratory, down to fourth place, making this the first time in twenty years that there is no US system in the top three. The rest of the list remains unchanged, with the US still accounting for five of the top ten overall, and Japan for two.

Even if the fastest computer list hasn't changed much, there are big changes elsewhere. On the Green 500 list of the most power-efficient systems, nine of the top ten changed. On top is the Tsubame 3.0 system, a modified HPE ICE XA system at the Tokyo Institute of Technology based on the 14-core Xeon E5-2680v4, the Omni-Path interconnect, and Nvidia's Tesla P100, which delivers 14.1 gigaflops per watt. This is a huge jump from Nvidia's DGX Saturn V, based on the firm's DGX-1 platform and P100 chips, which was number one on the November list but number ten this time, at 9.5 gigaflops per watt. The P100 is in nine of the top ten Green 500 systems.


Breaking 10 gigaflops per watt is a big deal because it means that a hypothetical exaflop system built using today's technology would consume under 100 megawatts (MW). That's still too much (the target is 20-30 MW for an exaflop system, which researchers hope to see in the next five years or so), but it's a huge step forward.
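
To see where that 100 MW figure comes from, here is a back-of-the-envelope sketch in Python. The efficiencies are the ones quoted above; the 50 gigaflops-per-watt line is simply what the 20 MW target would imply:

    # Power needed to sustain one exaflop (10^18 flops) at a given efficiency.
    EXAFLOP = 1e18  # floating point operations per second

    def power_mw(gigaflops_per_watt):
        """Megawatts needed to sustain 1 exaflop at this efficiency."""
        watts = EXAFLOP / (gigaflops_per_watt * 1e9)
        return watts / 1e6

    print(power_mw(10.0))   # 100.0 MW: just breaking 10 gigaflops/watt
    print(power_mw(14.1))   # ~71 MW: Tsubame 3.0's measured efficiency
    print(power_mw(50.0))   # 20.0 MW: efficiency implied by the 20 MW target

The point of the arithmetic is that power efficiency, not raw speed, is the binding constraint on the road to exascale.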

Like the Top 500 list, there were only minor changes on similar lists with different benchmarks, such as the High Performance Conjugate Gradients (HPCG) benchmark, where machines tend to see just 1-10 percent of their theoretical peak performance, and where the top system, in this case the Riken K machine, still delivers less than 1 petaflop. Both the TaihuLight and Piz Daint systems moved up on this list. When researchers talk about an exaflop machine, they tend to mean the Linpack benchmark, but HPCG may be more realistic in terms of real-world performance.
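
To make that gap concrete, here is a quick sketch. The peak and benchmark figures below are illustrative round numbers for a K-class machine, not official results; dense linear algebra (Linpack) keeps the floating point units busy, while HPCG's sparse solver is bound by memory and interconnect:

    # Benchmark efficiency = achieved performance / theoretical peak.
    # Figures are illustrative round numbers, not official results.
    peak_pf = 11.3       # theoretical peak, petaflops (assumed)
    linpack_pf = 10.5    # dense linear algebra stresses compute
    hpcg_pf = 0.6        # sparse solver stresses memory and interconnect

    for name, achieved in (("Linpack", linpack_pf), ("HPCG", hpcg_pf)):
        print(f"{name}: {achieved / peak_pf:.1%} of peak")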

The emergence of GPU computing as an accelerator, almost always using Nvidia GPU processors such as the P100, has been the most visible change on these lists in recent years, followed by the introduction of Intel's own accelerator, the many-core Xeon Phi (including the most recent Knights Landing version). The current Top 500 list includes 91 systems that use accelerators or coprocessors, including 74 with Nvidia GPUs and 17 with Xeon Phi (with another three using both); one with an AMD Radeon GPU as an accelerator, and two that use a many-core processor from PEZY Computing, a Japanese supplier. An additional 13 systems now use the Xeon Phi (Knights Landing) as the main processing unit.

But many of the bigger changes to supercomputers are still on the horizon, as we start to see larger systems designed with these concepts in mind. One example is the new MareNostrum 4 at the Barcelona Supercomputing Center, which entered the Top 500 list at number 13. As installed so far, this is a Lenovo system based on the upcoming Skylake-SP version of Xeon (officially the Xeon Platinum 8160 24-core processor). What's interesting here are the three new clusters of "emerging technology" planned for the next couple of years, including one cluster with IBM Power 9 processors and Nvidia GPUs, designed to have a peak processing capability of over 1.5 petaflops; a second based on the Knights Hill version of Xeon Phi; and a third based on 64-bit ARMv8 processors designed by Fujitsu.

These concepts are being used in a number of other major supercomputing projects, notably several sponsored by the US Department of Energy as part of its CORAL Collaboration at the Oak Ridge, Argonne, and Lawrence Livermore National Labs. First up should be Summit at Oak Ridge, which will use IBM Power 9 processors and Nvidia Volta GPUs and is slated to deliver 150 to 300 peak petaflops; followed by Sierra at Lawrence Livermore, slated to deliver over 100 peak petaflops.

We should then see the Aurora supercomputer at the Argonne National Laboratory, based on the Knights Hill version of Xeon Phi and built by Cray, which is slated to deliver 180 peak petaflops. The CORAL systems should be up and running next year.

Meanwhile, the Chinese and Japanese groups have planned upgrades as well, mostly using unique architectures. It should be interesting to watch.

An even bigger shift seems to be just a little farther off: the shift toward machine learning, typically on massively parallel processing units inside the processor itself. While the Linpack number refers to 64-bit or double-precision performance, there are classes of applications, including many deep neural network-based applications, that work better with single- or even half-precision calculations. New processors are taking advantage of this, such as Nvidia's recent Volta V100 announcement and the upcoming Knights Mill version of Xeon Phi. At the show, Intel said that version, which is due to be in production in the fourth quarter, would have new instruction sets for "low-precision computing" called Quad Fused Multiply Add (QFMA) and Quad Virtual Neural Network Instruction (QVNNI).
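
As a rough illustration of the precision trade-off, here is a minimal NumPy sketch, not tied to any particular chip's instructions. Half precision stores each value in a quarter of the space of double precision, so more operands fit per byte of memory bandwidth, but even a single multiply-add comes out visibly less accurate:

    import numpy as np

    # The same multiply-add (a*b + c) at three precisions. Deep-learning
    # hardware favors float16 because each value is a quarter the size of
    # float64, at the cost of accuracy many neural networks can tolerate.
    for dtype in (np.float64, np.float32, np.float16):
        a = np.array([1.001], dtype=dtype)
        b = np.array([0.999], dtype=dtype)
        c = np.array([1e-4], dtype=dtype)
        result = a * b + c
        print(f"{np.dtype(dtype).name}: {a.itemsize} bytes/value, "
              f"a*b + c = {result[0]:.10f}")

In float16 the small contribution of c is rounded away entirely, which is acceptable for many neural network workloads but not for the scientific codes Linpack represents.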

I presume that these concepts could be applied to other architectures as well, such as Google's TPUs or Intel's FPGAs and Nervana chips.

Even if we aren't seeing big changes this year, next year we should expect to see more. The concept of an exascale (1,000 petaflops) machine is still in sight, though it will likely involve a number of even larger changes.

Source: https://sea.pcmag.com/feature/16274/big-changes-are-finally-on-the-horizon-for-supercomputers
