Ditching RAM may lead to low-cost supercomputers

Many servers, supercomputers and other monster systems thrive on high-speed RAM to keep things running smoothly, but this memory is wildly expensive, and that limits not just the number of nodes in these clusters, but who can use them. MIT researchers may have a much more affordable approach, though. They've built a server network that drops RAM in favor of cheaper, slower flash storage, yet performs just about as well. The key was to get the flash drives themselves (or specifically, their controllers) to pre-process some of the data, instead of making the CPUs do all the hard work. That doesn't completely close the speed gap, but the differences are virtually negligible: in one test, 20 servers with 20TB of flash were about as fast as 40 servers with 10TB of RAM.

This doesn't mean that flash-centric computing will be useful absolutely everywhere. MIT has only demonstrated its technique on database-heavy tasks like ranking web pages. It wouldn't necessarily help much with tasks that depend more on calculation, and the networked design means this RAM-less approach wouldn't do much for your home PC. All the same, it could help a lot if it lets your favorite cloud service run faster, or lets cost-conscious scientists devote money toward other projects.

Source: MIT News
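The controller-side idea can be sketched in a few lines. This is a hypothetical toy model, not MIT's actual implementation: a "controller" function stands in for logic running on the flash drive, filtering records where they are stored so that only matching rows ever cross the slow link to the host CPU.

```python
# Toy model of near-data processing: the storage controller runs a
# simple filter locally, so the host receives only matching records
# instead of the whole dataset. Names and data are illustrative.

def controller_prefilter(records, predicate):
    """Hypothetically runs on the flash controller: scan locally,
    ship back only the records the host actually needs."""
    return [r for r in records if predicate(r)]

def host_query(storage, min_rank):
    # The host delegates the scan; far less data moves across the
    # bus than if the CPU pulled every record into RAM itself.
    return controller_prefilter(storage, lambda page: page["rank"] >= min_rank)

pages = [{"url": f"page{i}", "rank": i % 10} for i in range(100)]
top = host_query(pages, min_rank=8)
print(len(top))  # 20 of 100 records cross the link
```

In the real system the filtering happens in the drive's controller hardware, but the payoff is the same as in this sketch: the expensive resource (RAM on the host) only ever holds the reduced result set.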


Chinese supercomputer destroys speed record and will get much faster

Lights on the Tianhe-2 supercomputer change color depending on the power load. Credit: Jack Dongarra

A Chinese supercomputer known as Tianhe-2 has been measured at 30.65 petaflops, or 74 percent faster than the current holder of the world's-fastest-supercomputer title. The speed is remarkable partly because the Intel-based Tianhe-2 (also known as Milkyway-2) wasn't even running at full capacity during testing: a five-hour Linpack run using 14,336 of its 16,000 compute nodes, or 90 percent of the machine, clocked in at the aforementioned 30.65 petaflops. (A petaflop is one quadrillion floating point operations per second, a million billion.) Linpack benchmarks are used to rank the Top 500 supercomputers in the world. The Top 500 list's current champion is Titan, a US system that hit 17.59 petaflops. Tianhe-2 achieved 1.935 gigaflops per watt, slightly less efficient than Titan's 2.143 gigaflops per watt.

Tianhe-2's numbers were revealed this week in a paper by University of Tennessee professor Jack Dongarra, who created the Linpack benchmarks and helps compile the twice-yearly Top 500 list. Dongarra's paper doesn't say whether Tianhe-2's Linpack measurement was officially submitted for inclusion in the Top 500 list. Ars has asked him whether the measurement will put Tianhe-2 on top when the next list is released, but we haven't heard back yet. In any case, the new Top 500 rankings will be unveiled on June 17.
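The quoted figures are easy to sanity-check. A quick calculation, using only numbers that appear above, confirms the speedup, the node fraction, and the efficiency gap:

```python
# Sanity-check the article's figures (all inputs are quoted above).
tianhe2_pflops, titan_pflops = 30.65, 17.59
speedup = (tianhe2_pflops / titan_pflops - 1) * 100
print(f"{speedup:.0f}% faster")  # 74% faster, matching the article

nodes_used, nodes_total = 14_336, 16_000
print(f"{nodes_used / nodes_total:.0%} of the machine")  # 90%

tianhe2_eff, titan_eff = 1.935, 2.143  # gigaflops per watt
print(f"Tianhe-2 delivers {tianhe2_eff / titan_eff:.0%} of Titan's efficiency")  # 90%
```

So Tianhe-2 is roughly three-quarters faster on raw Linpack throughput while giving up about ten percent in power efficiency.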


Stanford seizes 1 million processing cores to study supersonic noise

In short order, the Sequoia supercomputer and its 1.57 million processing cores will transition to a life of top-secret analysis at the National Nuclear Security Administration, but until that day comes, researchers are working to ensure its seamless operation. Most recently, a team from Stanford took the helm of Sequoia to run computational fluid dynamics (CFD) simulations, a process that requires a finely tuned balance of computation, memory and communication, in order to better understand engine noise from supersonic jets. Encouragingly, the team was able to push the CFD simulation beyond 1 million cores, a first of its kind that bodes very well for the scalability of the system. This and other tests are being performed on Sequoia as part of its "shakeout" period, which lets its caretakers better understand the capabilities of the IBM BlueGene/Q machine. Should all go well, Sequoia is scheduled to begin a life of government work in March.

Via: TechCrunch, EurekAlert
Source: Stanford


Cray unleashes 100 petaflop XC30 supercomputer with up to a million Intel Xeon cores

Cray has just fired a nuclear salvo in the supercomputer wars with the launch of its XC30, a 100 petaflop-capable brute that can scale up to one million cores. Developed in conjunction with DARPA, the Cascade-codenamed system pairs a new interconnect architecture called Aries with Intel Xeon E5-2600 processors to easily leapfrog its recent Titan sibling, the previous speed champ. That puts Cray well ahead of its rivals, and the company will aim to keep that edge by supercharging future versions with Intel Xeon Phi coprocessors and NVIDIA Tesla GPUs. High-end research centers have placed $100 million worth of orders so far (though oddly, DARPA isn't one of them yet), and units are already shipping in limited numbers, likely by the eighteen-wheeler-full, from the looks of it.

Cray unleashes 100 petaflop XC30 supercomputer with up to a million Intel Xeon cores originally appeared on Engadget on Thu, 08 Nov 2012 10:58:00 EDT.

Source: The Register


$99 Raspberry Pi-sized “supercomputer” hits Kickstarter goal

A prototype of Parallella. The final version will be the size of a credit card. Credit: Adapteva

A month ago, we told you about a chipmaker called Adapteva that turned to Kickstarter in a bid to build a new platform that would be the size of a Raspberry Pi and an alternative to expensive parallel computing platforms. Adapteva needed at least $750,000 to build what it calls "Parallella," and it has hit the goal. Today is the Kickstarter deadline, and the project is up to more than $830,000 with a few hours to go. (UPDATE: The fundraiser hit $898,921 when time expired.) As a result, Adapteva will build 16-core boards capable of 26 gigaflops of performance, costing $99 each. The board's RISC cores run at 1GHz each. There is also a dual-core ARM A9-based system-on-chip, with the 16-core RISC chip acting as a coprocessor to speed up tasks.

Adapteva fell well short of its stretch goal of $3 million, which would have resulted in a 64-core board hitting 90 gigaflops, built on a more expensive 28-nanometer process rather than the 65-nanometer process used for the base model. The 64-core board would have cost $199.
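For scale, the quoted figures work out to a rough performance-per-dollar comparison between the funded board and the missed stretch goal. This is a back-of-the-envelope calculation from the numbers above, not an official Adapteva figure:

```python
# Gigaflops per dollar, computed from the figures quoted in the article.
boards = [
    ("16-core (funded)",     26, 99),   # 26 gigaflops at $99
    ("64-core (stretch goal)", 90, 199),  # 90 gigaflops at $199
]

for name, gflops, price in boards:
    print(f"{name}: {gflops / price:.2f} gigaflops per dollar")
# 16-core (funded): 0.26 gigaflops per dollar
# 64-core (stretch goal): 0.45 gigaflops per dollar
```

By that crude measure, the unfunded 64-core board would have delivered nearly twice the throughput per dollar of the board that will actually ship.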
