Former World’s Fastest Supercomputer Now Obsolete, Will Be Scrapped

Just four years ago it was the fastest supercomputer in the world; now it's going to be dismantled for parts:

Five years ago, an IBM-built supercomputer designed to model the decay of the US nuclear weapons arsenal was clocked at speeds no computer in the history of Earth had ever reached. At more than one quadrillion floating point operations per second (that’s a million billion, or a “petaflop”), the aptly-named Roadrunner was so far ahead of the competition that it earned the #1 slot on the Top 500 supercomputer list in June 2008, November 2008, and one last time in June 2009.

Today, that computer has been declared obsolete and it's being taken offline. Based at the US Department of Energy's Los Alamos National Laboratory in New Mexico, Roadrunner will be studied for a while and then ultimately dismantled. While the computer is still one of the 22 fastest in the world, it isn't energy-efficient enough to be worth its power bill.

“During its five operational years, Roadrunner, part of the National Nuclear Security Administration’s Advanced Simulation and Computing (ASC) program to provide key computer simulations for the Stockpile Stewardship Program, was a workhorse system providing computing power for stewardship of the US nuclear deterrent, and in its early shakedown phase, a wide variety of unclassified science,” Los Alamos lab said in an announcement Friday.

Costing more than $120 million, Roadrunner’s 296 server racks covering 6,000 square feet were connected with InfiniBand and contained 122,400 processor cores. The hybrid architecture used IBM PowerXCell 8i CPUs (an enhanced version of the Sony PlayStation 3 processor) and AMD Opteron dual-core processors. The AMD processors handled basic tasks, with the Cell CPUs “taking on the most computationally intense parts of a calculation—thus acting as a computational accelerator,” Los Alamos wrote.
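For the technically curious, that division of labor looked roughly like the sketch below: the host CPU sets up the data and hands the numerically heavy inner loop to an accelerator routine. This is a generic, hypothetical illustration in plain C; the function names are made up, and Roadrunner's real programming model used IBM's Cell SDK to ship work over to the PowerXCell chips rather than a local function call.

    /* Toy sketch of the host-plus-accelerator pattern Roadrunner used.
     * Hypothetical names; the real machine offloaded via IBM's Cell SDK. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    /* Stand-in for the compute-heavy kernel the Cell chips ran:
     * here, just a polynomial sweep over a large array. */
    static void accelerator_kernel(const double *x, double *y, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = 2.0 * x[i] * x[i] + 3.0 * x[i] + 1.0;
    }

    int main(void)
    {
        /* Host (Opteron) side: allocate, initialize, orchestrate. */
        double *x = malloc(N * sizeof *x);
        double *y = malloc(N * sizeof *y);
        if (!x || !y)
            return 1;

        for (size_t i = 0; i < N; i++)
            x[i] = (double)i / N;

        /* "Offload" the hot loop; on Roadrunner this step crossed from
         * an Opteron to a PowerXCell 8i instead of staying local. */
        accelerator_kernel(x, y, N);

        printf("y[N-1] = %f\n", y[N - 1]);
        free(x);
        free(y);
        return 0;
    }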

“Although other hybrid computers existed, none were at the supercomputing scale,” Los Alamos said. “Many doubted that a hybrid supercomputer could work, so for Los Alamos and IBM, Roadrunner was a leap of faith… As part of its Stockpile Stewardship work, Roadrunner took on a difficult, long-standing gap in understanding of energy flow in a weapon and its relation to weapon yield.”

Roadrunner lost its world's-fastest title in November 2009 to Jaguar, another Department of Energy supercomputer: a Cray system built around AMD Opteron processors. Jaguar hit 1.76 petaflops to take the title, and it still exists as part of an even newer cluster called Titan. Titan took the top spot in the November 2012 supercomputers list with a speed of 17.6 petaflops.

And, someday, Titan too will be obsolete. Isn’t progress great?

Via Facebook

FILED UNDER: Science & Technology
About Doug Mataconis
Doug Mataconis held a B.A. in Political Science from Rutgers University and J.D. from George Mason University School of Law. He joined the staff of OTB in May 2010 and contributed a staggering 16,483 posts before his retirement in January 2020. He passed far too young in July 2021.

Comments

  1. Brett says:

    Even more impressive is that Titan is literally ten times faster than Jaguar, a mere three years later.

  2. mantis says:

    But don’t worry. That laptop or cellphone you buy will still be top of the line in a couple of years.

  3. john personna says:

    I suspect that the "fastest supercomputer" competition and its funding run on a certain momentum. When funding Cray meant you could have silent submarine propellers, and the Soviets, under a computer export embargo, could not … it really mattered. Not only were supercomputer designs very specialized, and unlikely to be developed without national funding, but the answers really mattered.

    A critical change came in 1994 when Thomas Sterling and Donald Becker built the Beowulf cluster at NASA. The sea change was that commodity processors, because of their market feedback and funding, had become “fast because they were popular.” A custom system in the abstract might be faster than a pile-of-pc’s, but really from 1995 forward it could never be cheaper.

    Thus the game became, more or less, to build bigger piles of PCs. The fact that PC architecture had come to dominate the "server rack" made things that much easier.

    The whole idea that market feedback brings cheap speed repeated itself with “video cards,” to the point where “video processors” became record-breaking vector math machines. They were added to the piles of pc’s.

    And so … I don’t think the supercomputers themselves drive all that much (if any) processor speed innovation at this point. They are a game of organizing things that come out of the consumer(!) market.

    As new processors appear, and especially with lower power processors, it doesn't take that long for a record supercomputer to be not worth the electricity to run it. That, and … I really don't think there are all that many super important and time-pressured questions out there …

    … so cut the funding.

  4. Brett says:

    I am not sure what the problem might be here – it's not like they are going to throw the parts away. This is (very approximately) like hooking a bunch of PlayStations together. If it costs too much to run, that's a reasonable reason to disassemble it and assemble something else from large quantities of other relatively cheap hardware. They will use the parts for something else.

    The days when a “supercomputer” was a masterpiece of custom-built hyper-expensive monolithic machine design are long gone (at least 20 years).

  5. Timothy Watson says:

    @john personna: The interesting thing is that if you look at BOINC, its cumulative processing power is 9.465 petaFLOPs (over nine times the speed of Roadrunner), in no small part because of PC users' GPUs.

  6. john personna says:

    @Timothy Watson:

    Weird that a hypothetical and isolated Soviet Union could never compete with us in supercomputer innovation … because they wouldn't have enough rich gamers.

  7. grumpy realist says:

    Ah yes…sell the parts to my ex-advisor and watch him be as happy as a clam. He kept getting thrown off super computers because he kept asking for more, more, MORE run time.

    All of this supercomputer stuff is going to go totally out the window when they finally get the quantum computers up and running. Heh heh. Gonna need a lot of liquid nitrogen, though.

    (I did much of my thesis work on a CM-5, which was Zee Hottest Thing around when I worked on it. At least the company going belly up into bankruptcy gave me a good reason as to why I didn’t follow up on certain lines of research….)

  8. john personna says:

    @grumpy realist:

    Connection Machine took its own path from customized visions. The CM-1 had a unique architecture, peddled as the future: "The CM-1, depending on the configuration, had as many as 65,536 processors. The individual processors were extremely simple, processing one bit at a time."

    Get that? 65,536 processors, of 1 bit each.

    The CM-5 was actually their capitulation to the “pile-of-pc’s” architecture:

    “With the CM-5, announced in 1991, Thinking Machines switched from the CM-2’s hypercubic architecture of simple processors to an entirely new MIMD architecture based on a fat tree network of SPARC RISC processors. The later CM-5E replaced the SPARC processors with faster SuperSPARCs. As of November 2012, the fastest system in the TOP500 list, the Titan with Rpeak of 27.1125 PFlop/s, is over 206,965 times faster than the fastest system in November 1993, the Connection Machine CM-5/1024 (1024 cores) with Rpeak of 131.0 GFlop/s.”

    They had moved from “1 bit” processors to 32 bit processors as their granularity size.

    So much for the original CM thesis.

  9. Tsar Nicholas says:

    Moore’s law, accelerated.

  10. Val West says:

    @grumpy realist: your advisor won’t be able to afford the power to run the computer, even when used in parts.