According to the BBC last June 18, IBM’s supercomputer ‘Sequoia’ is now the world’s fastest computer at 16.32 petaflops, meaning it can perform 16 million billion floating-point operations per second! This is the first time the Americans have taken back the supercomputing crown since losing it to China’s Tianhe computer two years ago. The other remarkable thing about Sequoia is that, though it is 1.5 times faster than the previous champ, Fujitsu’s K computer, it is three times more energy efficient than the K.
Computers like these are massively parallel computers, usually built from thousands of off-the-shelf parts configured to run together at blazing speeds. An example of this is the IBM Roadrunner computer (the king of the hill three years ago), which used about 16,000 IBM Cell processors, the same processor family used in the PlayStation 3!
The fastest supercomputer is usually determined twice a year by Top500. Their goal is to track and detect trends in supercomputing (so it isn’t just crowning a winner; Top500 also studies the technology used to make the winners so fast). Since the list’s establishment in 1993, the #1 computer’s performance has roughly doubled every 14 months, a pace even faster than Moore’s law (which observes that the number of transistors on a chip doubles roughly every two years).
Extrapolating that trend, supercomputers are expected to reach 1 exaflop (a billion billion floating-point operations per second) around 2019. Some companies hope to achieve it even earlier. IBM is developing the Cyclops64 architecture, which is already a supercomputer on one chip (imagine linking thousands of these together). Erik DeBenedictis of Sandia National Laboratories theorizes that by 2030, computers could reach a zettaflop (one sextillion flops). That would be enough to do full weather modeling over a two-week span (imagine declaring a typhoon alert two weeks ahead, and being right!)
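Out of curiosity, you can check that extrapolation with a bit of arithmetic. This is a minimal sketch, assuming the 14-month doubling period from the Top500 trend and taking Sequoia’s 16.32 petaflops (mid-2012) as the starting point:

```python
import math

# Assumed starting point and trend (from the figures in this post):
current_flops = 16.32e15   # Sequoia, June 2012: 16.32 petaflops
target_flops = 1e18        # 1 exaflop
doubling_months = 14       # Top500 #1 performance doubles ~every 14 months

# How many doublings are needed, and how long that takes
doublings = math.log2(target_flops / current_flops)
years = doublings * doubling_months / 12

print(f"{doublings:.1f} doublings -> ~{2012 + years:.0f}")
# prints: 5.9 doublings -> ~2019
```

About six doublings from 2012 lands right around 2019, which matches the prediction above.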
I’ve seen comments like “all right!” and “it’s about time!” as if the U.S.’s regaining the supercomputing crown is a good thing. However, I find it more interesting to learn HOW these computers are being used. The current champ, for instance, is used to simulate nuclear tests so that real atom bombs don’t have to be detonated. The Roadrunner computer is used to simulate whether the U.S.’s aging nuclear weapons arsenal is still safe and reliable to use. The U.S. is not shy about advertising how it uses its supercomputers. It still considers nuclear deterrence important (the core principle of this strategy: have so many nuclear bombs that your enemies are afraid to attack you), but the idea is basically anti-survival and actually quite immoral. There should be NO nuclear weapons in existence. I approve of nuclear energy, but only for medical and scientific use and for generating electricity.
Contrast this with other countries’ use of supercomputers: China’s Tianhe is used to search for crude oil and to design aircraft (though given China’s tendency toward secrecy, if they use it for anything military they simply don’t advertise it, unlike the U.S., which makes Discovery Channel shows about all its weapons and tactics).
In Japan, two former supercomputer champions, the Earth Simulator and the just-dethroned K computer, are used to simulate weather and climate change, in the hope of giving earlier warning of impending typhoons and finding ways to mitigate global warming. Why is the U.S. still using the maintenance of nuclear weapons (or, in the case of simulated nuclear testing, probably the improvement of nuclear weapons) as the impetus for supercomputer progress? I think their priorities are a little mixed up.