Do we need an exascale computer?

In the last 18 months Tianhe-2 has kept its position as the fastest supercomputer, crunching data at 33.86 PFLOPS. This is somewhat unusual, since we had grown used to seeing a brand new supercomputer take the top of the list, announced at the twice-yearly supercomputing conferences, almost every year. You can take a look at the clip to get some insight into this.
Of course, 33.86 PFLOPS is no slouch. And the cost of getting this kind of performance is not trivial. Tianhe-2 is made up of 16,000 processing nodes, each one comprising 2 Ivy Bridge Xeon processors and 3 Xeon Phi coprocessors, for a total of 3,120,000 cores and 1.34 PB of memory. This behemoth draws an aggregate 24 MW of power, 7 of them for cooling. You can imagine the cost.
Well, using this kind of technology and scaling it up to an exascale computer would require around 500 MW of power, basically half the output of a nuclear power plant, and the cost would skyrocket.
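To see where that figure comes from, here is a minimal back-of-the-envelope sketch (in Python) that simply extrapolates Tianhe-2's power draw linearly to 1,000 PFLOPS. The linear-scaling assumption is mine, for illustration only; a real exascale design would of course try to do much better per watt.

```python
# Back-of-the-envelope: scale Tianhe-2's power draw linearly to 1 exaFLOPS.
# Figures from the post: 33.86 PFLOPS, 24 MW aggregate power, ~7 MW of that for cooling.

TIANHE2_PFLOPS = 33.86          # sustained performance of Tianhe-2
TIANHE2_POWER_MW = 24.0         # aggregate power, cooling included
TIANHE2_COOLING_MW = 7.0        # share of that power going to cooling
EXASCALE_PFLOPS = 1000.0        # 1 exaFLOPS = 1,000 PFLOPS

def exascale_power_mw(include_cooling: bool = False) -> float:
    """Naive linear extrapolation of power with performance (my assumption)."""
    base = TIANHE2_POWER_MW if include_cooling else TIANHE2_POWER_MW - TIANHE2_COOLING_MW
    return base * EXASCALE_PFLOPS / TIANHE2_PFLOPS

print(f"compute only : {exascale_power_mw(False):.0f} MW")   # ~500 MW
print(f"with cooling : {exascale_power_mw(True):.0f} MW")    # ~700 MW
```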
More than that: in a supercomputer the most important parameter is how much of its crunching power you can actually use, rather than the peak throughput measured by PFLOPS benchmarks. And this fraction depends on the kind of problem you are tackling and on the architecture of the computer.
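As a concrete illustration of "usable" versus measured performance, the short sketch below compares a sustained benchmark result with the theoretical peak. The 33.86 PFLOPS sustained figure is from the post; Tianhe-2's roughly 54.9 PFLOPS theoretical peak is an added detail taken from its TOP500 listing, and the "sparse application" number is entirely made up for illustration.

```python
# Sustained vs. peak performance: the fraction of raw capacity you actually use.

def efficiency(sustained_pflops: float, peak_pflops: float) -> float:
    """Fraction of theoretical peak actually delivered on a given workload."""
    return sustained_pflops / peak_pflops

linpack = efficiency(33.86, 54.9)      # dense linear algebra suits the hardware well
sparse_app = efficiency(2.0, 54.9)     # hypothetical irregular application, number made up

print(f"LINPACK efficiency     : {linpack:.0%}")     # ~62%
print(f"sparse app (invented)  : {sparse_app:.0%}")  # a few percent
```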
This makes supercomputers quite different from your normal PC. It is not a matter of getting a more powerful one, loading your programs onto it and getting better performance. What you need to do is "re-write" most of your software to take advantage of the new architecture. Over the roughly forty years of evolution in supercomputers since the Cray 1 in the seventies, we have seen a shift from specialised processors to mass-market processors (like the ones used in the PS2…). Now we may be on the brink of a further shift that will require specialised processors and specialised architectures to address very specific problems. The "generalist" supercomputer may be fading out.
This shift is driven both by the unreasonable amount of power needed as the number of cores grows and by the specific architectures needed to better leverage the raw crunching capacity in a given sector.
Now, this raises the question of this post: do we really need an exascale computer (that is, one that can crunch 1,000 PFLOPS)?
There are some tough problems that require as much processing power as you can get.
Predicting the weather is now a matter of processing power. We have plenty of sensors around the world (and yes, you can include the information coming from the thousands of airplanes, some 25 to 27 thousand of them taking off and landing every day, all around the world), and the more processing power we have, the more accurate and the longer-term the forecasts we can make.
There are a few more, like simulating a nuclear explosion, assessing climate change over a century and so on. However, although important, all of these seem to be pretty specialised areas not directly affecting our lives.
And yet, in the article discussing the path towards an exascale computer, I found one paragraph that made me think. It is about the simulation of blood flow through the arteries and veins in the heart.
According to the article, simulating the flow of blood in the coronary system would take a Tianhe-2 supercomputer more than 5 hours. An exascale computer with the right architecture may cut this simulation time down to a few minutes, and that would make it possible to run a simulation during a surgical operation on the heart to fix circulatory problems. I guess the idea is to place stents or other prosthetics, monitor the blood flow through sensors (pressure, speed…) and simulate the impact over the years. Based on the simulation, the surgeon might change the placement of the stents till she gets the best hydrodynamics.
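To put rough numbers on that, here is a minimal sketch of the same linear-scaling arithmetic applied to the 5-hour figure quoted above. The assumption that runtime shrinks in proportion to machine performance is mine and is optimistic, since real applications rarely scale perfectly.

```python
# If the blood-flow simulation takes ~5 hours on Tianhe-2 (33.86 PFLOPS),
# how long would it take on a 1,000 PFLOPS machine, assuming (optimistically)
# that runtime shrinks in proportion to performance?

TIANHE2_PFLOPS = 33.86
EXASCALE_PFLOPS = 1000.0
RUNTIME_TIANHE2_HOURS = 5.0     # figure quoted in the article

speedup = EXASCALE_PFLOPS / TIANHE2_PFLOPS               # ~30x
runtime_minutes = RUNTIME_TIANHE2_HOURS * 60 / speedup   # ~10 minutes

print(f"speedup : {speedup:.1f}x")
print(f"runtime : {runtime_minutes:.0f} minutes")
```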
What made me think is the possibility of using a supercomputer to support "normal" activities. As the number of sensors increases (and it will, by a thousandfold in this decade), we will have much more data that, appropriately crunched, may deliver information we cannot dream of today. And this may bring supercomputers closer to us.
