Looking ahead to 2050 – The Evolution Clock

I have already observed that the evolution clock has not been “ticking” at a constant pace through the ages. For thousands of years it was almost standing still, making a few ticks now and then and then stopping again.

We saw it start to move with the industrial revolution, and it hasn't stopped since. In the last fifty years it has increased its pace. If you are looking for a reason, look at the amazing progress of electronics, well captured by Moore's observation. Over the years that observation proved so accurate that it came to be called Moore's “Law”.

Moore's law is usually explained by saying that every 18 months the number of transistors per chip doubles. That of course leads to ever more powerful chips. Because of the way chips are made, it also leads to lower energy consumption per transistor and faster processing.

However, Moore's observation was much deeper than that. He observed that the cost per transistor would keep decreasing as manufacturing processes allowed more transistors to be packed onto a chip, with each manufacturing technology having a sweet spot where a specific transistor density leads to the minimum manufacturing cost. Try to cram in more transistors and the cost per transistor increases; decrease the density and the cost increases as well!

This is shown in the curves included in his 1965 paper. The goal of an engineer was to move this sweet spot in such a way that every 18 months one could cram in twice as many transistors while remaining at the sweet spot. That also meant that the cost per transistor would halve every 18 months.
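A toy model can give a feel for the shape of those cost curves. This is only a sketch: the wafer cost, the yield function and all the numbers below are made up for illustration, only the U-shape of the resulting curve reflects Moore's argument.

```python
# Toy model of Moore's 1965 cost argument: a fixed wafer cost is spread over
# more transistors as density grows, but yield drops as the process is pushed,
# so the cost per (good) transistor has a minimum at some intermediate density.
# All numbers are illustrative, not real process data.

def cost_per_transistor(density, wafer_cost=1000.0):
    yield_fraction = 0.9 ** (density / 10.0)   # assumed: yield falls as density is pushed
    good_transistors = density * yield_fraction
    return wafer_cost / good_transistors

for density in (10, 50, 100, 200, 400):
    print(density, round(cost_per_transistor(density), 2))
# Cost first falls with density, reaches a minimum (the "sweet spot"),
# then rises again as yield losses dominate.
```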

What does this evolution clock mean? Well, from 1965 to 2015 we increased the number of transistors on a chip from 2,000 to 15 billion. In the following 18 months we will reach 30 billion; that is, we will add as many transistors to a chip as we added over the entire previous 50 years. That's what doubling means.

In other words, we are going to do in 18 months what we previously did in 50 years. This is a compression of time, and it is what leads to the perception of an accelerated evolution: the clock is ticking faster and faster.
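The arithmetic is easy to check. Here is a minimal sketch using only the figures quoted above (2,000 transistors in 1965, 15 billion in 2015):

```python
# Back-of-the-envelope check of the "doubling" argument, using the figures in the text.
transistors_1965 = 2_000
transistors_2015 = 15_000_000_000

# One more doubling: the next tick of the clock, 18 months later.
transistors_next = 2 * transistors_2015

added_by_next_doubling = transistors_next - transistors_2015        # 15 billion
added_over_previous_50_years = transistors_2015 - transistors_1965  # ~15 billion

print(added_by_next_doubling)        # 15000000000
print(added_over_previous_50_years)  # 14999998000

# Doubling from N to 2N always adds N, i.e. as much as everything accumulated
# before: that is the "compression of time" described above.
```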

Actually, there are two factors we need to take into account that explain why in the coming 18 months we won't be seeing the same evolution we have seen in the past 50 years.

First, the abundance of potential performance decreases the need for efficiency. When I was hired back in 1971 as a fresh programmer (Assembler was considered an advanced programming language…) they tested my ability to use the least number of machine instructions to read a character from a typewriter. I think I made it in 11 machine instructions, and apparently that was considered acceptable, because they hired me.

Nowadays, by and large, being able to save a few bytes is no longer the point.

The first electronic switch I worked on ran on a computer with a maximum memory of 64 KB. Any photo I take today with my digital camera is 300 times bigger than that. This growth in what we store eats into the evolution we perceive: raw storage capacity increased a thousandfold over the last ten years, but because the files we store have grown as well, the number of photos we can keep has increased less than a hundredfold over the same period, and it is this second figure that shapes our perception of growth.
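In other words, part of the raw growth is absorbed by the growing size of what we store. A back-of-the-envelope sketch; the per-photo growth factor below is an assumption, chosen only to be consistent with the rough figures in the text:

```python
# Illustrative arithmetic only: the per-photo growth factor is an assumption.
raw_storage_growth = 1_000   # raw capacity grew ~1000x over ten years (figure from the text)
photo_size_growth = 15       # assumed growth in the size of an average photo

# What we actually perceive is how many more photos we can keep:
photos_storable_growth = raw_storage_growth / photo_size_growth
print(photos_storable_growth)  # ~67x, i.e. "less than a hundred times"
```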

The second factor is related to our perception of “better performance”. Our senses are tuned to work on a logarithmic scale. This is what allows our eyes to cope with anything from almost total darkness to a very bright day, and our ears to pick up the ticking of an alarm clock (an old one…) as well as withstand the noise of a jammed city street.

Hence, for something to be perceived as twice as good, it actually has to be ten times better!

Looking ahead 30 years in terms of perceived progress in applied technology therefore means looking at something that will be perceived as 3 to 4 times better than what we have today, even though achieving it will require technology performance thousands of times better than today's.
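One way to make this concrete, under the simple (Weber-Fechner-like) assumption that each tenfold raw improvement is perceived as one extra step of “better”:

```python
import math

def perceived_steps(raw_improvement_factor):
    """Perceived improvement, assuming one 'step' per tenfold raw improvement."""
    return math.log10(raw_improvement_factor)

print(perceived_steps(10))      # 1.0 -> ten times better feels like one step up
print(perceived_steps(1_000))   # 3.0 -> a thousandfold raw improvement...
print(perceived_steps(10_000))  # 4.0 -> ...is perceived as only 3 to 4 steps
```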

The age of silicon is slowing its fast pace of evolution, clocked for 52 years by Moore's law: a doubling of transistors per chip every 18 months, with a parallel decrease in the manufacturing cost per transistor.

Several of the parameters used to track the evolution of computing are leveling out:

  • Frequency
  • Power
  • Single-thread performance

What still keeps growing is the number of transistors per chip, but shrinking dimensions no longer translate into better performance: moving from 14 nm down to 10 nm has not led to a significant performance improvement (because it did not translate into a higher clock frequency). The number of cores per chip is growing faster than ever, but that just shows the quest for alternative ways to keep improving chip performance, now that improvement can no longer be obtained through Moore's law alone. Besides, only some specific types of computation can really take advantage of an increased number of cores.
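That last limit is often illustrated with Amdahl's law: the serial part of a workload caps the overall speedup no matter how many cores are thrown at it. A minimal sketch, with purely illustrative parallel fractions:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup of a workload whose parallel_fraction can be spread across cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A workload that is 95% parallelisable keeps benefiting from extra cores...
print(amdahl_speedup(0.95, 8))    # ~5.9x
print(amdahl_speedup(0.95, 64))   # ~15.4x
# ...while one that is only 50% parallelisable hits a hard ceiling of 2x.
print(amdahl_speedup(0.50, 64))   # ~2.0x
```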

Most importantly, Moore's law has already failed from the economic perspective, which was the one that actually fueled it by expanding the market (and hence the volumes). Since 2014 the cost per transistor no longer decreases as density increases; rather, it is growing.

This will lead over the next decades to a dramatic change in our perception of progress and in the structure of the market.

In the 1970s there was a strong aftermarket business. You bought a proto-PC and extended its capabilities through aftermarket components. Starting in the late 1980s the evolution became so quick that it made more sense to replace your old (yet young) PC with a new one rather than refurbishing it.

This process will be reversed in the next decade, when the lifetime of a computation device starts to become longer and it will again make sense to refurbish it rather than replace it.

In two decades' time our kids may buy computation-enabling assets with the idea of handing them over to their children as part of an inheritance. Think about computation embedded in a home… You won't replace the home to benefit from better computation capability; rather, you'll look into refurbishing your home's computation and data-rendering capability…

New wallpaper may provide better rendering capabilities, new roof tiles may improve your home's energy efficiency, new plaster on the walls may provide better environmental sensing…

 

All of this will drive us to a new space where the evolution clock will tick differently and, most importantly, the individual ticks will no longer be driven by Moore's law.

Moore’s Law is dead, long live Moore’s Law.  

Moore's law went beyond electronics. As electronics permeated broader and broader areas, those areas benefitted from its evolution, and in some cases the evolution took on a spin of its own.

A clear example is gene sequencing. It wouldn't have been possible to sequence the human genome without the amazing progress in processing (and storage) capacity. Yet this progress has led to the definition of new approaches to gene sequencing (applying molecular computing and massively parallel sequencing), such that an increase in processing capacity is no longer needed to sustain further progress in sequencing. Indeed, sequencing keeps increasing in speed and decreasing in cost, and it is expected to continue to do so in the coming decades without requiring higher processing capacity. Its evolution has taken on a pace of its own.

This is known as “More than Moore” and has already extended to cover genomics, photonics, sensors… Besides, Moore's law is now being overtaken in pace in the area of data, where we are seeing a tremendous increase (with no end in sight) in the data generated. Data have actually grown so fast that the gap between the data generated and the data actually used is widening. Today we are making use of just a fraction of the data that are generated…

In addition, Moore's law is seeing its effects multiplied by the increased networking of components, like cellphones that are shifting from being pure terminals to becoming network nodes. This is actually extending Moore's law into “More Moore”! Whilst in “More than Moore” we see a broadening of functionalities, in “More Moore” we see an increase in performance.

 

Both these phenomena are at work today and will continue in the coming decades.

Notice the importance of the “networking” aspect. We are starting to see a Moore effect on ideas and on research. The paradigm of Open Innovation leverages this effect. In a way, we are moving towards a “super-engineer” resulting from the networking of engineers. Whether this super-entity will be harnessed by a person leveraging the network or by a machine remains to be seen. More on this later.

It should also be noted that the usual increase in performance associated with shrinking silicon feature sizes no longer holds. We have reached 10 nm, and this has not resulted in an improvement in performance over 14 nm. In any case, the 5 nm limit, beyond which quantum effects become prevalent and make further shrinking impractical, is now near (it would be the next step).

Notice that in principle there is no limit to the amount of data that can be generated, harvested and processed. Hence a Moore-like law on data does not face any physical limitation.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.