TTM 2014: Future of Processing

The evolution of processing has been driven by the continuous scaling of transistors. Scaling is still continuing, but for the first time it no longer goes hand in hand with a decrease in price; in fact, the price is starting to increase exponentially as we try to scale down any further. We have reached 14 nm, and 7 nm is expected by the end of this decade. Ultimate performance is limited by power dissipation. Eventually, we will have to move beyond FETs (Field Effect Transistors) to other kinds of transistors and alternative computation models.
Silicon has reached the end of its evolution. Graphene seems the logical substitute, since it is about 10 times faster than silicon and offers very low resistance to electrons, hence very little heat generation (and good heat dissipation). Heat, together with economic sustainability, is the major obstacle we see today to further evolution.
So we can predict an evolution from FETs (based on silicon) to CNTs (Carbon NanoTubes), providing higher speed at low power, with supply voltages of the order of 300-400 mV. The challenge is to achieve sufficient purification of the nanotubes, incidentally a problem that also had to be solved in the last century before silicon could be used! Purity should exceed 99.99%, eliminating all metallic tubes, which would otherwise short the circuit. Another crucial issue is placement, putting the tubes exactly where you want them, and a third is scalability, achieving high density and gate lengths of the order of 9 nm. New manufacturing approaches, and tools, are required. Etching does not work, and bottom-up growth is not industrially viable today. DNA-based construction works for prototyping but does not scale to industrial volumes. CNT-based chip manufacturing is today’s greatest challenge.
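As a rough illustration of why the purity bar is set so high, here is a minimal sketch of the arithmetic; the number of tubes per chip is an assumed, illustrative figure, not one quoted at the session.

```python
# Illustrative estimate: expected number of metallic (circuit-shorting) tubes
# that survive purification, for an assumed tube count per chip.
def expected_metallic_tubes(total_tubes: int, purity: float) -> float:
    """purity = fraction of tubes that are semiconducting after purification."""
    return total_tubes * (1.0 - purity)

tubes_per_chip = 1_000_000  # assumed, illustrative figure
for purity in (0.9999, 0.999999):
    leftover = expected_metallic_tubes(tubes_per_chip, purity)
    print(f"purity {purity:.6f} -> ~{leftover:.0f} metallic tubes left per chip")
```

Even at 99.99% purity, a chip with a million tubes would still be expected to contain around a hundred metallic ones, which gives a feel for why purification is listed as the first challenge.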
However, information processing is more than the chip. It involves architectures and software; progress in the next decades will require looking at the whole.
A possible evolution is in the direction of adiabatic computation. An adiabatic process is one in which there is no heat exchange, so the second law of thermodynamics does not get in the way and you are not dissipating heat. Adiabatic computation exists only in theory; in practice you aim for quasi-adiabatic computation. One way would be to use superconducting materials, but today the whole system (which would need to be cooled to temperatures of the order of 100 K or lower) becomes very expensive. Moving to quantum computing and looking for adiabatic processing is an interesting area; however, a quantum computer is not a substitute for today’s computers. It is a machine that would perform marvels in specific types of applications but not in others where our computers are good.
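For a sense of the energy scale that makes quasi-adiabatic approaches attractive, here is a minimal sketch comparing Landauer’s bound (the thermodynamic minimum for irreversibly erasing one bit, a figure not quoted at the session) with an assumed, illustrative conventional switching energy:

```python
import math

# Landauer's bound: minimum energy to erase one bit of information,
# E = k_B * T * ln(2). Conventional (irreversible) logic cannot go below it,
# which is the thermodynamic motivation for adiabatic / reversible computing.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

landauer_limit = k_B * T * math.log(2)  # ~2.9e-21 J per bit
typical_switch = 1e-15                  # assumed ~1 fJ per switching event (illustrative)

print(f"Landauer limit at {T:.0f} K: {landauer_limit:.2e} J per bit")
print(f"Illustrative switching energy: {typical_switch:.1e} J")
print(f"Headroom factor: {typical_switch / landauer_limit:.0e}")
```

The several orders of magnitude of headroom between conventional switching and the thermodynamic floor is the gap that quasi-adiabatic and reversible techniques try to close.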
The problem in quantum processing is deeply tied to quantum mechanics itself. One needs to control errors, but detecting them requires interacting with the computation, and the very act of interacting disturbs the quantum state and stops the processing.
According to the speakers, it is unlikely that we will see an operational quantum computer within the next 25 years. What exists today are prototypes with very limited processing capability, or quantum machines that can perform only very specific kinds of computation (D-Wave).
The comparison of the Watson computer with the brain was interesting: a difference in efficiency of about 1,000, even taking into account Watson’s evolution in the coming years. However, it is really tricky to compare today’s machines with our brain. Their structure is completely different, as is their computation paradigm. Our brain is pretty slow: its computation speed is measured in milliseconds, while machine computation is today measured in nanoseconds, a million times faster. Besides, our brain is a mixture of “digital” computation (neuron spike signals) and “analogue” computation (neurotransmitters and neuromodulators). Just making faster machines is not enough to match our brain’s processing capabilities (capabilities, not speed). To evolve towards brain-like processing capability we need to look more at cognitive science than at electronics. And, as with quantum computers, we will end up with a machine that is very good at certain types of applications but will not substitute for present computers. A machine working like my brain would make the same mistakes I make, and it would be as slow as I am at calculation…
Progress in this direction at the hardware level needs to look at low-power structures mixing analogue and digital, and to seek inspiration in biological systems. New chips and programming environments have been created since 2012 (SyNAPSE by IBM), but they are more a simulation than a real micro-brain component. Neuromorphic computation is still a goal, not a well-defined roadmap.
Overall, we can expect business as usual for the next 10-15 years, while starting in the next decade we may see the emergence of new computational models: first adiabatic computing, then neuromorphic computing, and then quantum computation.
Moore’s law has driven the evolution of the last 50 years, and it has also driven market expectations and market pricing! There is no way to see this law continuing until 2035. So the question is what will happen once it is over, and mark you: today, and since 2013, it has been on “life support”.
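To see why a straight continuation to 2035 is implausible, here is a back-of-the-envelope sketch, assuming the classic doubling of transistor density every two years and the 14 nm node mentioned above; the two-year cadence and the comparison with silicon’s lattice constant are assumptions added here, not figures from the session.

```python
import math

# Back-of-the-envelope: if transistor density kept doubling every two years
# (classic Moore's law), the linear feature size would shrink by sqrt(2)
# per doubling. Starting from 14 nm in 2014, where would 2035 land?
start_year, start_nm = 2014, 14.0  # 14 nm node, from the text
doubling_period_years = 2.0        # assumed classic Moore cadence
target_year = 2035
si_lattice_nm = 0.543              # silicon lattice constant, for scale

doublings = (target_year - start_year) / doubling_period_years
feature_nm = start_nm / (math.sqrt(2) ** doublings)

print(f"Density doublings 2014-2035: {doublings:.1f}")
print(f"Implied feature size in 2035: {feature_nm:.2f} nm")
print(f"Silicon lattice constant:     {si_lattice_nm} nm")
```

The implied 2035 feature size comes out below a single silicon lattice spacing, one simple way of seeing why the law cannot simply keep going.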
The obvious answer to the question is stagnation and commoditization (this happened to the car industry, although one might not realize it because that industry was rescued by the electronics and computers that made better cars possible).
Neuromorphic, quantum and graphene computers may come too late to save the day. There is likely to be a gap of ten years between the point (around 2020) where no further downscaling of CMOS/FETs is possible and the advent of new computation paradigms.
Possibly, efficiency will continue to improve by orders of magnitude through specialized processors, but it won’t be enough to stave off commoditization; then again, processors have been fast enough for most situations in recent years. I/O and storage will continue to need improvement.
We are already seeing the market change how it drives evolution: less focus on performance, much more on power consumption. The drive will be more in the area of ubiquitous internet connectivity at video bandwidth. True machine intelligence will happen, possibly not by 2035, although it will be close enough in most situations. The trend towards specialized chips and specialized architectures (integration of memory and I/O into the processing unit, graphics processors) will account for most of the performance evolution in this decade and the next, giving a semblance of a continuation of Moore’s law. Costs will keep decreasing, but fueled by better manufacturing processes and larger volumes, not by further densification of the chip.
Supercomputers will keep increasing their capabilities, because certain areas have a virtually unlimited appetite for processing power, but exascale performance (we are not far from it) is extremely expensive and may well be achieved through worldwide cooperation, as happened for the LHC in Geneva.
One has to look at where today’s computers are not good enough, since that is where improvement efforts will come from. Areas clearly in need of improvement are:

  • energy efficiency
  • usability, reliability
  • security
  • buggy code and lame update syndrome

As noted, chip economics are already breaking down; whatever replaces today’s chips needs to be cheap.
These are the goals that will need to be pursued in the coming decades, and they are not easy to meet. It was also noted that some world crises, like climate change, could be better understood, and foreseen, with orders-of-magnitude higher processing capability. Technology will keep progressing, and the occurrence of critical conditions, like the sea flooding large portions of inhabited land, will put strong pressure on scientists to accelerate the growth of processing capacity to meet these challenges. Possibly, a joint international effort like the one that led to the construction of the LHC in Geneva may also be set up to create hyper processing capacity in the third decade of this century.
Calculating climate change (where will it be hot next summer?), forecasting the weather for next month, and mapping the precise track of a hurricane over the next two days are beyond present processing capabilities and will remain so in this decade. Progress may be expected in the following decade, but even so the goal will not be met by 2035 with the accuracy one might desire.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.