Looking ahead to 2050 – Stretching Moore’s Law

If the shrinking of transistor size has reached its practical physical limits (note that the theoretical limits are still far away, but in the present manufacturing environment they are not “practical”), it is also true that researchers are looking for alternatives to present manufacturing approaches.

The first is to look into different materials that can substitute for silicon.

When working with silicon we work with “clumps” of atoms. When we say that a transistor has now reached a dimension of 30 nm (roughly speaking, the best etching today is 10 nm, and since you need a P-N-P structure the actual dimension is 30 nm), this refers to just one dimension, but the transistor actually extends in three dimensions. Even keeping it at the same “size” along all three dimensions (which is normally not the case, since along the third dimension, the Z axis, it is thicker), this would mean using some thousands of silicon atoms for each single transistor.
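As a rough sanity check, here is a minimal back-of-envelope sketch (in Python, assuming a solid cube of bulk silicon; real devices are far more complex in geometry and composition, so this only indicates the order of magnitude):

```python
# Back-of-envelope: how many silicon atoms fit in a cube-shaped feature?
# Assumptions: bulk silicon, diamond cubic lattice, solid cube -- a deliberate
# over-simplification of real transistor geometry.
A_NM = 0.543                    # silicon lattice constant, in nanometres
ATOMS_PER_CELL = 8              # atoms per diamond cubic unit cell
atoms_per_nm3 = ATOMS_PER_CELL / A_NM**3   # ~50 atoms per cubic nanometre

side_nm = 30                    # the feature size discussed above
total_atoms = atoms_per_nm3 * side_nm**3

print(f"~{atoms_per_nm3:.0f} atoms per nm^3")
print(f"~{total_atoms:,.0f} atoms in a {side_nm} nm cube")
```

Even with generous simplifications the count runs from many thousands up to the million range, a far cry from the single-molecule devices demonstrated in the labs.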

In the labs there have been demonstrations of transistor “functionality” using a single molecule (a few atoms), but an industrial manufacturing process results in several thousands of atoms for a single transistor.

Significant progress is expected by moving from silicon to 2D materials, like graphene and molybdenum disulfide, that can be shaped as a single layer of atoms. The possibility of creating Schottky junctions using these materials has been demonstrated, meaning that it is possible to create 2D transistors.

Interestingly, 2D materials may also operate with surface electrons, plasmons. This tremendously reduces power and dissipation (they operate like superconductors), which in principle can lead to higher density, higher speed and lower consumption.

The problem is how to create seamless 2D layers through high-yield industrial processes. This is the goal of the Graphene Flagship project, funded by the EU over a seven-year period, whose results are starting to come in but whose success will only become measurable in the next decade.

A massive substitution of silicon with these 2D structures is unlikely in the next decade; it is probably something for the fourth and fifth decades of this century, if all goes well.

We cannot wait for 2D structures to replace silicon; some alternative route to evolution has to be found. Computation today is lagging behind: most data collected today are discarded within 3 hours because we do not have enough memory to store them. This is known as the von Neumann bottleneck, since von Neumann stated 60 years ago that the lack of memory is what limits computation.

Also interesting is the observation that processors, today as yesterday, sit idle 90% of the time waiting for data to be transferred from memory to processor and vice versa.
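The effect is easy to reproduce in software. The toy measurement below (a sketch, not a rigorous benchmark; the array sizes and names are made up for illustration) performs exactly the same arithmetic on the same data, first touching memory in a cache-friendly order and then in a random order, so that any slowdown is due purely to data movement:

```python
# Toy illustration of the memory bottleneck: identical arithmetic, different
# access patterns. The random pattern defeats the caches, so the processor
# spends most of its time waiting for data rather than computing.
import time
import numpy as np

N = 20_000_000
data = np.random.rand(N)              # ~160 MB of float64, far larger than CPU caches
seq_idx = np.arange(N)                # sequential (cache-friendly) access order
rnd_idx = np.random.permutation(N)    # random access order: same work, poor locality

t0 = time.perf_counter()
s_seq = data[seq_idx].sum()
t1 = time.perf_counter()
s_rnd = data[rnd_idx].sum()
t2 = time.perf_counter()

print(f"sequential access: {t1 - t0:.3f} s")
print(f"random access    : {t2 - t1:.3f} s  (slower only because data arrives late)")
```

The two sums are identical; only the time spent fetching the operands changes.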

Hence the next steps are focusing on:

  • moving storage from outside the chip (DRAM/disk) to the inside
  • changing the connections from copper to light guides – optical connections between processor and storage
  • moving memory to the center of computation, with all processors around the memory performing computation on the memory itself (see figure)


The Rebooting Computing initiative, launched by the IEEE Future Directions Committee, is fostering collaboration among several companies to create memory-centric computation.

Specialised machines should satisfy the three pillars for future evolution: Energy Efficiency, Security and Interfacing. The push towards more performing architectures is being pursued by looking at adiabatic computation, new CMOS architectures, and non-von Neumann computation like quantum computers. The recent Google announcement of a 1000-qubit quantum computer just around the corner does not seem realistic, but we just need to wait and see. It should appear in 2017; some are questioning the actual feasibility, not just the timing.

Some sort of quantum computation is already available, and there is probably no clear-cut line defining whether there is quantum computation or not; the boundary is fuzzy. So we can expect more effective solutions coming on that boundary, in this decade and the next, whilst an indisputable quantum computer may take a bit longer to become available. Also, it is likely that we won’t have a general-purpose quantum computer as we are used to having a general-purpose PC. It is more likely that we will get a variety of quantum computers, each specialised for certain kinds of computations.

The studies on approximate computation that seek inspiration from the brain are creating mixed feelings among experts, and there is a broad range of opinions.

An example of specialised computational machines is molecular computing. It is not the sort of computer you would use to compute 2+2, since it would give a result like “somewhere around 4, but also somewhere around 5”. Notice that the answer is not wrong; it is just not as accurate as we would expect for such a straightforward calculation.
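A toy simulation conveys the flavour (this is a numerical caricature, not real chemistry; the molecule counts, the 5% readout loss and the function names are assumptions made purely for illustration):

```python
# Caricature of "molecular" addition: each number is encoded as a small population
# of molecules, the populations are mixed, and the result is read back as a noisy
# count -- so 2 + 2 comes out as "somewhere around 4", occasionally closer to 5.
import numpy as np

rng = np.random.default_rng(0)
MOLECULES_PER_UNIT = 10          # assumed encoding: number n -> roughly n*10 molecules

def encode(n):
    # Poisson noise models imperfect dispensing of the reagent
    return rng.poisson(n * MOLECULES_PER_UNIT)

def molecular_add(a, b, trials=5):
    results = []
    for _ in range(trials):
        mixed = encode(a) + encode(b)            # "mixing" the two reservoirs
        readout = rng.poisson(mixed * 0.95)      # assumed 5% loss plus counting noise
        results.append(readout / (MOLECULES_PER_UNIT * 0.95))
    return results

print([round(x, 2) for x in molecular_add(2, 2)])
# typical output: values scattered around 4, sometimes drifting towards 3 or 5
```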

Molecular computing is already extensively used in genetics and bioengineering, for example in labs-on-a-chip: chips made of (micro)pipes and reservoirs, where molecules and enzymes interact to generate a “result”.

Genome sequencing makes use of molecular computing (even though, in the end, the results obtained by massively parallel molecular computation are used as input to a classical von Neumann computer that generates the actual sequence by interpreting the intermediate results).


Problems that are not suited to algorithmic computation, like the travelling salesman problem, can be addressed satisfactorily by molecular computers.
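The classic demonstration is Adleman’s 1994 DNA experiment, where billions of strands encode candidate routes in parallel and chemistry filters out the bad ones. The sketch below mimics that generate-and-filter idea in ordinary software (the 6-city distance matrix, the pool size and every name are invented for illustration; a real molecular computer would explore the candidates chemically, not in a Python loop):

```python
# Software caricature of the molecular "generate massively in parallel, then filter"
# approach to the travelling salesman problem: each random permutation stands in
# for one strand in the tube.
import itertools
import random

random.seed(1)
n = 6                                           # hypothetical 6-city instance
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(10, 99)   # symmetric random distances

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

pool = [random.sample(range(n), n) for _ in range(100_000)]   # "synthesize" candidates
best = min(pool, key=tour_length)                             # "filter" down to the best

print("best tour found:", best, "length:", tour_length(best))
optimum = min(itertools.permutations(range(n)), key=tour_length)
print("exact optimum  :", list(optimum), "length:", tour_length(list(optimum)))
```

For six cities the random pool almost certainly contains the true optimum; the interesting part is that the answer emerges from brute parallelism and filtering rather than from a step-by-step algorithm.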

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a master’s course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.