Carbon nanotubes powering the chip of the future

The architecture of the chip created by MIT and Stanford, based on carbon nanotubes. It comprises four vertical layers. Top (fourth layer): sensors and more than one million carbon-nanotube field-effect transistor (CNFET) logic inverters; third layer, on-chip non-volatile RRAM (1 Mbit memory); second layer, CNFET logic with classification accelerator (to identify sensor inputs); first (bottom) layer, silicon FET logic. Credit: Max M. Shulaker et al./Nature

As Moore’s law has come to a screeching halt, first in economic terms (around 2015) and now in terms of the physical limits of silicon transistors, researchers are looking for new ways to keep increasing chip performance without relying on further shrinking of the transistors on the chip.

A collaboration between MIT and Stanford aims at increasing chip performance by using carbon nanotubes (thus sidestepping the constraints posed by silicon) and by integrating the processing and the storage parts on the same chip. The latter point matters because the growing volume of data to be processed, and hence shuffled between the storage and the processing engine, is creating a bottleneck in the transfer of data between the processing chip and the storage chip (note that storing data on a chip, a kind of flash memory much faster than magnetic media, is today a standard solution for top-of-the-line computers, although a more expensive one than magnetic storage).
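To get a feel for why this bottleneck matters, here is a rough back-of-the-envelope sketch. The bandwidth and compute figures in it are purely illustrative assumptions of mine, not measurements of this chip:

```python
# Back-of-the-envelope estimate of the "memory wall": for data-heavy workloads
# the time spent moving bytes between an off-chip memory and the processor can
# dwarf the time spent actually computing on them.
# All figures below are illustrative assumptions, not measured values.

def processing_time(data_bytes, ops_per_byte, bus_bandwidth_bps, compute_ops_per_s):
    """Return (transfer_seconds, compute_seconds) for a simple streaming workload."""
    transfer_s = data_bytes / bus_bandwidth_bps                   # time to shuffle data over the bus
    compute_s = (data_bytes * ops_per_byte) / compute_ops_per_s   # time to process it
    return transfer_s, compute_s

# Hypothetical example: 1 GB of sensor data, 10 operations per byte,
# a 10 GB/s off-chip bus versus a 1 TOPS processing engine.
transfer_s, compute_s = processing_time(
    data_bytes=1e9,
    ops_per_byte=10,
    bus_bandwidth_bps=10e9,
    compute_ops_per_s=1e12,
)
print(f"transfer: {transfer_s:.3f} s, compute: {compute_s:.3f} s")
# With these (made-up) numbers the transfer takes ~0.1 s while the compute takes
# ~0.01 s, i.e. the processor sits idle waiting for data most of the time.
```

Co-locating storage and processing on the same chip attacks exactly the first of those two terms, which is why the researchers consider it as important as the move to carbon nanotubes.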

The researchers report their results in a recently published paper in Nature.

They have created a chip based on a 3D architecture that integrates 1 million RRAM (Resistive Random Access Memory) cells with 2 million carbon nanotubes, each implementing a field-effect transistor. As of 2017, this is the most advanced chip based on carbon nanotubes used as transistors. Clearly we are far from the densities that are now “normal” for silicon chips, where the transistor count exceeds 1 billion (the Intel Broadwell-EP Xeon has 7.2 billion transistors and the Intel Stratix 10, a field-programmable gate array, has 30 billion), but the result is important because it shows that using nanotubes is feasible and that they can be used in complex chip architectures (remember the first silicon chips? The famous Intel 8080 had 4,500 transistors…).

The chip manufacturing process is based on sequential layering. Credit: MIT-Stanford

The chip is manufactured by sequentially layering CNFETs (Carbon Nanotube Field Effect Transistors) used for sensing and processing, over RRAM used for storage, over another layer of CNFETs used for processing. This latter layer connects to a silicon layer that can also process data and that manages the interface with other chips. This bottom layer is important because it provides a seamless way to interface with existing chips. Sandwiching the RRAM between the two CNFET layers provides very efficient access to data. Actually, this architecture reminds me a bit of our retina, with three layers of neurons acting as sensing, local storage and processing, which then connect to the optic nerve (a “standard” bus to exchange data).
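To make the layered dataflow a bit more concrete, here is a minimal conceptual sketch of how data could flow through the four layers. The class and method names, the addressing scheme and the toy threshold classifier are all my own illustration; the real chip implements this in hardware, not software:

```python
# Conceptual model of the four-layer stack described above: a sensing layer of
# CNFETs writes readings straight into the RRAM layer underneath it, a second
# CNFET layer reads those values back to run a classification step, and the
# bottom silicon layer presents the result to the rest of the system.
# Names and structure are illustrative only.

class RRAMLayer:
    """Layer 3: on-chip non-volatile memory shared by the layers above and below."""
    def __init__(self):
        self.cells = {}

    def write(self, address, value):
        self.cells[address] = value

    def read(self, address):
        return self.cells.get(address)


class SensingLayer:
    """Layer 4: CNFET sensors that store raw readings directly into RRAM."""
    def __init__(self, rram):
        self.rram = rram

    def capture(self, readings):
        for address, value in enumerate(readings):
            self.rram.write(address, value)
        return len(readings)


class ClassificationLayer:
    """Layer 2: CNFET logic that reads the stored readings and classifies them."""
    def __init__(self, rram):
        self.rram = rram

    def classify(self, n_readings, threshold=0.5):
        values = [self.rram.read(a) for a in range(n_readings)]
        # Toy "accelerator": flag the sample if the average reading crosses a threshold.
        return "gas detected" if sum(values) / n_readings > threshold else "clean"


class SiliconLayer:
    """Layer 1: conventional silicon logic interfacing with the outside world."""
    @staticmethod
    def report(result):
        return f"chip output: {result}"


# The data never leaves the stack until the final result is reported downwards.
rram = RRAMLayer()
n = SensingLayer(rram).capture([0.2, 0.7, 0.9, 0.6])
result = ClassificationLayer(rram).classify(n)
print(SiliconLayer.report(result))   # -> chip output: gas detected
```

The point of the sketch is simply that every layer talks to the RRAM in the middle rather than to an external memory, which is what removes the transfer bottleneck discussed earlier.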

In this first implementation the sensing part of the chip has been customised to perform gas analysis, but it could be adapted to many other sensing needs, such as breath analysis for early detection of a number of diseases, including some forms of cancer.

Looking ahead, this chip is important because of its efficiency (low power consumption thanks to the carbon nanotubes and fast processing thanks to the co-location of storage and processing), which makes it very interesting for AI applications. It can be specialised to serve specific goals, and in that sense a million bits stored and processed by 2 million transistors may be just fine. Of the billions of transistors we have on our silicon chips today, only a tiny fraction is actually used for any specific task.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.