As Moore’s law has come to a screeching halt, first in economic terms (around 2015) and now in terms of the physical limits of silicon-based chips, researchers are looking into new ways to keep increasing chip performance without relying on further shrinking of the transistors on the chip.
A collaboration between MIT and Stanford aims to increase chip performance by using carbon nanotubes (thus removing the constraints posed by silicon) and by integrating processing and storage in the same chip. The latter matters because the growing volume of data to be processed, and hence shuffled back and forth, creates a bottleneck in the transfer of data between the processing chip and the storage chip (note that storing data on a chip, i.e. flash memory, far faster than magnetic media, is today a standard solution for top-of-the-line computers, although a more expensive one than magnetic storage).
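To get a feel for why this bottleneck matters, here is a back-of-envelope sketch in Python. The bandwidth figures are illustrative assumptions of mine (roughly the order of magnitude of an off-chip DRAM link versus an on-chip data path), not measurements from the paper:

```python
# Illustrative sketch of the off-chip memory bottleneck.
# Bandwidth numbers are assumptions for the sake of the example.

def transfer_time_s(data_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Time needed to shuttle a working set across a memory link."""
    return data_bytes / bandwidth_bytes_per_s

working_set = 1e9  # 1 GB of data to process

off_chip = transfer_time_s(working_set, 25e9)  # assumed ~25 GB/s off-chip link
on_chip = transfer_time_s(working_set, 1e12)   # assumed ~1 TB/s on-chip path

print(f"off-chip: {off_chip * 1e3:.1f} ms, on-chip: {on_chip * 1e3:.1f} ms")
print(f"speedup from co-locating storage: {off_chip / on_chip:.0f}x")
```

Under these assumed numbers, simply co-locating storage with processing cuts the data-movement time by an order of magnitude or more, regardless of how fast the processor itself is.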
The researchers reported their results in a paper recently published in Nature.
They have created a chip based on a 3D architecture integrating 1 million RRAM (resistive random-access memory) cells with 2 million carbon nanotubes, each implementing a field-effect transistor. As of 2017, this is the most advanced chip using carbon nanotubes as transistors. Clearly we are far from the densities that are now “normal” for silicon chips, where the transistor count exceeds 1 billion (the Intel Broadwell EP-Xeon has 7.2 billion transistors, and the Intel Stratix 10, a field-programmable gate array, has 30 billion), but the result is important because it shows that nanotubes are feasible and can be used in complex chip architectures (remember the first silicon chips? The famous Intel 8080 had 4,500 transistors…).
The chip is manufactured by layering CNFETs (carbon nanotube field-effect transistors) used for sensing and processing over RRAM used for storage, which in turn sits over CNFETs used for processing. This bottom layer connects to a silicon layer that also processes data and manages the interface with other chips. This last layer is important because it provides a seamless way to interface with existing chips. The sandwiching of the RRAM between the two CNFET layers provides very effective access to data. This architecture reminds me a bit of our retina’s architecture, with three layers of neurons acting as sensing, local storage and processing, then connecting to the optic nerve (a “standard” bus to exchange data).
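The layered dataflow can be sketched as a toy model in Python. The layer names follow the article (CNFET sensing, RRAM storage, CNFET processing, silicon interface); the functions themselves are illustrative stand-ins of mine, not anything from the paper:

```python
# Toy model of the chip's layered dataflow: CNFET sensors write into
# RRAM cells directly below them, a CNFET processing layer reads those
# cells in place (no off-chip hop), and a silicon layer formats the
# result for conventional chips. Purely illustrative.

def sense(samples):
    """CNFET sensing layer: capture raw readings (here, plain numbers)."""
    return list(samples)

def store(rram, readings):
    """RRAM layer: each reading lands in the memory cell beneath its sensor."""
    for i, r in enumerate(readings):
        rram[i] = r
    return rram

def process(rram, n):
    """CNFET processing layer: aggregate the stored readings in place."""
    return sum(rram[i] for i in range(n)) / n

def interface(result):
    """Silicon layer: expose the result to the outside world."""
    return {"mean_reading": result}

rram = {}
readings = sense([0.2, 0.4, 0.6])
store(rram, readings)
out = interface(process(rram, len(readings)))
print(out)
```

The point of the sketch is the locality: `process` reads the same structure `store` wrote into, which is exactly the property that removes the storage-to-processor transfer from the critical path.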
In this first implementation the sensing part of the chip has been customised to perform gas analysis, but it could be adapted to many other sensing needs, such as breath analysis for early detection of a number of diseases, including some forms of cancer.
Looking ahead, this chip is important because of its efficiency (low power consumption thanks to the carbon nanotubes, and fast processing thanks to the co-location of storage and processing), which makes it very interesting for AI applications. It can be specialised to serve specific goals, and in that sense a million bits stored and processed by 2 million transistors may be just fine. Of the billions of transistors on our silicon chips today, only a tiny fraction is actually used for any specific task.