In-memory computing for AI

The new in-memory computing chip is based on PCM devices made from a germanium antimony telluride alloy, stacked and sandwiched between two electrodes. Applying a tiny electric current heats the material and switches its state from amorphous (a disordered atomic arrangement) to crystalline (an ordered atomic configuration). The IBM scientists used these crystallization dynamics to perform computation in memory. Credit: IBM Research

As Moore’s law slows down on the performance axis too (it already stalled on the economics axis in 2015), researchers are looking at alternatives to keep improving performance, both by searching for materials that could replace silicon (notably 2D materials such as graphene and molybdenum disulphide) and by exploring alternative architectures.

This is the case with IBM scientists, who have just published a paper describing a computation architecture that is an alternative to the von Neumann architecture used in today’s computers. The von Neumann paradigm separates memory from the processor, so data have to be shuffled back and forth between the two, slowing down computation.
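To make that bottleneck concrete, here is a toy cost model in Python. The latency figures are invented for illustration; they are not measurements from any real system:

```python
# Toy cost model of the von Neumann bottleneck described above.
# Both latency numbers are assumptions made up for this sketch.

MEM_ACCESS_NS = 100  # assumed round trip to off-chip memory
OP_NS = 1            # assumed cost of the arithmetic itself

def von_neumann_ns(n_values):
    """Each operation fetches its operand and writes the result back."""
    return n_values * (2 * MEM_ACCESS_NS + OP_NS)

def in_memory_ns(n_values):
    """The memory cells compute in place: no shuttling of operands."""
    return n_values * OP_NS

n = 1_000_000
print(von_neumann_ns(n) / in_memory_ns(n))  # data movement dominates the cost
```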

In the approach followed by the IBM scientists, computation takes place directly in the memory (see clip).

There is a catch (well, there is always a catch, isn’t there?): if you perform in-memory computation you destroy the original data, and you cannot go back. That sounds bad, but in reality this is what happens in our brain. Whenever we think, we are using data stored in our neuronal mesh and at the same time we are changing them, even if ever so slightly (most of the time thinking reinforces those “data”, but sometimes we create new aggregations that change them).
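As a rough illustration of why this kind of computation is destructive, here is a toy Python model of a phase-change cell; the class and its methods are my own invention for the sketch, not IBM’s device interface. Each pulse partially crystallizes the material, so the running result overwrites whatever was stored before:

```python
# A toy model of destructive in-memory computation. PCMCell and its
# methods are invented for illustration, not IBM's actual interface.

class PCMCell:
    """Each programming pulse partially crystallizes the material,
    raising its conductance. The running result IS the device state,
    so computing irreversibly overwrites whatever was stored before."""

    def __init__(self):
        self.conductance = 0.0  # 0.0 = fully amorphous (reset state)

    def pulse(self, step=0.1):
        # Crystallization accumulates and saturates; the sequence of
        # pulses cannot be recovered from the final state.
        self.conductance = min(1.0, self.conductance + step)

    def read(self):
        return self.conductance

cell = PCMCell()
for bit in [1, 0, 1, 1, 0, 1]:  # a data stream we want to "count" in place
    if bit:
        cell.pulse()

print(cell.read())  # ~0.4: the count survives, the original bits do not
```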

To avoid this problem, the IBM scientists propose a hybrid structure where in-memory computing flanks the usual von Neumann computing, reserving the new paradigm for specific computations. Guess which ones: deep learning and, more generally, AI computation!
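One way to picture that division of labour is the sketch below, in plain NumPy and under my own assumptions (the noise model and the Jacobi-style correction are illustrative stand-ins, not the method in the paper): an imprecise “in-memory” unit does the heavy arithmetic, while the exact conventional unit computes residuals and keeps the answer honest.

```python
# Sketch of a hybrid split: a noisy "analog" unit does approximate
# corrections, the conventional unit computes exact residuals.
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.random((n, n)) + n * np.eye(n)  # a well-conditioned system
b = rng.random(n)

def analog_correction(A, r, noise=0.05):
    """Stand-in for the analog unit: a fast but noisy approximate solve."""
    z = r / np.diag(A)                            # crude Jacobi-style guess
    return z * (1 + noise * rng.standard_normal(n))  # assumed analog noise

x = np.zeros(n)
for _ in range(25):
    r = b - A @ x                    # exact residual, von Neumann side
    x = x + analog_correction(A, r)  # imprecise update, analog side

print(np.linalg.norm(A @ x - b))  # small: iteration recovers the precision
```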

Their “chip” can compute 200 times faster than a normal processor. They tested its capabilities and performance on two problems: recognising an image of Alan Turing (a good choice!) encoded in a matrix of a million pixels, and assessing the relevance, for weather forecasting, of the data coming from 270 weather stations in the US over a period of six months at one-hour intervals.
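That second test is, at heart, a correlation-detection task: find which streams of measurements tend to fire together. The sketch below re-creates its flavour in plain NumPy on conventional hardware; all the parameters (20 streams, the firing rates, the 0.6 coupling) are invented for the sketch, not taken from the experiment.

```python
# A rough, conventional re-creation of the correlation-detection idea:
# correlated streams accumulate faster, just as the conductance of the
# corresponding PCM cells would. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_streams, n_steps = 20, 5000

# Binary event streams: streams 0-4 share a hidden common driver,
# the remaining 15 fire independently.
data = rng.random((n_streams, n_steps)) < 0.05
common = rng.random(n_steps) < 0.05
data[:5] |= common & (rng.random((5, n_steps)) < 0.6)

# One accumulator per stream (the role the cell conductance plays):
# whenever a stream fires, it accumulates the number of streams firing
# at the same instant, so mutually correlated streams grow fastest.
activity = data.sum(axis=0)
score = (data * activity).sum(axis=1) / data.sum(axis=1)

print(np.argsort(score)[-5:])  # the five correlated streams rank highest
```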

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and he then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.