As Moore’s law slows down on the performance axis too (it already stalled on the economics axis around 2015), researchers are looking for alternatives to keep improving performance, both by searching for materials to replace silicon (notably 2D materials like graphene and molybdenum disulphide) and by exploring alternative architectures.
This is the case for IBM scientists, who have just published a paper describing a computation structure that departs from the von Neumann architecture used in today’s computers. The von Neumann paradigm separates memory from the processor, so data have to be shuttled back and forth between the two, slowing down computation.
In the approach followed by the IBM scientists, computation takes place directly in the memory (see clip).
There is a catch (well, there is always a catch, isn’t there?): when you perform in-memory computation you destroy the original data, and you cannot go back. That sounds bad, but in reality this is what happens in our brain. Whenever we think, we use data stored in our neuronal mesh and at the same time we change them, ever so slightly (most of the time thinking reinforces those “data”, but sometimes we create new aggregations that change them).
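To see why in-memory computation is destructive, here is a toy sketch in Python (this is an illustration of the general idea, not IBM’s actual phase-change hardware): each memory cell accumulates whatever is written into it, much like the conductance of a memory element drifting as it is used, so computing a sum in place overwrites the original stored operand.

```python
# Toy model of destructive in-memory computation (hypothetical, for illustration):
# each "cell" accumulates the values added into it, so the result of the
# computation overwrites the value that was originally stored there.

class AccumulatingCell:
    def __init__(self, value=0.0):
        self.value = value          # the cell's only state

    def accumulate(self, x):
        self.value += x             # compute *inside* the memory cell


def in_memory_sum(cell, data):
    """Sum a data stream directly into a memory cell."""
    for x in data:
        cell.accumulate(x)
    return cell.value               # the sum now lives where the operand was


cell = AccumulatingCell(10.0)       # the cell originally stores 10
result = in_memory_sum(cell, [1.0, 2.0, 3.0])
print(result)                       # 16.0
print(cell.value)                   # also 16.0: the original 10 is gone
```

Reading the cell back now yields the result, not the original value, which is exactly the trade-off described above.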
To avoid this problem, the IBM scientists propose a structure where in-memory computing sits alongside the usual von Neumann computing, reserving the new paradigm for specific computations. Guess which ones! Deep learning and, more generally, AI workloads.
Their “chip” can compute 200 times faster than a normal processor. They tested its capabilities and performance on two problems: recognising an image of Alan Turing (a good choice!) encoded as a matrix of one million pixels, and assessing, from a weather-forecasting point of view, the relevance of data coming from 270 weather stations in the US over a period of six months, sampled at one-hour intervals.
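The weather-station experiment boils down to spotting which data streams move together. A conventional software sketch of that task (ordinary Python, not the in-memory hardware, and with made-up station readings purely for illustration):

```python
# Conventional software sketch of the correlation-detection task:
# find which of several data streams are correlated with one another.
# Station names and readings below are invented for the example.
import statistics


def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


stations = {
    "A": [10, 12, 14, 13, 15, 17],
    "B": [11, 13, 15, 14, 16, 18],   # tracks A closely: highly correlated
    "C": [5, 20, 3, 18, 2, 21],      # essentially noise relative to A
}

print(round(correlation(stations["A"], stations["B"]), 2))  # 1.0
print(round(correlation(stations["A"], stations["C"]), 2))
```

Running this pairwise comparison over 270 stations, hour by hour for six months, is exactly the kind of bulk, memory-bound arithmetic the in-memory approach is claimed to accelerate.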