Processing speed has been increasing with amazing regularity, following Moore's law, and today we can crunch on a laptop what required a supercomputer just 20 years ago. However, the amount of data is growing even faster, and extracting meaning from these data, the Big Data, requires a crunching capacity that is increasing more quickly than processing speed itself!
Hence the interest in this news: a team of researchers at MIT has found a way to leapfrog today's processing capacity in the area of crunching Big Data.
If data are stored on conventional hard drives, the access time is measured in ms. If, on the other hand, the data are stored on flash memory, the access time is on the order of µs, that is, 1,000 times faster. This is what the researchers have exploited. Each flash storage unit is connected, on the same board, to a Field-Programmable Gate Array (FPGA) chip that can retrieve the data and perform some operations on them.
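To get a feel for what that factor of 1,000 means in practice, here is a back-of-the-envelope comparison. The per-access figures are rough, illustrative values (roughly 10 ms for a random disk access, roughly 10 µs for flash), not measurements from the MIT prototype:

```python
# Rough, illustrative access times (assumed figures, not measured values)
HDD_ACCESS_S = 10e-3    # ~10 ms per random access on a spinning disk
FLASH_ACCESS_S = 10e-6  # ~10 µs per random access on flash

accesses = 1_000_000    # one million random reads

hdd_time = accesses * HDD_ACCESS_S      # total seconds on disk
flash_time = accesses * FLASH_ACCESS_S  # total seconds on flash

print(f"HDD:   {hdd_time:,.0f} s (~{hdd_time / 3600:.1f} h)")
print(f"Flash: {flash_time:,.0f} s")
print(f"Speed-up: {hdd_time / flash_time:,.0f}x")
```

A million scattered reads, the typical pattern when you query Big Data rather than scan it sequentially, take hours on disk but seconds on flash, which is exactly the gap the researchers are putting to work.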
The boards are connected to one another through a very fast, low-latency serial network that lets data move from one node to another in ns.
They have developed a prototype storage network with 16 nodes and a capacity of up to 32 TB. That would provide good crunching support to people who have to analyse Big Data. The idea is that you harvest data from several sources, store them in this storage-and-crunching device for your analyses and then, once done, move the data to more conventional storage. It is a bit like using a cache in a computer for faster processing. Indeed, we are already seeing this kind of approach in mass-market products, such as the latest Mac Pro from Apple.
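The harvest, crunch, and archive workflow described above can be sketched in a few lines. This is only a toy illustration of the cache-like staging idea; every class, method, and dataset name here is hypothetical, not part of the MIT system:

```python
# Illustrative sketch of tiered staging: a fast flash tier for analysis,
# cheap bulk storage for the long term. All names are hypothetical.

class StorageTier:
    """A named storage tier holding datasets in a dict (a stand-in for real storage)."""
    def __init__(self, name):
        self.name = name
        self.data = {}

def analyze(dataset):
    # Stand-in for the actual crunching: just count the records
    return len(dataset)

flash = StorageTier("flash-fpga-network")  # fast, small, expensive
archive = StorageTier("magnetic-disks")    # slow, large, cheap

# 1. Harvest data from several sources into the fast tier
flash.data["sensor-logs"] = [f"record-{i}" for i in range(1000)]

# 2. Crunch while the data sits on flash
result = analyze(flash.data["sensor-logs"])

# 3. Once done, move the data out to conventional storage
archive.data["sensor-logs"] = flash.data.pop("sensor-logs")

print(result)        # 1000
print(flash.data)    # {} -- the flash tier is freed for the next job
```

The design point is the same one a CPU cache makes: the expensive fast tier is kept small and constantly recycled, while the cheap tier holds everything permanently.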
The catch, obviously, is the cost. Magnetic disc storage is way cheaper than flash memory storage, but we know that technology evolution is bringing the cost of flash down very rapidly.