By 2020, around 20 billion connected devices will generate data at a rate that far outpaces our ability to process, store, manage and secure it with today’s technologies.
Simply put, our legacy systems won’t be able to keep up because the fundamental architecture of computing hasn’t changed in more than sixty years. It’s an architecture that will soon reach its physical and computational limits.
In today’s computers, as much as 90 per cent of their work is devoted to shuffling information between the separate tiers of memory and storage. Computers typically depend on several different data storage technologies simultaneously, each tier overcoming the limitations of the others.
Also, as we move from data generated by ‘systems of record’ (e.g. bookkeeping), through ‘systems of engagement’ (e.g. social media), to the approaching and much bigger ‘systems of action’ (e.g. device-generated data in the internet of things), we face a reality in which, in many cases, we cannot afford to bring the data back to the data centre for analysis.
Either the data is too big to move, or by the time we have finished processing it, its value is out of date. We need to consider processing data where it is created and bringing back the results of the analysis rather than the data itself.
Memory-driven computing - realising the concept
Hewlett Packard Enterprise (HPE) believes that only by redesigning the computer from the ground up - with memory at the centre - will we be able to overcome existing performance limitations. HPE calls this architecture ‘memory-driven computing’. Instead of one general central processor reaching out to multiple data stores, data is held in a non-volatile memory pool at the centre, to be accessed by multiple task-specific processors.
HPE has been exploring and developing this architecture under the banner of The Machine research project. Some key attributes of the memory-driven computing architecture are:
- Scalable - Designed to be almost infinitely scalable: small enough to fit in a sensor, large enough to replace a data centre or supercomputer. One of the most exciting properties of the architecture is its ability to accommodate vast amounts of memory; calculations indicate it can access any bit in 160 petabytes of memory in just a few hundred nanoseconds;
- Non-volatile memory pools - For fast, persistent and virtually unlimited memory. Memory-driven computing will collapse the traditional distinction between memory and storage, enabling fast manipulation of massive datasets in ways that are impossible today. Non-volatile memory combines the speed of DRAM with the cost benefits and non-volatility of flash memory, along with greater durability;
- Photonics for greater throughput - The copper-based electrical connections used today will be replaced with optical communication links, using micrometre-scale lasers on microchips to convert electrical signals to light, and back again. Photonic data transmission overcomes the distance restrictions between the processors and memory modules, offers far more data throughput than copper and promises to revolutionise computation itself by replacing energy-hungry electronic devices with photonic circuits;
- Memory-semantic fabric - This is at the heart of memory-driven computing. By using a single, high-performance, byte-addressable interconnect we can simplify programming and enable vast pools of shared memory. It handles all communication as simple memory operations, unlike storage access to disks, which is block-based and managed by complex, code-intensive software stacks (a minimal sketch contrasting the two access styles follows this list). The Gen-Z Consortium is a new industry group devoted to creating such an open interconnect standard, which is ideal for memory-driven computing;
- Task-specific processing - The memory-driven computing architecture was designed from the outset to support any type of processor: x86, GPU, DSP, FPGA and new types not yet commercialised, such as neuromorphic, optical or even quantum accelerators. This enables every compute task to be completed in the shortest possible time using the least amount of energy;
- Protected - Both the architecture and operating systems are being designed to support much higher levels of security and assurance than today’s commercial systems - protecting data from the silicon upwards: at rest, in use and in motion;
- Open - Memory-driven computing was designed as an open architecture in order to foster a vibrant innovation ecosystem and enable a wide range of companies to make components that can slot right in.
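As noted in the memory-semantic fabric item above, the fabric handles communication as plain loads and stores rather than block I/O. The minimal Python sketch below contrasts the two access styles on today’s hardware; it is only an analogy, not HPE code, and the file name and record layout are invented for illustration.

```python
import mmap
import struct

PATH = "records.bin"          # hypothetical data file
RECORD_FMT = "<qd"            # one record: 8-byte integer id + 8-byte float value
RECORD_SIZE = struct.calcsize(RECORD_FMT)

# Create a small demo file of 1,000 records.
with open(PATH, "wb") as f:
    for i in range(1000):
        f.write(struct.pack(RECORD_FMT, i, i * 0.5))

def read_block_style(index):
    """Block-style access: seek, issue an explicit read, then deserialise the buffer."""
    with open(PATH, "rb") as f:
        f.seek(index * RECORD_SIZE)
        buf = f.read(RECORD_SIZE)          # goes through the storage software stack
    return struct.unpack(RECORD_FMT, buf)

def read_memory_semantic_style(index):
    """Memory-semantic analogy: map the data and address the bytes directly."""
    with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
        return struct.unpack_from(RECORD_FMT, mem, index * RECORD_SIZE)  # a plain memory access

print(read_block_style(42))            # (42, 21.0)
print(read_memory_semantic_style(42))  # (42, 21.0)
```

On a memory-driven system the second style would be the norm: the fabric presents vast pools of non-volatile memory as ordinary addressable memory, so the seek, read and deserialise steps largely disappear.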
First proofs of concept
HPE has completed a range of activities to prove the viability of various aspects of the memory-driven computing architecture. The first proof-of-concept prototype, which was brought online in October 2016, showed the fundamental building blocks of the new architecture working together.
During the design phase of the prototype, simulations predicted that this architecture would outperform current computing by multiple orders of magnitude. New software programming tools, running on the large-memory systems available today, have already demonstrated execution speeds up to 8,000 times faster on a variety of workloads.
Some use cases
Research
The memory-driven computing architecture is a natural fit for graph analytics. It lets us easily store the full history - a snapshot of every state the graph has ever been in, if needed - which enables us to look at things that change over time.
These snapshots can also be used to establish a baseline (normal operation) and therefore easily identify any deviation. This has many potential applications in business analysis, for example, of an airline’s flight dataset. Equally, it can be applied to system data, for example, to identify an unfolding security breach, allowing action to be taken before any data is compromised.
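As a toy illustration of this snapshot-and-baseline idea (the graphs, threshold and scoring below are invented, not HPE’s implementation), keeping every state of a graph in memory makes it simple to derive a ‘normal’ baseline and flag later snapshots that deviate from it.

```python
# Toy sketch: keep every graph snapshot in memory, derive a baseline,
# and flag snapshots whose edge set deviates from that baseline.
history = []  # each snapshot is a frozenset of (source, destination) edges

def record_snapshot(edges):
    """Append the full current state of the graph to the in-memory history."""
    history.append(frozenset(edges))

def baseline(num_reference_snapshots):
    """Edges present in every one of the first N snapshots - treated as 'normal'."""
    return frozenset.intersection(*history[:num_reference_snapshots])

def deviation(snapshot, base):
    """Fraction of edges that differ from the baseline (symmetric difference)."""
    return len(snapshot ^ base) / max(len(base), 1)

# Hypothetical communication graphs: who talks to whom in three observation windows.
record_snapshot({("a", "b"), ("b", "c"), ("c", "a")})
record_snapshot({("a", "b"), ("b", "c"), ("c", "a")})
record_snapshot({("a", "b"), ("b", "c"), ("c", "a"), ("c", "x"), ("x", "y")})  # new, unusual edges

base = baseline(2)
for i, snapshot in enumerate(history):
    score = deviation(snapshot, base)
    print(f"snapshot {i}: deviation {score:.2f} -> {'DEVIATES' if score > 0.5 else 'normal'}")
```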
Finance
The financial industry relies heavily on simulations - which are very compute-intensive - to value and analyse complex instruments and to manage risk. This forces financial institutions to make a trade-off between accuracy and speed. A memory-driven computing environment enables faster calibration of complex financial models, for quicker (hours down to seconds) and more accurate projections over time.
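To make the accuracy-versus-speed trade-off concrete, here is a generic Monte Carlo option-pricing sketch (not HPE’s or any institution’s model; all parameters are invented): more simulated paths gives a more accurate estimate but proportionally more compute, which is exactly the trade-off that large memory pools and task-specific processors are intended to relax.

```python
import math
import random

def monte_carlo_call_price(spot, strike, rate, vol, maturity, n_paths, seed=0):
    """Estimate a European call price by simulating terminal prices under geometric Brownian motion."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                        # one random shock per simulated path
        terminal = spot * math.exp(drift + diffusion * z)
        payoff_sum += max(terminal - strike, 0.0)
    return math.exp(-rate * maturity) * payoff_sum / n_paths

# Invented parameters: more paths -> more accuracy, but more compute time.
for n_paths in (1_000, 100_000):
    price = monte_carlo_call_price(spot=100, strike=105, rate=0.01,
                                   vol=0.2, maturity=1.0, n_paths=n_paths)
    print(f"{n_paths:>7} paths -> estimated price {price:.4f}")
```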
Healthcare
Today’s medicine is based on treating the ‘average patient’ but, in reality, everyone is different and responds differently to treatments. Memory-driven computing will enable us to see patterns in new ways, to predict disease and to diagnose it accurately right away; it can run near-infinite permutations of complex datasets. A patient’s historical medical record can be compared with a million other records, offering more precise treatment than is available today.
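As a deliberately simplified sketch of that record-comparison idea (hypothetical feature vectors, not a clinical method), holding the whole cohort in memory reduces ‘find the most similar patients’ to a distance computation over the full dataset.

```python
import heapq
import random

random.seed(1)

# Hypothetical in-memory cohort: each record is (patient_id, feature_vector).
# Features might be normalised lab values or vital signs; scaled down here to
# 100,000 records so the sketch runs quickly - the memory-driven case targets millions.
cohort = [(f"patient-{i}", [random.random() for _ in range(8)])
          for i in range(100_000)]

def most_similar(new_features, records, top_k=3):
    """Return the top_k records closest to new_features (squared Euclidean distance)."""
    def distance(record):
        return sum((a - b) ** 2 for a, b in zip(new_features, record[1]))
    return heapq.nsmallest(top_k, records, key=distance)

new_patient = [random.random() for _ in range(8)]
for patient_id, _ in most_similar(new_patient, cohort):
    print("similar case:", patient_id)
```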
In summary
Memory-driven computing is still in its early days, but it is not a fairy tale: its arrival is imminent, and it is a potential game-changer. Individual technologies such as photonics and the memory fabric are already being integrated into the development paths of current products to provide a step change in their capabilities.
The real gains will come from understanding the full implications of the new architecture and changing the way that we program for it.
So if you are concerned about how to handle and process current or future data, it is worth exploring how memory-driven computing might provide a more powerful and cost-effective way to achieve your objectives.