What is the future of in-memory computing?

By Nikita Ivanov | 5 September 2017

Once considered too expensive for all but high-performance, high-value applications, in-memory computing (IMC) is now moving into the mainstream. Almost every organisation today has high-performance application requirements and, according to Gartner, IMC technologies provide a cost-effective path to creating modern, scalable, real-time applications. The analyst firm predicts that the market for in-memory data grids, just one segment of the IMC industry, will reach $1 billion by 2018. Enterprises that want to take advantage of IMC and maximise the benefit they derive from their deployments must understand what the technology has to offer today, as well as where it is heading.

“In-Memory Computing: Now and Tomorrow,” a new white paper from GridGain Systems, will help you understand how in-memory computing platforms such as Apache® Ignite™ and GridGain can address:

a) Use cases driven by digital transformation initiatives

b) The demands of IoT and machine learning for real-time processing

c) The challenges of petabyte-scale in-memory computing applications

d) Ways to leverage the latest developments in RAM and storage technologies, such as non-volatile memory

The maturing of the IMC market has been propelled by two key factors. First, the cost of memory has dropped roughly 30 percent per year since the 1960s, making it affordable for a much wider range of use cases. This affordability has upended a storage paradigm that held sway for more than 50 years: organisations can now consider a memory-first architecture instead of always thinking disk-first. The lower cost of memory has also inspired vendors to deliver more capable IMC solutions that are easier to deploy and use, and that can be customised for a wide range of potential use cases.

The needs of potential use cases are the second compelling factor behind the rise of IMC. Businesses are dealing with a large increase in the amount of data they produce, collect and consume. Customer and employee activity and social media have turned data streams into floods, and we’ve only just begun. The onslaught of data from the growing number of smart devices connected to the Internet of Things (IoT) will dwarf what we are currently experiencing. From smart homes to connected vehicles to smart grids, each device may generate relatively little data, but the constant flow of information from millions and eventually billions of these devices must be aggregated, processed and analysed. And the thirst for more data never seems to be quenched: consider a train manufacturer that currently employs about 400 sensors per train and will likely increase that number to 4,000 over the next five years, with sensor readings captured every millisecond.

Further, merely ingesting and analysing this data isn’t sufficient: it must be done fast enough to keep up with today’s expectations for real-time responses, which makes the challenge that much more difficult. As consumers, we expect webpages to load within three or four seconds or we abandon the site. As business users, we now expect similar performance from our cloud-based ERP, CRM, CMS and HR systems, whether we are working from the office, at home or on the road.

These heightened performance expectations are forcing organisations to adapt. A recent Computer Business Review Magazine survey of 200 UK-based CIOs found that 98 percent of the CIOs believe a significant gap exists between the performance level that consumers expect and what IT can deliver. These results echo similar concerns among CIOs in the U.S.

IMC strategies are increasingly seen as the only way to cost-effectively meet the challenge of extreme speed and scale. The following strategies are all being implemented today:

a) In-memory data grids can be inserted between an existing application and its data layer to accelerate transaction processing without ripping and replacing the existing database. Keeping data in RAM and processing it in parallel across the RAM of multiple clustered servers can accelerate processing by 1,000 times or more compared with disk-based approaches (see the sketch following this list).

b) Operational databases have increasingly added in-memory options to speed up transaction processing and support analytical insight.

c) Platforms for event and stream processing are using in-memory technology to rapidly ingest, analyse and filter data on the fly – before the data is sent elsewhere for further analysis.

d) Many analytic databases and data warehouses rely on in-memory technology to quickly run complicated queries on large data sets. In addition, some companies are using in-memory technology on top of Hadoop (which is more batch-oriented than interactive), so users can analyse data from data lakes (made up of Hadoop clusters) more rapidly.
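
To make the data grid pattern in a) concrete, here is a minimal sketch using the Apache Ignite Java API, one of the platforms named above. The cache name and key/value types are hypothetical, and a single self-starting node stands in for what would normally be a multi-node cluster fronting an existing database.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class DataGridSketch {
    public static void main(String[] args) {
        // Start a node; in production this would join an existing cluster of servers.
        try (Ignite ignite = Ignition.start()) {
            // Create (or connect to) a cache whose entries are partitioned
            // across the RAM of all nodes in the cluster.
            IgniteCache<Integer, String> orders = ignite.getOrCreateCache("ordersCache");

            // Reads and writes are served from RAM, so the application's
            // hot path never waits on disk I/O.
            orders.put(1, "order #1: pending");
            System.out.println(orders.get(1));
        }
    }
}
```

Because entries live in the RAM of the cluster nodes, the hot path avoids disk entirely; keeping the underlying database in sync can then be layered on separately, as discussed below.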

The Future of IMC

Over the next decade, in-memory computing will see several transformative developments, including:

IMC as a system of record – With the increased affordability and capabilities of IMC technology, many businesses are beginning to use in-memory computing platforms as their authoritative data source for business-critical records, with legacy disk-based databases used primarily as a persistence layer. This trend will accelerate as more vendors provide robust support for ANSI SQL-99 and ACID transaction guarantees.
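
A minimal sketch of this pattern, assuming Apache Ignite as the IMC platform: the cache is configured for read-through and write-through operation so that the in-memory copy is authoritative while a legacy database acts as the persistence layer. LegacyDbStore, the cache name and the elided database calls are illustrative placeholders.

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class SystemOfRecordSketch {
    /** Hypothetical store bridging the in-memory grid to a legacy database. */
    public static class LegacyDbStore extends CacheStoreAdapter<Integer, String> {
        @Override public String load(Integer key) {
            // Read-through: fetch from the legacy database on a cache miss,
            // e.g. SELECT value FROM records WHERE id = key.
            return null;
        }
        @Override public void write(Cache.Entry<? extends Integer, ? extends String> e) {
            // Write-through: persist each update to the legacy database,
            // e.g. INSERT/UPDATE records ... VALUES (e.getKey(), e.getValue()).
        }
        @Override public void delete(Object key) {
            // e.g. DELETE FROM records WHERE id = key.
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("recordsCache");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(LegacyDbStore.class));
        cfg.setReadThrough(true);  // misses fall through to the database
        cfg.setWriteThrough(true); // updates are persisted behind the in-memory copy

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
            cache.put(42, "authoritative in-memory record");
        }
    }
}
```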

Robust support for in-memory, distributed SQL – About a decade ago, many in the data-processing community thought SQL databases would be replaced by NoSQL databases and product-specific query languages. It hasn’t happened. SQL is simply too popular: millions of developers and business users are familiar with it, and it offers powerful search and querying capabilities. In response to this industry reality, IMC vendors are increasingly adding robust support for SQL, including the ability to perform distributed processing of ANSI SQL-99-compatible queries across data stored either in memory or on disk.
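
As a sketch of what distributed SQL looks like in practice, assuming Apache Ignite, the following runs a query that is parsed once and executed in parallel on every node holding a partition of the cache, with the partial results merged on the caller. The cache name and sample data are hypothetical.

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class DistributedSqlSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Index Long -> String pairs so they are queryable via SQL.
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("cityCache");
            cfg.setIndexedTypes(Long.class, String.class);

            IgniteCache<Long, String> cities = ignite.getOrCreateCache(cfg);
            cities.put(1L, "London");
            cities.put(2L, "Paris");

            // Ignite exposes indexed values as a SQL table named after the
            // value type, with built-in _key and _val columns.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT _key, _val FROM String WHERE _val LIKE ?").setArgs("L%");

            for (List<?> row : cities.query(qry).getAll())
                System.out.println(row.get(0) + " -> " + row.get(1));
        }
    }
}
```

The same query text works regardless of where the entries physically reside, which is what makes the memory-or-disk distinction transparent to the application.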

Artificial intelligence – The growing demand for applications leveraging machine learning (ML) and deep learning (DL) at very large scale will increasingly push IMC vendors to support a wider array of artificial intelligence (AI) use cases. Machine learning and deep learning on large, sparse data sets require a data management system that can store terabytes of data and perform fast parallel computations, tasks for which IMC platforms are ideally suited. As vendors roll out support for these use cases, organisations will be able to deliver and scale exciting new applications more rapidly, such as those involving “fast learning.”
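
As a rough illustration of the parallel-computation half of that claim, assuming Apache Ignite’s compute grid, the sketch below broadcasts a job to every node and combines the partial results on the caller. The job itself is a placeholder: a real ML task would scan each node’s local partitions of a distributed dataset instead of returning a constant.

```java
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class ParallelComputeSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Placeholder job: a real ML workload would compute a partial
            // result (e.g. a partial sum or gradient) over local data here.
            IgniteCallable<Integer> partialJob = () -> 1;

            // Run the job once on every node: the computation moves to the
            // data rather than the data moving to the computation.
            Collection<Integer> partials = ignite.compute().broadcast(partialJob);

            // Aggregate the partial results on the caller.
            int total = partials.stream().mapToInt(Integer::intValue).sum();
            System.out.println("Combined result from " + partials.size() + " node(s): " + total);
        }
    }
}
```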

Non-volatile memory (NVM) as the preferred storage method – NVM is one of the most exciting IMC developments and will have an enormous impact on data storage over the next decade. Unlike DRAM (dynamic random access memory), which loses all data in the absence of a power source, NVM retains its data, which can eliminate the need for software-based fault tolerance in IMC platforms. Today’s NVM products are available as flash or flash-like devices and as memory modules that plug into the DIMM sockets of a computer’s motherboard. A decade from now, NVM will likely be the dominant data storage model, ushering in the age of 100 percent IMC infrastructures, with all the speed and scale benefits this implies.

Hybrid storage models for very large datasets – Even as IMC-centric infrastructures become possible over the next decade, it will take longer for them to become affordable for extremely large datasets involving petabytes of data. In the meantime, vendors are innovating around hybrid storage models that provide a universal interface to all media (RAM, flash, disk and eventually NVM) without users needing to know where the data resides. This strategy will enable organisations to optimise easily for cost and performance without changing data-access mechanisms.


For companies needing to cost-effectively increase the speed and scalability of their applications, the rapid evolution of in-memory computing is the most exciting technology trend of 2017. Rapid evolution, however, means it is vital to track the latest developments and understand how organisations are benefiting from them. For more information, read “In-Memory Computing: Now and Tomorrow,” the newest white paper from GridGain Systems.