“Before too long we won’t be using the term ‘in-memory computing’, in-memory computing will just be called computing”: Abe Kleinfeld interview

By Alex Hammond | 9 March 2017

In an exclusive interview with bobsguide, the president and CEO of GridGain Systems explains why the speed and scale capabilities of in-memory computing will compel financial services firms to switch their data solutions.

Tell us about your career in fintech.

I’ve been in the industry for 38 years. I started out as a software engineer and then evolved into more customer-facing roles, eventually heading sales and marketing at companies, and ultimately running companies as president and CEO. In the past two-and-a-half decades, I’ve been fortunate to complete two IPOs and a couple of mergers and acquisitions.

During the course of my career, I’ve worked at both hardware and software companies, so I think I’ve developed a pretty good perspective on the tech industry.

After selling my last company in 2013, I thought I was retiring. But instead I discovered GridGain Systems. In talking to the CTO and founder, I realized that GridGain represented what I really love about computer science – GridGain had developed an in-memory computing platform that delivered blazing speed and unlimited scale, and would enable incredible capabilities that you previously only read about in science fiction novels.

So I was very happy to join the company and we’ve been doing very well since. Sales last year tripled and we have averaged triple-digit growth for the past four years. We’re doing particularly well in financial services, fintech and with technology companies that integrate our solution into their products for a wide variety of use cases.

What are the benefits of in-memory computing over current data systems?

There are two key benefits that in-memory computing brings: tremendous speed and virtually unlimited scale.

The speed comes from moving your data from the computer’s disk into RAM (memory), where that data can be accessed between 5,000 and a million times faster. The scale comes from distributing the data and computing workload across a virtually unlimited number of commodity servers, with each server adding additional RAM and processing power.
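To make that concrete, here is a minimal sketch using Apache Ignite, an open-source in-memory computing platform, purely as an illustration; the interview does not name any specific API, so the class names and cache below are assumptions, not GridGain’s product code. Each node started this way joins the same cluster and contributes its RAM and CPU, and the cache’s entries are partitioned across the nodes’ memory.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class InMemoryQuickStart {
    public static void main(String[] args) {
        // Start a node. Every additional node started with the same configuration
        // joins the cluster and adds more RAM and processing power to the grid.
        try (Ignite ignite = Ignition.start()) {
            // A distributed key-value cache: entries live in memory,
            // partitioned across all nodes, rather than being read from disk.
            IgniteCache<Long, String> accounts = ignite.getOrCreateCache("accounts");

            accounts.put(1001L, "Alice, balance 2500"); // write lands in RAM on the owning node
            String record = accounts.get(1001L);        // read is served from RAM, no disk seek
            System.out.println(record);
        }
    }
}
```

Adding a second or third server simply means starting the same process on another machine; the cache rebalances itself, which is where the “virtually unlimited scale” in the answer above comes from.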

When you think about where computing is headed in the future, everything requires speed and scale. Big data analytics, fast transactions in financial services, web-scale SaaS and cloud computing where you have shared infrastructure serving millions or billions of users or things, AI and cognitive computing, real-time analysis of risk or trading positions, real-time fraud detection. These are just a few examples of where in-memory computing is necessary.

Do you think all of that growth in demand has plateaued, or will the surge in growth continue to be exponential?

It’s going to continue to grow at least at the current pace, but more likely it’s going to escalate.

To put it in very simple terms, this is because digital transformation – which all banks are now embracing – is about instrumenting everything: customers, products, partners, and everything inside the company itself. And those instruments are all consuming and producing data. Lots of it. The companies that figure out how to process that data in real time and gain insights from it will become more competitive than those that don’t.

So it is going to be a race to process massive amounts of data, from as many sources as possible, to gain as broad a perspective as possible and make instant decisions based on that data to win business and serve a customer base.

And we as a society are at the tip of that iceberg. It is just a matter of time before, as consumers, we are all walking Class C networks where everything we carry and wear will be instrumented and provide feedback to someone. Being able to make decisions based on that information is going to change everything. When you walk into a shopping mall, it will reconfigure itself based on who you are and what your preferences are.

So we’re at the very early stages of what massive increases in data processing speed and scale will be able to deliver.

Are speed and scale the only criteria that financial services should consider when they are looking to upgrade their data systems?

Speed and scale are like food and water: they are the fuel of the modern data ecosystem.

Once you have an in-memory data platform, it enables a broad range of capabilities that are becoming critically important to financial services. Of foremost importance is the ability to innovate faster and at a lower cost.

If you look at virtually any major corporation today, be it banking or financial services or any other industry, it will most likely have an operational system and a data warehouse. Companies are highly protective of their operational systems because that is where business transactions are happening and they don’t want to do anything that could slow or disrupt these systems. And so to avoid that, they take snapshots of the operational system periodically and save that snapshot to a data warehouse where they can run analytics to help make business decisions. But the data in their data warehouse is typically at least a day old, and it could be a week or a month old. So these companies are making decisions on how they are going to run their businesses based on old information in their data warehouse.

The concept of a separate data warehouse was previously necessary because traditional computing architectures weren’t fast enough to run analytics on the operational database – the operational system didn’t have the speed and scale to run transactions and analytics on the same data. And this separation of operations and analytics comes at a huge cost: it requires separate organizations and technology stacks.

With an in-memory data platform, the system is so fast and scalable that companies can easily keep all their data in one unified data store. From a single data platform, financial services firms can have their operational systems and analytics systems interacting on the same data, making decisions based on current data, with those decisions in turn instantly influencing operations. So companies can have a faster, more flexible system, and compete more effectively with an IT architecture that is substantially more cost effective than one with separate operational systems and data warehouses.
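As a rough sketch of that unified pattern – again using Apache Ignite only as an example, with a hypothetical `Trade` record and field names invented for illustration – the same in-memory cache that takes operational writes can also be queried with SQL, so the analytical view runs over live data rather than a day-old snapshot.

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class UnifiedStoreSketch {
    /** Hypothetical operational record; annotated fields become SQL columns. */
    public static class Trade {
        @QuerySqlField(index = true)
        private String desk;
        @QuerySqlField
        private double notional;

        public Trade(String desk, double notional) {
            this.desk = desk;
            this.notional = notional;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Trade> cfg = new CacheConfiguration<>("trades");
            cfg.setIndexedTypes(Long.class, Trade.class); // expose Trade to the SQL engine

            IgniteCache<Long, Trade> trades = ignite.getOrCreateCache(cfg);

            // Operational side: transactions write straight into the in-memory store.
            trades.put(1L, new Trade("rates", 5_000_000));
            trades.put(2L, new Trade("rates", 2_000_000));
            trades.put(3L, new Trade("fx", 750_000));

            // Analytical side: SQL runs over the same live data, no snapshot, no warehouse.
            List<List<?>> exposure = trades.query(new SqlFieldsQuery(
                "SELECT desk, SUM(notional) FROM Trade GROUP BY desk")).getAll();

            exposure.forEach(row -> System.out.println(row.get(0) + " -> " + row.get(1)));
        }
    }
}
```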

If you look at most IT organizations today, they have two key goals. The first goal is to drive innovation, and the second is to reduce costs every year. In-memory computing is going to have a significant positive impact on both goals. It’s not only going to drive speed and scale improvements, it’s going to transform what companies are capable of and how they build IT in the future. And when I say the future I’m not talking about 20 years from now, I’m talking about two to three years from now. The move towards in-memory computing is happening that quickly.

We have one customer, Sberbank, the largest bank in Russia, that has made a huge commitment to in-memory computing. Its goal is not to touch disk, so it is rolling out 1,000 servers across 22 countries and 11 time zones. Everything is going to be done in-memory, including all of its banking applications, with virtually zero latency. It’s going to enable full hybrid transactional/analytical processing, i.e., a move from analytics and transactions being separated to analytics and transactions being unified and conducted in-memory. That move will give Sberbank instant situational awareness. It will enable product innovation like you’ve never seen. And the bank is well into that process. The project is expected to be completed by the end of 2018, when all of its banking systems for 130 million customers across 22 countries will be in-memory.

How long will it take for other banks globally to follow Sberbank’s example in this area?

We work with a lot of financial institutions, particularly banks, and they are all moving in this direction. Some will move faster than others. I would say Sberbank is at the forefront of that innovation, but every other bank is well aware of the technology and is moving very quickly. I think you are going to see this transformation flow throughout the industry within the next two to five years because banking is such a competitive industry.

Is the shift to mobile and an omni-channel approach to banking going to increase the speed at which banks adopt in-memory computing?

In-memory computing makes mobile banking possible; it can’t be done properly without it.

Before mobile devices appeared, the number of customers accessing systems was dramatically lower and the number of transactions banks had to support was also far lower. With mobile devices, customers can bank at any time of day, from any location. Consumers access their banking systems more often as a result, driving a 1,000-fold increase in the number of transactions per second that banks have to support. There is no way to do that with traditional computing; it’s too slow and doesn’t scale. It’s futile to even try, because mobile is just the tip of the iceberg.

Bank customers will have many more ways of accessing their banking information in the future than just their mobile device. Everything is going to be connected to their account. There will be a lot of transactions happening beyond what we currently define as “mobile.” So I think you’re going to see the number of transactions per second that banks have to support keep increasing, and of course regulation and fraud are going to play a huge role as well.

When there are more transactions happening more quickly, it becomes impossible to have humans interacting with and checking the system, so financial services firms will need better machine learning and artificial intelligence (AI). Banks will need really fast fraud detection, and they will need to perform massive compute jobs in real time in order to meet regulatory requirements. There is just so much happening behind the scenes for every transaction that doing it without distributed, in-memory computing is impossible.

It’s safe to say that before too long we won’t even be using the term “in-memory computing”, in-memory computing will just be called “computing”.