Is improved market data quality assessment the key to increased trading profitability?

By Shimrit Or | 25 May 2016

Increasing trade profitability and ensuring compliance with ever-evolving regulatory commitments are two of the primary goals currently driving equity trading businesses globally. As trading becomes increasingly instantaneous and human involvement in the overall process diminishes, the risks associated with making trading decisions based on flawed market data, or with passing that data on to clients, can put both of these objectives in jeopardy.

Some firms are lucky: they never experience a major incident in which, for instance, a trading system unknowingly consumes incorrect, delayed or incomplete market data, resulting in trading losses that run into the millions in a matter of minutes. Others are not so fortunate. Even if a firm escapes an incident on this scale, the periodic consumption of inaccurate data can still lead to aggregate losses of hundreds of thousands of dollars over just a few months. And if a firm is passing flawed data on to clients, this can cause retention issues, with dissatisfied customers unlikely to place additional business with the firm.

From a regulatory perspective, whilst the exact requirements may differ from one jurisdiction to another, improving system controls remains high on the global agenda. Pre-trade risk controls increasingly focus on the need to prevent systems from generating and sending erroneous orders. Furthermore, limiting an incident’s impact on clients is key, and having the correct controls in place is central to this. As market data continuously drives the trading decisions made by automated systems, introducing measures that demonstrate the data is being effectively managed is becoming increasingly important for many firms.

Whilst basic monitoring of market data is commonplace, many firms take a network-focused approach, looking simply for activity and perhaps sequence gaps. In doing so, they concentrate on the connection’s liveness rather than examining the actual data content of the feed.

To put this into context, it can be useful to think of market data in terms of a highway analogy. A highway is a road on which cars and trucks, carrying passengers and parcels, travel. Most firms currently assess only the quality of the road (the market data channels and feeds), looking at, for example, the points below (a simple sketch of such channel-level checks follows the list):

• Whether the traffic is flowing freely - or, in network terms, are the market data feeds suffering microbursts or sequence gaps?
• How the flow of traffic compares across different lanes - are the B feeds outperforming the A feeds?
• Whether the highway’s exits are congested - can the applications that should be consuming market data keep up with the volumes being received?
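To make these channel-level checks concrete, here is a minimal sketch of how a firm might flag sequence gaps and microbursts on a single feed channel. It assumes the feed handler exposes each packet’s sequence number, arrival time and size in bytes; the class name, thresholds and alert wording are illustrative assumptions rather than any particular vendor’s API.

```python
from collections import deque

class FeedChannelMonitor:
    """Illustrative channel-level checks: sequence gaps and microbursts.

    Assumes the caller supplies each packet's sequence number, arrival
    time (in seconds) and size in bytes; thresholds are placeholders.
    """

    def __init__(self, burst_window_s=0.001, burst_limit_bytes=150_000):
        self.expected_seq = None
        self.burst_window_s = burst_window_s      # e.g. a 1 ms sliding window
        self.burst_limit_bytes = burst_limit_bytes
        self.window = deque()                     # (arrival_time, size) pairs
        self.window_bytes = 0

    def on_packet(self, seq_no, arrival_time, size_bytes):
        alerts = []

        # Sequence-gap check: did we skip (or repeat) packets on this channel?
        if self.expected_seq is not None and seq_no != self.expected_seq:
            alerts.append(f"sequence gap: expected {self.expected_seq}, got {seq_no}")
        self.expected_seq = seq_no + 1

        # Microburst check: bytes received inside a short sliding window.
        self.window.append((arrival_time, size_bytes))
        self.window_bytes += size_bytes
        while self.window and arrival_time - self.window[0][0] > self.burst_window_s:
            _, old_size = self.window.popleft()
            self.window_bytes -= old_size
        if self.window_bytes > self.burst_limit_bytes:
            alerts.append(f"microburst: {self.window_bytes} bytes in "
                          f"{self.burst_window_s * 1000:.1f} ms")
        return alerts
```

Running one such monitor per channel also makes the A/B comparison straightforward: record the arrival time of each sequence number on both feeds and report whichever side is consistently ahead.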

This style of monitoring often fails to deliver actionable business insight. It’s the type of quality assessment that’s useful for technical teams in identifying problems at an infrastructure level; for example, a firm detecting microbursts may need to provision more bandwidth. However, it is difficult to accurately correlate how these microbursts are impacting the actual trading - whether they are causing delayed or missed ticks and, as a consequence, mis-pricing. It is therefore challenging for a firm to decide what the appropriate business response is to control any financial impact.

What’s really needed is this style of analysis combined with the ability to identify problems with the data itself - the feed’s content. After all, this is what algorithms and trading systems use to determine whether or not to act. For instance, a feed could appear absolutely fine overall, but it’s only by looking at the actual data that you would identify that a specific instrument isn’t ticking, or that it’s ticking as you’d expect on the bid side but not on the offer side. This could indicate an imbalance in the market - a problem happening right now or one that’s just about to take place. The same applies to the discovery of a symbol experiencing abnormal price movements.

In a consolidated feed, an upstream technical glitch may cause a whole market to go missing. The feed looks alive, but there are potentially serious loss-making consequences if such a problem is not identified and flagged immediately.
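As an illustration of this content-level assessment - checking the passengers rather than the road - the sketch below tracks, per symbol, when the bid and offer sides last ticked and, per source market, when a consolidated feed last delivered anything at all. The class name, thresholds and tick interface are assumptions made for the example, not a description of any specific product.

```python
import time
from collections import defaultdict

class ContentMonitor:
    """Illustrative content-level checks on decoded ticks.

    Tracks when each symbol's bid and offer sides last updated, and when
    each source market on a consolidated feed last produced any tick.
    Thresholds are arbitrary placeholders, not recommended values.
    """

    def __init__(self, stale_after_s=5.0, market_silent_after_s=2.0):
        self.stale_after_s = stale_after_s
        self.market_silent_after_s = market_silent_after_s
        self.last_side_update = defaultdict(dict)   # symbol -> {"bid": t, "offer": t}
        self.last_market_update = {}                # market code -> last tick time

    def on_tick(self, symbol, side, market, now=None):
        now = now or time.time()
        self.last_side_update[symbol][side] = now
        self.last_market_update[market] = now

    def check(self, now=None):
        now = now or time.time()
        alerts = []
        for symbol, sides in self.last_side_update.items():
            bid, offer = sides.get("bid"), sides.get("offer")
            # Instrument has stopped ticking entirely.
            if max(filter(None, [bid, offer]), default=0) < now - self.stale_after_s:
                alerts.append(f"{symbol}: no ticks for over {self.stale_after_s}s")
            # One side ticking, the other not - possible imbalance or feed problem.
            elif bid and offer and abs(bid - offer) > self.stale_after_s:
                alerts.append(f"{symbol}: bid and offer sides updating unevenly")
        for market, last in self.last_market_update.items():
            # A whole market going quiet on a consolidated feed.
            if last < now - self.market_silent_after_s:
                alerts.append(f"market {market}: silent for over {self.market_silent_after_s}s")
        return alerts
```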

It’s only by looking at the actual decoded content - the passengers and parcels inside the cars and trucks on our metaphorical highway - that a firm can gain these actionable insights. Whether a firm is using the market data to make its own trading decisions or forwarding the data on to clients, being alerted to quality issues can deliver significant business value. It can enable the firm to trade more profitably and confidently control the quality of the data it publishes.

Take, for example, a firm with high exposure on a given market, or in an individual instrument, that is alerted to the emergence of suspect data. With this real-time insight the firm can make fast and effective decisions. It could, for instance, inform all of the applications subscribing to the impacted feed, so either the business or the application itself can choose to stop trading or switch to an alternative feed, significantly limiting the financial loss that might otherwise have been incurred. Alternatively, should a degradation in the quality of data being passed on to clients be identified, the firm can proactively notify all affected clients, helping them limit the impact on their business. Being able to react to and manage an issue before a client approaches the firm with the problem can make a big difference to the client’s experience and, ultimately, retention levels.
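That kind of response could be wired up along the following lines: a quality alert on a feed is fanned out to every application subscribed to it, and each application (or the business) decides whether to pause trading or fail over. The subscriber callback and the “pause and switch” reaction shown here are purely illustrative assumptions, not a specific product workflow.

```python
from collections import defaultdict

class FeedQualityAlerter:
    """Illustrative fan-out of data-quality alerts to subscribing applications."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # feed name -> list of callbacks

    def subscribe(self, feed, callback):
        self.subscribers[feed].append(callback)

    def raise_alert(self, feed, detail):
        # Tell every application consuming this feed so it (or the business)
        # can decide whether to stop trading or switch to an alternative feed.
        for callback in self.subscribers[feed]:
            callback(feed, detail)


def trading_app_handler(feed, detail):
    # A hypothetical application reaction: pause strategies on the affected
    # feed and fail over to a backup source.
    print(f"{feed}: {detail} - pausing strategies and switching to backup feed")


alerter = FeedQualityAlerter()
alerter.subscribe("consolidated-equities", trading_app_handler)
alerter.raise_alert("consolidated-equities", "symbol not ticking for 5s")
```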

One of the challenges firms have previously faced in conducting this style of content assessment is that, to identify abnormal behaviour, it is first necessary to determine what’s normal. When a firm is consuming potentially thousands of different instruments, manually configuring the data’s “normal characteristics” and maintaining this information on an ongoing basis involves significant resource effort. However, recent developments in machine-learning technology have automated this activity, enabling advanced market data quality tools to automatically determine, adapt and rebase what normal looks like for every instrument being assessed. In doing so, such tools profile (a simplified sketch follows the list):

• How regularly the particular instrument should be ticking
• What constitutes normal price movements
• How measures vary at different times during the trading day and across days
• How “normal” behaviour changes in respect of specific market calendar events
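A very simplified stand-in for this kind of automated profiling is sketched below: it maintains exponentially weighted baselines of inter-tick intervals and price returns per instrument, bucketed by hour of the trading day, and flags ticks that deviate sharply from the learned “normal”. The EWMA approach, warm-up rule, thresholds and hourly bucketing are assumptions for illustration; a production tool would also handle calendar events, rebasing and far richer statistics.

```python
import math
from collections import defaultdict

class InstrumentBaseline:
    """Illustrative rolling baseline of "normal" per-instrument behaviour."""

    def __init__(self, alpha=0.05, z_threshold=4.0, warmup_ticks=30):
        self.alpha = alpha                    # smoothing factor: how quickly "normal" rebases
        self.z_threshold = z_threshold
        self.warmup_ticks = warmup_ticks      # don't score until the baseline has seen enough
        # (symbol, hour-of-day) -> EWMA stats for inter-tick gaps and price returns
        new_stats = lambda: {"mean": None, "var": 0.0, "n": 0}
        self.gap_stats = defaultdict(new_stats)
        self.ret_stats = defaultdict(new_stats)
        self.last_tick = {}                   # symbol -> (time, price)

    def _update(self, stats, x):
        """EWMA update; returns the z-score of x against the previous baseline."""
        if stats["mean"] is None:
            stats["mean"], stats["var"], stats["n"] = x, 0.0, 1
            return 0.0
        diff = x - stats["mean"]
        std = math.sqrt(stats["var"])
        # Only score once the baseline is warmed up and has non-zero spread.
        z = diff / std if stats["n"] >= self.warmup_ticks and std > 0 else 0.0
        stats["mean"] += self.alpha * diff
        stats["var"] = (1 - self.alpha) * (stats["var"] + self.alpha * diff * diff)
        stats["n"] += 1
        return z

    def on_tick(self, symbol, price, tick_time, hour):
        alerts = []
        if symbol in self.last_tick:
            prev_time, prev_price = self.last_tick[symbol]
            gap = tick_time - prev_time
            ret = (price - prev_price) / prev_price
            if abs(self._update(self.gap_stats[(symbol, hour)], gap)) > self.z_threshold:
                alerts.append(f"{symbol}: unusual tick interval ({gap:.3f}s)")
            if abs(self._update(self.ret_stats[(symbol, hour)], ret)) > self.z_threshold:
                alerts.append(f"{symbol}: abnormal price movement ({ret:+.4%})")
        self.last_tick[symbol] = (tick_time, price)
        return alerts
```

Keying the statistics by hour of day is one crude way of capturing how “normal” varies across the trading session; separate baselines per calendar regime (expiry days, auctions, holidays) would extend the same idea.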

In summary, it’s the combination of looking at both the channel and the data’s deep content that provides firms with the actionable insights necessary to limit the impact of data quality issues on trading profitability. Whilst channel-level insight enables the firm to make improvements that increase performance over the longer term, the firm needs to be notified of content issues immediately. Learning that you were trading on stale data, or passing stale data on to a client, 10 minutes after the fact isn’t good enough; delays risk the rapid accumulation of significant losses. For the firms that get this right, big gains can be made. Trading losses can be reduced and, by enhancing system controls, so can the frequency and severity of the issues that risk regulatory fines - all of which can have a substantial impact on a firm’s bottom line.

By Shimrit Or, Senior Professional Services Consultant, Velocimetrics.
