The Value of a Millisecond: Finding the Optimal Speed of a Trading Infrastructure
Regulatory changes have attracted fully electronic upstarts to all markets. Each has brought innovation and differentiation to the technological race for a competitive edge. Participants in these venues have responded with equal enthusiasm, building better mousetraps to exploit the opportunities presented. The result is an accelerating reduction of the human factor in trade execution.
The resulting theme is obvious: time to market – literally.
The benchmarks for speed have driven latency down by orders of magnitude: open outcry was measured in seconds, whereas electronic venues now boast matching capabilities measured in microseconds. The stakes are high. Traditional stock exchanges are adopting the business models of the newer execution venues, which depend on the ability to quickly receive, aggregate, manage and match orders across a range of securities. As a result, more than half of all exchange revenues are exposed to latency risk today, up from 22% in 2003.
The US stock markets bear the greatest latency pain points. Trading revenues are directly tied to technological innovation, and the search for speed has made the science of managing the trading infrastructure end-to-end paramount. Middleware, the software layer that ties these enterprises together, has long been taken for granted; now, messaging is synonymous with speed, and messaging protocols are being upgraded to cope with the ever-increasing rate and size of market data messages.
For US equity electronic trading brokerages, handling the speed of the market is critical because latency impedes a broker’s ability to provide best execution. In 2008, 16% of all US institutional equity commissions – $2 billion in revenue – are exposed to latency risk. As in the Indy 500, the value of time for a trading desk is decidedly non-linear. TABB Group estimates that if a broker’s electronic trading platform is 5 milliseconds behind the competition, it could lose at least 1% of its flow; that’s $4 million in revenues per millisecond. Up to 10 milliseconds of latency could cut revenues by 10%. From there it gets worse: a broker that is 100 milliseconds slower than the fastest may as well shut down its FIX engine and become a floor broker.
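The arithmetic behind these figures can be sketched in a few lines. The code below encodes only the numbers quoted above (1% of flow lost at 5 milliseconds, 10% at 10 milliseconds, against $2 billion of exposed commissions); the piecewise-linear shape between those quoted points is an illustrative assumption, not a TABB Group model.

```python
# Sketch of the latency-cost estimates quoted in the text.
# Only the anchor points (1% at 5 ms, 10% at 10 ms) come from the
# note; the linear interpolation between them is an assumption.

def revenue_loss_pct(latency_ms):
    """Estimated % of flow lost, as a function of milliseconds
    behind the fastest competitor."""
    if latency_ms <= 0:
        return 0.0
    if latency_ms <= 5:
        return latency_ms / 5            # reaches 1% at 5 ms
    if latency_ms <= 10:
        return 1.0 + 9.0 * (latency_ms - 5) / 5  # 10% at 10 ms
    return 10.0  # "from there it gets worse" -- floor, not a cap

AT_RISK_REVENUE = 2_000_000_000  # $2B of latency-exposed commissions

for ms in (5, 10):
    lost = AT_RISK_REVENUE * revenue_loss_pct(ms) / 100
    print(f"{ms} ms behind -> ~${lost / 1e6:.0f}M at risk")
```

Note how the quoted "$4 million per millisecond" falls out: 1% of $2 billion is $20 million, spread over the first 5 milliseconds.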
Message-oriented middleware has evolved with these demands, but dilemmas remain for all concerned. The core systems were built long ago, before speed was a critical factor. In-house applications were developed on top of this core as the intellectual property that separated winners from losers. But while participants’ analytical prowess continued to progress, the importance of speed has made managing trading technology holistically all the more complex. Economics meets aerodynamics.
Overall IT spending on messaging infrastructure is expected to remain flat at approximately $1.8 billion through 2010. Within that total, spending on maintaining legacy investments will shrink as resources shift toward latency management. As a result, low-latency expenditures will almost double, from under $100 million currently to about $170 million by 2010.
When seeking to reduce latency, the primary areas of focus are the networks that carry the messages, the applications that consume the messages, and the hardware that processes the messages. However, firms must be careful not to focus on just one area but rather to approach the system in its totality. Otherwise, speeding up each of the parts will not necessarily speed up the sum; in fact, the problems may worsen. What used to be a simple throughput problem has now been compounded by issues such as jitter and persistence. These equivalents of turbulence demand both depth and breadth of features from the messaging middleware. While proprietary applications remain sacred, these new messaging tools are exposing in-house developments as a major source of the latency problem.
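The distinction between throughput and jitter can be made concrete with a small sketch. Using hypothetical per-message latencies, the code below shows how two feeds with identical average latency can differ sharply at the tail, which is why mean throughput alone is a misleading health metric for messaging middleware.

```python
# Illustrative sketch (hypothetical numbers): average latency can
# hide jitter. Two feeds with the same mean behave very differently
# at the 99th percentile.
import statistics

def latency_profile(samples_us):
    """Summarize a list of per-message latencies in microseconds."""
    s = sorted(samples_us)
    pct = lambda p: s[min(len(s) - 1, int(p / 100 * len(s)))]
    return {
        "mean": statistics.mean(s),
        "p50": pct(50),
        "p99": pct(99),
        "jitter": statistics.pstdev(s),  # std deviation as a jitter proxy
    }

steady = [100] * 100            # constant 100 us per message
bursty = [80] * 99 + [2080]     # same mean, one large spike

print(latency_profile(steady))
print(latency_profile(bursty))
```

Both feeds average 100 microseconds, but the bursty one carries a 2-millisecond spike at the 99th percentile; for a trading application, it is that tail, not the mean, that determines whether an order arrives in time.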
All of these innovations are setting new achievement benchmarks for dependent components throughout the system. As physicists broaden their interest in finance, it will be no surprise to see software and hardware advancements that present today’s measure of fast markets as a snail’s pace.
About the TABB Group research note “The Value of a Millisecond: Finding the Optimal Speed of a Trading Infrastructure”
This research note investigates how various functions within the trading world, from execution venues to brokers, are now dependent on the speed of the trading infrastructure. We look at why, from an enterprise infrastructure perspective, the core challenge resides within messaging and its related touch points – which we generically refer to as middleware.
We examine how middleware connects applications and passes data between them, thereby playing the role of infrastructure plumbing. We focus on recent advancements within middleware, including the ability to tune the interaction of the applications to find an optimal balance between the three major causes of latency: throughput, persistence and jitter. We discuss each of these causes and the negative impact each has on business functions.