To err is human, but to really foul things up, you need a computer. So states the remarkably insightful Murphy’s Law. Nowhere does it ring truer than in our financial workplace. After all, it is the financial sector that drove the rapid progress of the computing industry — which is why the first computing giant had the word “business” in its name.
The financial industry keeps up with the developments in the computer industry for one simple reason. Stronger computers and smarter programs mean more money — a concept we readily grasp. As we use the latest and greatest in computer technology and pour money into it, we fuel further developments in the computing field. In other words, not only did we start the fire, we actively fan it as well. But it is not a bad fire; the positive feedback loop that we helped set up has served both industries well.
This inter-dependency, healthy as it is, gives us nightmarish visions of perfect storms and dire consequences. Computers being the perfect tools for completely fouling things up, our troubling nightmares are more justified than we care to admit.
Models vs. Systems
Paraphrasing a deadly argument that some gun aficionados make, I will defend our addiction to information technology. Computers don’t foul things up; people do.
Mind you, I am not implying that we always mess it up when we deploy computers. But at times, we try to massage our existing processes into their computerised counterparts, creating multiple points of failure. The right approach, instead, is often to redesign the processes so that they can take advantage of the technology. But it is easier said than done. To see why, we have to look beyond systems and processes and focus on the human factors.
In a financial institution, we are in the business of making money. We fine-tune our reward structure in such a way that our core business (of making money, that is) runs as smoothly as possible. Smooth operation relies on strict adherence to processes and the underlying policies they implement. In this rigid structure, there is little room for visionary innovation.
This structural lack of incentive to innovate results in staff hurrying through a new system rollout or a process re-engineering. They have neither the luxury of time nor the freedom to let the dreaded “business-as-usual” slip in order to do a thorough job of such “non-essential” things.
Besides, there is seldom any spare manpower to deploy in studying and improving processes so that they can better exploit technology. The people who do it need multi-faceted capabilities (business and computing, for instance). Being costly, they are better deployed in the core business of making more money.
Think about it: when was the last time you (or someone you know) got hired to revamp a system and its associated processes? The closest you get is when someone is hired to duplicate a system that is already known to work better elsewhere.
The lack of incentive results in a dearth of thought and care invested in the optimal use of technology. Suboptimal systems (which do one thing well at the cost of everything else) abound in our workplace. In time, we will reach a point where we have to bite the bullet and redesign these systems. When redesigning a system, we have to think about all the processes involved. And we have to think about the system while designing or redesigning processes. This cyclic dependence is the theme of this article.
Systems do not figure among a quant’s immediate concerns. What concerns us more is our strongest value-add, namely mathematical modelling. In order to come up with an optimal deployment strategy for models, however, we need to pay attention to operational issues like trade workflow.
I was talking to one of our top traders the other day, and he mentioned that a quant, no matter how smart, is useless unless his work can be deployed effectively and in a timely manner. A quant typically delivers his work as a C++ program. In a rapid deployment scenario, his program will have to plug directly into a system that will manage trade booking, risk measurements, operations and settlement. The need for rapid deployment makes it essential for the quants to understand the trade lifecycle and business operations.
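To make that concrete, here is a minimal sketch of what such a plug-in boundary might look like. All the names here (MarketSnapshot, Pricer, ForwardPricer) are hypothetical, and a real system’s contract would be far richer; the essential shape is that the system hands the model market data, and the model hands back a price and its sensitivities.

```cpp
// A hypothetical plug-in boundary between a quant pricer and the
// trading system. MarketSnapshot, Pricer and ForwardPricer are
// illustrative names, not any real system's API.
#include <cmath>
#include <iostream>
#include <map>
#include <string>
#include <utility>

// Market data the system hands to the model at pricing time.
struct MarketSnapshot {
    std::map<std::string, double> spots;  // spot prices by underlying
    double discountRate = 0.0;            // flat rate, for simplicity
};

// The contract a quant's pricer must satisfy to plug into trade
// booking and risk: a price and a first-order sensitivity.
class Pricer {
public:
    virtual ~Pricer() = default;
    virtual double price(const MarketSnapshot& mkt) const = 0;
    virtual double delta(const MarketSnapshot& mkt) const = 0;
};

// Deliberately trivial pricer (an equity forward) to show the shape.
class ForwardPricer : public Pricer {
public:
    ForwardPricer(std::string und, double strike, double expiry)
        : und_(std::move(und)), strike_(strike), expiry_(expiry) {}
    double price(const MarketSnapshot& m) const override {
        double df = std::exp(-m.discountRate * expiry_);
        return m.spots.at(und_) - strike_ * df;   // S - K * df
    }
    double delta(const MarketSnapshot&) const override { return 1.0; }
private:
    std::string und_;
    double strike_, expiry_;
};

int main() {
    MarketSnapshot mkt{{{"ACME", 100.0}}, 0.05};
    ForwardPricer fwd("ACME", 95.0, 1.0);
    std::cout << "PV: " << fwd.price(mkt) << "\n";  // 100 - 95 * e^{-0.05}
}
```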
Life of a Trade
Once a quant figures out how to price a new product, his work is basically done. After coaxing that stochastic integral into a pricing formula (failing which, a Crank-Nicolson scheme or a Monte Carlo simulation), the quant writes up a program and moves on to the next challenge.
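The Monte Carlo fallback, at its core, is small enough to sketch. Here is a toy pricer for a European call, assuming Black-Scholes dynamics; the parameters are illustrative, and a production version would add variance reduction and error estimates.

```cpp
// A toy Monte Carlo pricer for a European call, assuming
// Black-Scholes dynamics. Illustrative only: a production pricer
// would add variance reduction, error estimates and path reuse.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>

double mcCallPrice(double S0, double K, double r, double sigma,
                   double T, int nPaths) {
    std::mt19937_64 rng(42);                        // fixed seed: reproducible
    std::normal_distribution<double> gauss(0.0, 1.0);
    double drift = (r - 0.5 * sigma * sigma) * T;   // risk-neutral drift
    double volT  = sigma * std::sqrt(T);
    double sum = 0.0;
    for (int i = 0; i < nPaths; ++i) {
        double ST = S0 * std::exp(drift + volT * gauss(rng));  // terminal spot
        sum += std::max(ST - K, 0.0);               // call payoff
    }
    return std::exp(-r * T) * sum / nPaths;         // discounted average
}

int main() {
    // S0=100, K=100, r=5%, vol=20%, T=1y: should land near the
    // Black-Scholes value of about 10.45.
    std::cout << mcCallPrice(100, 100, 0.05, 0.20, 1.0, 1000000) << "\n";
}
```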
It is when the trading desk picks up the pricing spreadsheet and books the first trade into the system that the fun begins. Then the trade takes on a life of its own, sneaking through various departments and systems, showing different strokes to different folks. This adventurous biography of the trade is depicted in Figure 1 in its simplified form.
At the inception stage, a trade is conceptualized by the Front Office folks (sales, structuring, trading desk – shown in yellow ovals in the figure). They study the market need and potential, and assess the trade viability. Once they see and grab a market opportunity, a trade is born.
Even with the best of quant models, a trade cannot be priced without market data: prices, volatilities, rates, correlations and so on. The validity of the market data is ensured by Product Control or Market Risk people. The data management group also needs to work closely with Information Technology (IT) to ensure live data feeds.
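A sliver of that validation can be sketched in code. The field names below are hypothetical; the point is simply that quotes get sanity-checked for nonsense and staleness before any pricer sees them.

```cpp
// A sketch of market data validation with hypothetical field names:
// reject quotes that are numerically nonsensical or stale before
// they reach any pricer.
#include <cmath>
#include <iostream>
#include <string>

struct Quote {
    std::string id;     // e.g. "EURUSD.vol.1Y"
    double value;
    long ageSeconds;    // time since the feed last updated it
};

bool isUsable(const Quote& q, long maxAgeSeconds = 300) {
    if (std::isnan(q.value)) return false;          // garbage in the feed
    if (q.id.find(".vol.") != std::string::npos && q.value <= 0.0)
        return false;                               // vols must be positive
    return q.ageSeconds <= maxAgeSeconds;           // feed must be live
}

int main() {
    Quote vol{"EURUSD.vol.1Y", 0.085, 12};
    Quote stale{"EURUSD.spot", 1.09, 3600};
    std::cout << isUsable(vol) << " " << isUsable(stale) << "\n";  // 1 0
}
```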
The trade first goes for a counterparty credit control (the pink bubbles). The credit controllers ask questions like: if we go ahead with the deal, how much will the counterparty end up owing us? Does the counterparty have enough credit left to engage in this deal? Since the credit exposure changes during the life cycle of the trade, this is a minor quant calculation on its own.
In principle, the Front Office can do the deal only after the credit control approves of it. Credit Risk folks use historical data, internal and external credit rating systems, and their own quantitative modelling team to come up with counterparty credit limits and maximum per trade and netted exposures.
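To see why this is a quant calculation in its own right, here is a minimal sketch of a potential future exposure (PFE) profile computed from simulated mark-to-market paths. The input layout and the 95th percentile confidence level are illustrative assumptions, not any particular bank’s convention.

```cpp
// A sketch of a potential future exposure (PFE) profile. Input: a
// matrix of simulated mark-to-market values, one row per path, one
// column per future date. Exposure is the positive part of MTM (what
// the counterparty would owe us); the 95th percentile is an
// illustrative choice of confidence level.
#include <algorithm>
#include <iostream>
#include <vector>

using Matrix = std::vector<std::vector<double>>;   // [path][timeStep]

std::vector<double> pfeProfile(const Matrix& mtm, double pct = 0.95) {
    size_t nPaths = mtm.size(), nSteps = mtm[0].size();  // assumes >= 1 path
    std::vector<double> profile(nSteps);
    for (size_t t = 0; t < nSteps; ++t) {
        std::vector<double> expo(nPaths);
        for (size_t p = 0; p < nPaths; ++p)
            expo[p] = std::max(mtm[p][t], 0.0);    // exposure = MTM if positive
        std::sort(expo.begin(), expo.end());
        profile[t] = expo[static_cast<size_t>(pct * (nPaths - 1))];
    }
    return profile;
}

int main() {
    // Three toy paths, two future dates.
    Matrix mtm = {{1.0, -2.0}, {3.0, 4.0}, {-1.0, 5.0}};
    for (double e : pfeProfile(mtm)) std::cout << e << " ";  // PFE per date
}
```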
Right after the trade is booked, it goes through some control checks by the Middle Office. These fine people verify the trade details, validate the initial pricing, apply some reasonable reserves against the insane profit claims of the Front Office, and come up with a simple yea or nay to the trade as it is booked. If they say yes, the trade is considered validated and active. If not, the trade goes back to the desk for modifications.
After these inception activities, trades go through their daily processing. In addition to the daily (or intra-day) hedge rebalancing in the Front Office, the Market Risk Management folks mark their books to market. They also take care of compliance reporting to regulatory bodies, as well as risk reporting to the upper management — a process that has far-reaching consequences.
The Risk Management folks, whose work is never done as Tracy Chapman would say, also perform scenario, stress-test and historical Value at Risk (VaR) computations. In stress-tests, they apply a drastic market movement of the kind that took place in the past (like the Asian currency crisis or 9/11) to the current market data and estimate the movement in the bank’s book. In historical VaR, they apply the market movements of the immediate past (typically the last year) and figure out the 99th percentile (or some such predetermined level) worst-loss scenario. Such analysis is of enormous importance to the senior management and in regulatory and compliance reporting. In Figure 1, the activities of the Risk Management folks are depicted in blue bubbles.
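The mechanical core of historical VaR is simple enough to sketch. Assuming the hard part (revaluing the book under each historical market move) has already produced a vector of scenario P/Ls, the 99th percentile loss is a sort and a lookup:

```cpp
// A sketch of the last step of historical VaR: given the P/L the
// current book would have made under each historical day's market
// move, sort and read off the 99th percentile loss.
#include <algorithm>
#include <iostream>
#include <vector>

double historicalVaR(std::vector<double> scenarioPnl, double pct = 0.99) {
    std::sort(scenarioPnl.begin(), scenarioPnl.end());   // worst losses first
    size_t idx = static_cast<size_t>((1.0 - pct) * (scenarioPnl.size() - 1));
    return -scenarioPnl[idx];          // report VaR as a positive loss number
}

int main() {
    // Toy scenario P/Ls in millions; in practice there would be a
    // year's worth, roughly 250 of them.
    std::vector<double> pnl = {-5.2, -1.1, 0.3, 2.4, -3.7, 1.8, -0.6, 4.1};
    std::cout << "99% VaR: " << historicalVaR(pnl) << "\n";   // 5.2
}
```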
In their attempts to rein in the ebullient traders, the Risk Management folks come across at their adversarial worst. But we have to remind ourselves that the trading and control processes are designed that way. It is the constant conflict between the risk takers (Front Office) and the risk controllers (Risk Management) that implements the risk appetite of the bank, as decided by the upper management.
Another group that crunches the trade numbers every day, from a slightly different perspective, is Product Control, shown in green in Figure 1. They worry about the daily profit and loss (P/L) movements at both trade and portfolio level. They also modulate the profit claims of the Front Office through a reserving mechanism and come up with the so-called unrealized P/L.
This P/L, unrealized as it is, has a direct impact on the compensation and incentive structure of the Front Office in the short run. Hence the perennial tussle over the reserve levels. In the long term, however, the trade gets settled, the P/L becomes realized and nobody argues over it. Once the trade is in the maturity phase, it is Finance that worries about statistics and cash flows. Their big-picture view ends up in annual reports and stakeholder meetings, and influences everything from our bonuses to the CEO’s new Gulfstream.
Trades are not static entities. During the course of their life, they evolve. Their evolution is typically handled by Middle Office people (grey bubbles) who worry about trade modifications, fixings, knock-ins, knock-outs etc. The exact name given to this business unit (and indeed other units described above) depends on the financial institution we work in, but the trade flow is roughly the same.
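Such lifecycle events boil down to state transitions on the trade. Here is a minimal sketch, with illustrative names, of how a booking system might apply a daily fixing to a knock-out feature:

```cpp
// A sketch of a lifecycle event: the Middle Office applies each
// day's official fixing, and the trade's status flips once the
// knock-out barrier is breached. All names are illustrative.
#include <initializer_list>
#include <iostream>

enum class TradeStatus { Active, KnockedOut, Matured };

struct BarrierTrade {
    double barrier;                          // knock-out level
    TradeStatus status = TradeStatus::Active;

    void applyFixing(double fixing) {        // one day's fixing event
        if (status == TradeStatus::Active && fixing >= barrier)
            status = TradeStatus::KnockedOut;
    }
};

int main() {
    BarrierTrade trade{105.0};
    for (double fix : {101.0, 103.5, 106.2})   // three daily fixings
        trade.applyFixing(fix);
    std::cout << (trade.status == TradeStatus::KnockedOut
                      ? "knocked out" : "active") << "\n";
}
```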
The trade flow I have described so far should ring alarm bells in a quant’s heart. Where are the quants in this value chain? Well, they are hidden in a couple of places. Some find a home in Market Risk Management, validating pricing models. Others may live in Credit Risk, estimating peak exposures, figuring out rating schemes and minimising capital charges.
Most important of all, they find their place before a trade is ever booked. Quants teach their home banks how to price products. A financial institution cannot warehouse the risk associated with a trade unless it knows how much the product in question is worth. It is in this crucial sense that model quants drive the business.
In a financial marketplace that is increasingly hungry for customized structures and solutions, the role of the quants has become almost unbearably vital. Along with the need for innovative models comes the imperative of robust platforms to launch them in a timely fashion to capture transient market opportunities.
In our better investment banks, such platforms are built in-house. This trend towards self-reliance is not hard to understand. If we use a generic trading platform from a vendor, it may work well for established (read vanilla) products. It may handle the established processes (read compliance, reporting, settlements, audit trails etc.) well. But what do we do when we need a hitherto unknown structure priced? We could ask the vendor to develop it. But then, they will take a long time to respond. And, when they finally do, they will sell it to all our competitors, or charge us an arm and a leg for exclusivity, thereby eradicating any associated profit potential.
Once a vended solution is off the table, we are left with the more exciting option of developing an in-house system. It is when we design an in-house system that we need to appreciate the big picture. We will need to understand the whole trade flow through the different business units and processes, as well as the associated trade perspectives.
Trade Perspectives
The perspective that is most common these days is trade-centric. In this view, trades are the primary objects, which is why conventional trading systems keep track of them. Put a bunch of trades together and you get a portfolio. Put a few portfolios together and you have a book. The whole of Global Markets is merely a collection of books. This paradigm has worked well and is probably the best compromise between the different possible views.
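That hierarchy maps naturally onto a composite structure, in which rolling a value up from trades to books is plain recursion. A minimal sketch with illustrative class names:

```cpp
// A sketch of the trade/portfolio/book hierarchy as a composite:
// every node reports a value, and rolling it up is recursion.
// Class names are illustrative.
#include <iostream>
#include <memory>
#include <utility>
#include <vector>

class Position {                         // common interface for all levels
public:
    virtual ~Position() = default;
    virtual double value() const = 0;    // mark-to-market value
};

class Trade : public Position {          // leaf: one booked trade
public:
    explicit Trade(double mtm) : mtm_(mtm) {}
    double value() const override { return mtm_; }
private:
    double mtm_;
};

class Portfolio : public Position {      // node: holds trades or portfolios
public:
    void add(std::unique_ptr<Position> p) { children_.push_back(std::move(p)); }
    double value() const override {      // aggregate the children
        double sum = 0.0;
        for (const auto& c : children_) sum += c->value();
        return sum;
    }
private:
    std::vector<std::unique_ptr<Position>> children_;
};

int main() {
    auto folio = std::make_unique<Portfolio>();
    folio->add(std::make_unique<Trade>(1.5));
    folio->add(std::make_unique<Trade>(-0.4));
    Portfolio book;                      // a book holds portfolios and trades
    book.add(std::move(folio));
    book.add(std::make_unique<Trade>(2.1));
    std::cout << "Book value: " << book.value() << "\n";   // 3.2
}
```

The payoff of this design shows up later in the lifecycle: the slicing and dicing that Market Risk and Product Control demand becomes a matter of walking the same tree from different nodes.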
But the trade-centric perspective is only a compromise. The activities of the trading floor can be viewed from different angles. Each view has its role in the bigger scheme of things in the bank. Quants, for instance, are model-centric. They try to find commonality between various products in terms of the underlying mathematics. If they can reuse their models from one product to another, potentially across asset classes, they minimize the effort required of them. Remember how Merton views the whole world as options! I listened to him in amazement once when he explained the Asian currency crisis as originating from the risk profile of compound options — the bank guarantees to corporate clients being put options, government guarantees to banks being put options on put options.
Unlike quants who develop pricing models, quantitative developers tend to be product-centric. To them, it doesn’t matter too much even if two different products use very similar models. They may still have to write separate code for them depending on the infrastructure, market data, conventions etc.
Traders see their world from the asset-class angle. Typically attached to trading desks organized by asset class, their favourite view cuts across models and products. To traders, all products and models are merely tools for making profit.
IT folks view the trading world from a completely different perspective. Theirs is a system-centric view, where the same product using the same model appearing in two different systems is basically two different beasts. This view is not particularly appreciated by traders, quants or quant developers.
One view that all of us appreciate is the view of the senior management, which is narrowly focussed on the bottom line. The big bosses can prioritise things (whether products, asset classes or systems) in terms of the money they bring to the shareholders. Models and trades are typically not visible from their view — unless, of course, rogue traders lose a lot of money on a particular product or by using a particular model. Or, somewhat less likely, they make huge profits using the same tricks.
When the trade reaches the Market Risk folks, there is a subtle change in perspective from a trade-level view to a portfolio or book-level view. Though mathematically trivial (after all, the difference is only a matter of aggregation), this change has implications for system design. Trading systems have to maintain a robust hierarchical portfolio structure so that the various slicing and dicing required in the later stages of the trade lifecycle can be handled with natural ease.
The busy folks in the Middle Office (who take care of trade validations and modifications) are obsessed with trade queues. They have a validation queue, a market operations queue and so on. Again, the management of queues using status flags is something we have to keep in mind while designing an in-house system.
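The flags and queues themselves are easy to model; the design challenge is keeping them consistent as a trade hops from one queue to the next. A minimal sketch with illustrative statuses:

```cpp
// A sketch of queue management with status flags: trades sit in a
// validation queue until the Middle Office passes or bounces them.
// The statuses and the rejection rule here are illustrative.
#include <deque>
#include <iostream>

enum class Status { PendingValidation, Validated, Rejected };

struct TradeTicket {
    int id;
    Status status = Status::PendingValidation;
};

int main() {
    std::deque<TradeTicket> validationQueue = {{101}, {102}, {103}};
    std::deque<TradeTicket> activeBook;

    while (!validationQueue.empty()) {
        TradeTicket t = validationQueue.front();
        validationQueue.pop_front();
        // Stand-in for the real check: verify trade details, validate
        // the initial pricing, apply reserves.
        t.status = (t.id != 102) ? Status::Validated : Status::Rejected;
        if (t.status == Status::Validated)
            activeBook.push_back(t);               // trade goes live
        else
            std::cout << "Trade " << t.id << " goes back to the desk\n";
    }
    std::cout << activeBook.size() << " trades validated\n";
}
```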
When it comes to Finance and their notion of cost centres, the trade is pretty much out of the booking system. Still, they manage trading desks and asset classes as cost centres. Any trading platform we design has to provide adequate hooks in the system to respond to their specific requirements as well.
Quants and the Big Picture
Most quants, especially at junior levels, despise the Big Picture. They think of it as a distraction from their real work of marrying stochastic calculus to C++. Changing that mindset to some degree is the hidden agenda behind this column.
As my trader friends will agree, the best model in the world is worthless unless it can be deployed. Deployment is the fast track to the big picture — no point denying it. Besides, in an increasingly interconnected world where a crazy Frenchman’s actions instantly affect our bonus, what is the use of denying the existence of the big picture in our neck of the woods? Instead, let’s take advantage of the big picture to empower ourselves. Let’s bite the bullet and sit through a “Big Picture 101.”
When we change our narrow, albeit effective, focus on the work at hand to an understanding of our role and value in the organization, we will see the potential points of failure of the systems and processes. We will be prepared with possible solutions to the nightmarish havoc that computerized processes can wreak. And we will sleep easier.