Romain Desbiolles and Khadra Mohamed take an inside look at the intelligent systems which drive algorithmic trading at finance powerhouses

The future of trading and beyond

“Goldman Sachs has already begun to automate currency trading, and has found consistently that four traders can be replaced by one computer engineer”, notes Marty Chavez, the Goldman Sachs deputy chief financial officer and former chief information officer, at a Harvard conference on computing’s impact on economic activity. “Some 9,000 people, about one-third of Goldman’s staff, are computer engineers.”

Algorithmic Trading: the what, who and why?

Algorithms (or “algos”) are sets of instructions that drive trading decisions in financial markets. Their key selling point is the ability to execute trades at far greater speed and volume than any human trader. Algorithmic trading achieves this through mathematical models and rules on pricing, timing and quantity that operate beyond the capabilities of the everyday human trader.

Algorithmic software has long been used by hedge funds, investment banks, portfolio managers and others, and continues to be used for its higher accuracy and freedom from emotional influence when making decisions.

Algo trading has some clear benefits: trades are executed consistently at the best available price and timed to avoid sudden crashes and price drops, market conditions around the world are checked automatically every day, and the risk of human error in stock trading is reduced.

Today, most algorithmic trading is HFT (high-frequency trading), which seeks to profit by placing a large number of orders at rapid speed across multiple markets, following multiple decision parameters set out in preprogrammed instructions. The impact of algo trading has grown more prominent with increasing market instability, a trend shared across European and American markets.

The Rise and (brief) crash of Algorithmic Trading

Algo trading was sparked by the spread of electronic communications from the 1980s onwards. What cemented its use was the SEC’s (Securities & Exchange Commission) 1998 authorization of electronic exchanges, which made high-frequency trading possible. Shortly afterwards, in 2001, the US underwent decimalisation, which changed the market microstructure of shares by enabling smaller differences between bid and offer prices; this shift effectively ignited algorithmic trading.

By the start of the millennium, HFT trades were being executed in a matter of seconds. By 2010 this had shrunk to mere milliseconds, then microseconds, and by 2012 it reached nanoseconds. This ever-increasing speed was a key selling point of algorithmic trading, and by 2010 some 56% of equity trades in the US were made by HFT.

However, on May 6th 2010, the Dow Jones plummeted nearly 1,000 points in a matter of minutes, wiping almost $1 trillion off the market’s value. This “Flash Crash” was triggered by an algorithm-driven sale worth $4.1 billion. Although the market recovered moments later, the damage was enough to warrant a change in perspective towards dependence and reliance on algorithmic trading.

Algorithmic trading now accounts for around 80% of trading in the US, a share criticized by Guy de Blonay, fund manager at Jupiter Asset Management, who argues that it reduces the focus on earnings and outlooks and encourages cashing in on short-term movements. Algorithmic trading is thus both welcomed and kept under review, given the widespread financial damage it can cause.

Machine learning (how it works)

Machine learning, in particular, is of great interest to institutional trading firms. One major advantage machine learning has is its ability to leverage a virtually infinite amount of data while never being tired or emotional, which would theoretically make its performance better than that of a human trader.

But how exactly does this software work? Since there are many types of machine learning, we will focus on just one: neural networks.

Source: IBM, “What are Neural Networks?”

Simply put, a neural network takes input values and passes them through hidden layers of “channels” — essentially multiplication coefficients (weights) — connected to neurons, which let the data pass to the next row only if it passes a mathematical test. For instance, if a neuron fires only when its value exceeds 0.5, then an input of x = 0.6 passes on to the next row, gets multiplied by a new coefficient, and is tested by a new neuron. This process can run through an immense number of layers, but the end of the network remains the same: output neurons give out the final values, which can be binary (1 or 0, signifying positive or negative) or of a more complex nature, predicting particular decimal values.
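The forward pass described above can be sketched in a few lines of code. This is a deliberately minimal illustration, not a real trading model: the weights and the 0.5 threshold are invented for the example, and real networks use smooth activation functions rather than a hard cut-off.

```python
# Minimal sketch of the forward pass: weighted "channels" feed neurons
# that fire (output 1) only if the weighted sum clears a threshold.
# All weights and thresholds here are illustrative, not trained values.

def neuron(inputs, weights, threshold=0.5):
    """Weighted sum of inputs; passes data on only if it clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def tiny_network(x1, x2):
    # Hidden layer: two neurons, each with its own channel weights.
    h1 = neuron([x1, x2], [0.6, 0.4])
    h2 = neuron([x1, x2], [0.3, 0.9])
    # Output neuron combines the hidden activations into a binary signal.
    return neuron([h1, h2], [0.7, 0.7])

print(tiny_network(0.6, 0.8))  # strong inputs activate the output: 1
print(tiny_network(0.1, 0.1))  # weak inputs do not: 0
```

Here every “channel” is just a multiplication, and every neuron is a yes/no test, exactly as in the description above.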

Now let’s take an example. If we want to teach the AI to add numbers a and b to find a value c, we just have to give the neural network a good number of data rows of a, b and c values, where c equals the sum of a and b. The channels will tune their weights, and the neurons will act as filters, so that the final output matches the training data as closely as possible. With sufficient data, the network can learn addition to near-perfect accuracy.
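The weight-tuning in this addition example can be shown with the simplest possible case: a single linear neuron with two weights, nudged towards the correct answer on every training row (a basic gradient-descent step). The learning rate and number of passes are arbitrary choices for the sketch.

```python
import random

# A single linear "neuron" learns c = a + b from example rows by
# tuning its two channel weights. Both weights should converge to 1.0.
random.seed(0)
w1, w2 = random.random(), random.random()  # start with random weights
lr = 0.005                                 # learning rate (arbitrary)

# Training rows: (a, b, c) with c = a + b.
rows = [(a, b, a + b) for a in range(10) for b in range(10)]

for _ in range(500):  # repeated passes over the data tune the weights
    for a, b, c in rows:
        pred = w1 * a + w2 * b
        err = pred - c
        w1 -= lr * err * a  # nudge each weight against its error
        w2 -= lr * err * b

print(round(w1, 3), round(w2, 3))  # both ≈ 1.0
print(round(w1 * 7 + w2 * 5, 2))   # ≈ 12.0 — the network has "learned" addition
```

Because addition is exactly representable by two weights of 1.0, this toy network really does become almost perfectly accurate, as the text suggests.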

This very same principle is used in finance by feeding large amounts of past data to a neural network and scaling it in such a way that the algorithm can spot, for instance, a price-action pattern that signals whether or not to buy. The network could be fed past data on, say, moving averages, together with the price change on the day after. One possibility is that the network would compare the moving averages and check whether the shorter ones have higher values than the longer ones, indicating a positive trend and therefore a potential buy opportunity. The final row of neurons would activate only if this criterion held: a value of 1 would be output if the neuron activated, indicating a buy signal, and 0 if it did not.
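The moving-average pattern just described can be written out directly. This is a hand-coded stand-in for what a trained network might learn, not a trained model: the price series and window lengths (3-day vs 7-day) are made up for illustration.

```python
# Hand-coded version of the pattern described above: emit 1 (buy) when
# the short-term moving average sits above the long-term one, else 0.
# Window lengths and prices are illustrative assumptions.

def moving_average(prices, window):
    """Average of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def buy_signal(prices, short_window=3, long_window=7):
    # The "output neuron" activates only when the short average
    # exceeds the long one, indicating an upward trend.
    short_avg = moving_average(prices, short_window)
    long_avg = moving_average(prices, long_window)
    return 1 if short_avg > long_avg else 0

uptrend = [100, 101, 103, 104, 106, 109, 111]
downtrend = list(reversed(uptrend))
print(buy_signal(uptrend))    # 1 — rising prices, buy signal
print(buy_signal(downtrend))  # 0 — falling prices, no signal
```

A neural network trained on enough historical data could discover a rule like this — or a far subtler one — on its own, rather than having it written in by hand.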

While this example may seem basic and quite fundamental, the benefit of neural networks lies in their ability to analyse vast amounts of data at a scale no human can ever match. In the long run, computers can see patterns humans simply can’t.