Today, the average investor faces an explosion in both the volume and the variety of available data.
Compared to 50 years ago, investors must process far larger volumes of data and convert them into consistently good insights quickly enough to beat the markets.
Data, information, insight, and action are guiding principles for quantitative investors, reminders of what we should be spending our time on.
The order of these concepts matters: only by refining raw data step by step do we arrive at informed decisions.
This approach forms the basis of our proprietary in-house data systems and calculation engines, and its success is evidenced in our portfolios.
Dimensionality of data
Investing on a global scale requires the processing of a lot of data.
This includes trade data; fundamental data, such as company financials; metadata, such as sector, industry or country classifications; news and media data; and novel sources, such as satellite images of shopping mall parking lots or credit card receipts.
Different analyses and backtests need different levels and types of data.
The dimensions of the data can change based on the investible universe and multiple funds could require overlapping dimensions of data.
Ultimately, raw data is usually unusable and requires processing.
Distilling the data
In quantitative investment strategies, to get from data to information, we need to process the data.
Take the momentum style as an example: calculating momentum for global stocks requires thousands of data points per day.
We may need to compare the prices of stocks today to the prices three months ago, for example.
In a global investing universe, this means the same calculation is done on thousands of stocks per day.
The sheer volume of data handled in these calculations means that simple tools like Microsoft Excel won’t cut it.
We need to use computer programming languages where we can automate the calculations and perform them quickly.
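As a simplified illustration (not our production code), a vectorized three-month momentum calculation across an entire universe takes only a few lines of Python with pandas. The tickers and prices below are simulated; a real universe would have thousands of columns:

```python
import numpy as np
import pandas as pd

# Simulated daily closing prices for a small universe (illustration only).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2024-01-02", periods=90)  # roughly 3 months of trading days
tickers = ["AAA", "BBB", "CCC"]
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, (len(dates), len(tickers))), axis=0)),
    index=dates,
    columns=tickers,
)

# 3-month price momentum: today's price relative to ~63 trading days ago.
# One vectorized call handles every stock in the universe at once.
lookback = 63
momentum = prices.pct_change(periods=lookback).iloc[-1]
print(momentum.sort_values(ascending=False))
```

The same pattern scales to thousands of names: the loop over stocks disappears into a single column-wise operation, which is what makes daily recomputation across a global universe practical.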
The database that stores the raw data can also store the calculated data, giving a single point of storage with less chance of error.
This single source of truth also means that data is not passed between team members in files that can easily be modified.
Calculations can run as soon as new data arrives, and the calculated data can be made available on demand.
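A minimal sketch of this single-source-of-truth idea, using an assumed, simplified schema: raw prices and derived momentum values live in the same database, and the derived value is written back as soon as the raw data lands.

```python
import sqlite3

# Assumed schema for illustration: one database holds both raw and derived data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_prices (ticker TEXT, dt TEXT, px REAL)")
con.execute("CREATE TABLE momentum (ticker TEXT, dt TEXT, value REAL)")

rows = [("AAA", "2024-01-02", 100.0), ("AAA", "2024-04-02", 110.0)]
con.executemany("INSERT INTO raw_prices VALUES (?, ?, ?)", rows)

# As soon as new raw data arrives, the derived value is computed and written
# back to the same store, ready to be served on demand.
(p0,), (p1,) = con.execute(
    "SELECT px FROM raw_prices WHERE ticker='AAA' ORDER BY dt"
).fetchall()
con.execute(
    "INSERT INTO momentum VALUES (?, ?, ?)", ("AAA", "2024-04-02", p1 / p0 - 1)
)
(mom,) = con.execute("SELECT value FROM momentum").fetchone()
print(f"3-month momentum for AAA: {mom:.1%}")  # prints "3-month momentum for AAA: 10.0%"
```

Because every consumer reads from the same tables, there is no spreadsheet copy drifting out of sync with the source.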
Data over dogma
Gaining insights from data is one of the most important steps we can take as investment professionals.
Formulating deep, meaningful insights from data helps us understand market conditions and position our portfolios better.
One example is our work on a global cycle indicator, which updates automatically and informs portfolio positioning based on the economic phase it identifies.
These insights are all continuously tested and updated automatically, with the system operating like a growing knowledge bank of innovative research and insights.
Creating cutting-edge proprietary technology
Given the scope of data that investors need to process, a proprietary system of cutting-edge technology is necessary for analysts and portfolio managers to gain a competitive advantage.
Our investment team uses bespoke databases, robust analytics engines and a front end accessible to any team member from any device, giving them the best chance of making the right decisions with our clients’ capital.
Portfolio positioning decisions can be taken after distilling large amounts of information quickly and accurately, and the systems can be adapted as needed in pursuit of market-beating returns in our portfolios.
The idea of data-driven decisions comes to life through this approach and the only limitation is our own imagination.
Data-driven investing in practice
It is one thing to talk about the principles of turning data into insights; it is another to turn data into performance.
Our Old Mutual Global Managed Alpha Strategy uses a proprietary systematic model to evaluate six broad market drivers or factor buckets – value, growth, quality, momentum, size and volatility (risk).
The process is style agnostic: the fund is not married to any one style, but tilts toward or away from individual factors depending on the forecast return drivers.
Our data analysis has shown us how the factors driving the market change through time.
This highlights the need for a dynamic approach to factor investing.
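The fund’s actual model is proprietary, but the tilting idea can be illustrated with a hypothetical rule: scale over- and underweights in proportion to forecast factor returns, within a fixed active-risk budget. Every name and number below is assumed for illustration:

```python
import numpy as np

# Hypothetical forecast factor returns for the six factor buckets (assumed values).
factors = ["value", "growth", "quality", "momentum", "size", "volatility"]
forecasts = np.array([0.02, -0.01, 0.01, 0.03, -0.005, -0.015])

# Tilt toward factors with positive forecasts and away from negative ones,
# scaled so total absolute tilt equals a fixed budget.
budget = 0.10
tilts = budget * forecasts / np.abs(forecasts).sum()
for name, t in zip(factors, tilts):
    print(f"{name:>10}: {t:+.3f}")
```

As forecasts change through time, the tilts change with them, which is the dynamic behaviour the paragraph above describes.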
It is also worth emphasising that the estimation of factor returns across all factors and all companies in our universe is done simultaneously, which points to the agility and speed of the data analysis process.
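The article does not detail the estimator, but one standard way to estimate returns to all factors across all companies simultaneously is a single cross-sectional least-squares regression of stock returns on factor exposures. A sketch with simulated data:

```python
import numpy as np

# Simulated data: the regression below is a standard technique, not the fund's
# disclosed model. One solve recovers all six factor returns at once.
rng = np.random.default_rng(1)
factors = ["value", "growth", "quality", "momentum", "size", "volatility"]
n_stocks = 500

X = rng.normal(size=(n_stocks, len(factors)))                  # standardized exposures
true_f = np.array([0.02, -0.01, 0.01, 0.03, -0.005, -0.015])   # assumed factor returns
r = X @ true_f + rng.normal(0, 0.05, n_stocks)                 # one period of stock returns

# Cross-sectional regression: estimate every factor return simultaneously.
f_hat, *_ = np.linalg.lstsq(X, r, rcond=None)
for name, f in zip(factors, f_hat):
    print(f"{name:>10}: {f:+.4f}")
```

Because the exposures enter one joint regression, each factor return is estimated controlling for the others, which is what makes the simultaneous approach more informative than six separate univariate fits.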
The fund has a solid track record since its inception in December 2017, with gross composite returns of 16.7% over one year versus the Benchmark’s 14%, 9.3% over three years versus 7.5%, and 9.1% over five years versus 7.6%.
Since inception, the fund has returned 10% versus a Benchmark return of 8.4% over the same period: a testament to the power of big data processing and of building consolidated insights out of multiple data points.