
Managing Fat Tail Risk in Quant Manager Selection

Platypus Capital Management is a systematic quantitative money management firm. Our people have worked on both sides of the quant fence: as managers and as hedge fund allocators. The knowledge gained in one incarnation has clearly informed how we go about our tasks in the other. We therefore have some well-formed views on the sorts of factors that are likely to give high or low levels of confidence in any particular quantitative investment programme. We would like to share some of our experiences with you and invite your comments. It is our view that the chances of a quant programme failing are very high unless certain key steps are taken.

The objective of this publication, then, is to proffer some insights which potential investors in quantitative managers might use to help avoid some of the pitfalls with this style of analysis. We hope that by doing this we can help contribute to greater understanding and acceptance of quantitative approaches to investment.

Some Problem Areas with Quant Manager Due Diligence

We start by examining some of the more common hurdles in this area of quant manager selection.

Issue #1: Reliance on live trading only

When you buy a quant programme you are buying the ability of the manager to convert the results of the simulations into live performance with a similar profile. A programme may contain serious errors which can persist undetected for long periods in benign market conditions. It is not necessarily the case that a long track record verifies the robustness of a quant model, yet this seems to be the dominant view in the investor world. The worst-case scenario (yet one that frequently occurs) is for an allocator to continue to invest money with a quant manager who then fails due to model malfunction or mis-specification.

Key areas to be concerned about are:

  • That the live trading is being done in the same way as the historically simulated trading or, where there are deviations, that these can be easily and logically explained and are unlikely to have significant effects.
  • That the historical simulations were themselves done rigorously and that the robustness of the models has been thoroughly tested and can be intelligibly explained.

Without the second point (discussed further below) we would view the investment programme as very high risk.

Issue #2: The risk of “reputational blindness”

Our experience, coming from both the fund of funds and the manager worlds, is that it is sometimes easy to be intimidated by the academic qualifications, employment history or the company reputation of the manager. When this happens, there is a greater degree of acceptance of the manager’s explanations and a greater reluctance to investigate the investment models deeply. You may call this the “Long Term Capital Management Effect” and in our experience it is very real. There is no substitute for rolling up your sleeves and understanding the models and the testing methodology behind them, and in most cases this will require you to suspend all awe and ask the “dumb questions” that sometimes appear so embarrassing. It is sometimes surprising what surfaces when you take this approach.

Issue #3: Do more factors mean greater robustness?

We have no a priori bias for or against multi-factor models AS LONG AS they are designed and tested in a logical and robust manner. It is our collective experience, however, that:

  • The more factors within a model, the more difficult it is for the allocator to truly understand all of the risks in the complete process.
  • The more factors, and the more parameters, the greater the risk of curve fitting both the parameters and the weights attaching to the factors.
  • A high number of factors and parameters, regardless of their apparent plausibility, should more correctly be viewed as INCREASING the risks of the model rather than decreasing them; exceptions to this principle exist only in very rare circumstances. The degree of scepticism, and the acceptance hurdles, should therefore be higher for models containing a larger number of factors than for those containing fewer.

We understand that this is not the conventional wisdom, but our own view internally (and this guides our own model development) is: “If it doesn’t work simply, then it simply doesn’t work.”
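To make the curve-fitting point concrete, here is a minimal Python sketch (entirely illustrative, not any manager's model): regressing a pure-noise return series on an increasing number of random "factors". The in-sample fit improves with every factor added, even though no factor has any real predictive content.

```python
import numpy as np

# Illustrative only: none of these "factors" has any predictive content,
# yet the in-sample fit rises mechanically as more of them are added.
rng = np.random.default_rng(0)
n_days = 250
returns = rng.normal(0, 0.01, n_days)  # pure-noise "asset returns"

def in_sample_r2(n_factors):
    """R-squared of an OLS fit of `returns` on n random factor series."""
    X = rng.normal(size=(n_days, n_factors))
    beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
    resid = returns - X @ beta
    return 1 - resid.var() / returns.var()

r2_few, r2_many = in_sample_r2(2), in_sample_r2(50)
print(f"R^2 with  2 factors: {r2_few:.3f}")
print(f"R^2 with 50 factors: {r2_many:.3f}")  # far higher, purely by chance
```

The 50-factor fit looks dramatically "better" in-sample while being exactly as worthless out-of-sample, which is why complexity should raise the acceptance hurdle, not lower it.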

Issue #4: Don’t overlook the nature of the data used in the simulations

It is important that the data used for the development and testing of a quant model be appropriate to the needs of the programme. In particular, an allocator needs to be satisfied that:

  • The data used is as free of biases (e.g. survivorship bias, reporting bias) as possible.
  • Gaps in, or patches to, the data are acknowledged and the use of such data justified.
  • There are sufficient data points so as to enable statistically robust conclusions to be drawn.

It is very important that the models have been tested over a wide range of market conditions using in-sample and out-of-sample data (ideally both). The importance of linking the quality of the data in the simulations with the likely outcomes in live trading is seldom appreciated.
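As a hedged illustration of the in-sample/out-of-sample discipline (the toy momentum rule, its parameters and the split point below are our own assumptions, not a recommendation): run the same rule, completely unchanged, on a chronologically later slice of data and compare the results.

```python
import numpy as np

# Illustrative sketch: a toy rule evaluated in-sample, then re-run
# unchanged on a chronologically later out-of-sample slice.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 1000)))

def rule_pnl(prices, lookback):
    """Daily P&L of a toy momentum rule: long if price rose over `lookback` days."""
    rets = np.diff(np.log(prices))
    signal = np.sign(prices[lookback:-1] - prices[:-lookback - 1])
    return signal * rets[lookback:]  # signal uses only data up to each day

split = 700  # first 700 days in-sample, remaining 300 out-of-sample
pnl_is = rule_pnl(prices[:split], lookback=20)
pnl_oos = rule_pnl(prices[split:], lookback=20)  # same parameters, untouched data
print("in-sample mean daily pnl :", pnl_is.mean())
print("out-of-sample mean pnl   :", pnl_oos.mean())
```

A robust model shows a broadly similar profile on the untouched slice; a large degradation is the classic symptom of curve fitting or biased data.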

What REALLY Matters in Evaluating a Quant Fund?

Having addressed some of the common problems in the analysis of quant managers, it is also possible to list several factors which, from our experience, are vital in separating those managers likely to fail from those generally likely to replicate their historical simulations.

Success Factor #1: Require the manager to justify the need for complexity

For us, the risk of model failure increases multiplicatively with the number of factors and the number of parameters. We feel that you should start from the presumption that "simpler is better" and require the manager to justify why they have added complexity. Further, ask for statistical justification showing how the complexity has added to performance, and evidence of the robustness of that justification, including the testing carried out on each factor and parameter separately.

Success Factor #2: Check for in-principle versus optimised parameter values

If there is no underlying logical rationale behind the selection of key parameters then it is likely that those parameters were selected through data mining, i.e. re-running the simulations until a profitable (even the most profitable) set of parameters becomes apparent.

This is dangerous, indeed mostly terminal, for a single-factor model, and the dangers increase if this process has been repeated across several factors and multiple parameters. This approach indicates that the model will fit the past very well but that any success in the future may be serendipitous rather than systematic. For multi-factor models, we would recommend that you ask:

  • Why were these particular factors chosen rather than others?
  • How were the weightings assigned to each factor chosen?
  • How many parameters are used in each factor?
  • How were the parameter values chosen?
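The data-mining danger above can be demonstrated in a few lines. The sketch below (a toy rule of our own, purely illustrative) scans lookback values over a pure-noise return series and keeps the most profitable one; the "best" result inevitably looks better than the typical one, despite there being no edge at all.

```python
import numpy as np

# Illustrative only: scanning many parameter values on zero-edge data and
# keeping the winner manufactures an apparently profitable backtest.
rng = np.random.default_rng(2)
noise_rets = rng.normal(0, 0.01, 500)  # returns with zero true edge

def backtest(lookback):
    """Toy rule: go long when the trailing `lookback`-day return sum is positive."""
    sig = np.array([np.sign(noise_rets[t - lookback:t].sum())
                    for t in range(lookback, len(noise_rets) - 1)])
    return (sig * noise_rets[lookback:-1]).sum()

results = {lb: backtest(lb) for lb in range(2, 60)}
best_lb, best_pnl = max(results.items(), key=lambda kv: kv[1])
print(f"best lookback {best_lb}: cumulative pnl {best_pnl:.3f}")
print(f"median over all lookbacks: {np.median(list(results.values())):.3f}")
```

The selected parameter fits the past very well; its future performance would be serendipity, not system, which is exactly the distinction the questions above are designed to surface.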

Success Factor #3: Analyse in detail the testing of the robustness of the parameters

If a manager is unable or unwilling to show you detailed results of stress-testing the chosen parameters using both in-sample and out-of-sample data, across a range of values, then this should raise very large red flags.

This should be one of your most important tests of the level of robustness of the manager’s approach and of the likely replicability of the simulations in live trading. Managers who use intellectual property or confidentiality concerns to decline to show you sufficient information to form a view as to robustness should be viewed with the utmost scepticism regardless of reputation, corporate lineage or even length of live track record.

Importantly, assumed parameters for transaction costs, slippage and interest rates used in simulations should be explained and justified, and compared with those achieved in live trading results. This is even more relevant for high-frequency traders.
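One simple form of the parameter stress-testing we have in mind can be sketched as follows (the 50% plateau threshold and the toy P&L surface are illustrative assumptions of ours, not a standard): perturb a chosen parameter around its selected value and check whether performance sits on a broad plateau rather than a single sharp spike.

```python
import numpy as np

def stress_test(backtest, centre, rel_range=0.3, steps=7):
    """Run `backtest` over parameter values +/- rel_range around `centre`."""
    values = np.linspace(centre * (1 - rel_range), centre * (1 + rel_range), steps)
    pnls = np.array([backtest(v) for v in values])
    # crude plateau test: neighbouring values should retain most of the peak pnl;
    # a sharp spike around one value suggests a curve-fit parameter
    plateau = pnls.min() > 0.5 * pnls.max() if pnls.max() > 0 else False
    return values, pnls, plateau

# toy quadratic "pnl surface" with a broad optimum near parameter = 10
values, pnls, plateau = stress_test(lambda p: 1.0 - 0.002 * (p - 10) ** 2, centre=10)
print("parameter values:", np.round(values, 2))
print("pnls:", np.round(pnls, 3))
print("plateau (robust):", plateau)
```

A manager should be able to show the equivalent of this scan, on both in-sample and out-of-sample data, for every key parameter.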

Success Factor #4: Determine where the model ends and where manual intervention begins

A hybrid quant/discretionary process really means that you are reliant on the discretionary judgement of the manager rather than the quant model. We would argue that this kind of manager is not a true quant manager, but rather a discretionary manager who may, or may not, use the output of quant processes from time to time. We would recommend that very transparent criteria be associated with any subjective intervention, and a truly quantitative manager, in our view, has a minimum of discretionary intervention. The greater the intervention, the less reliable are the simulations.

Success Factor #5: A good quant model should have risk management built in

This observation may seem to be stating the obvious, but we have seen situations where risk management is applied ex post through a non-quantitative overlay. This approach can make the historical simulations less relevant, or even misleading in some cases, and is an example of the earlier problem of mixing quantitative and non-quantitative approaches under the “quant” banner.

More specifically, we would think that a sound due diligence process would want to know about:

  • Exit signals, not just entry signals, and the logic and parameter values around these
  • Position sizing – what is the chosen methodology and why
  • Manual versus automatic trade execution
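On position sizing specifically, one common methodology an allocator might expect a manager to justify is fixed-fractional risk per trade. The sketch below is purely illustrative and not a recommendation of ours; the point is that the sizing rule should be explicit, testable and part of the simulated model.

```python
def position_size(equity, risk_fraction, entry_price, stop_price):
    """Units to trade so that a stop-out loses at most risk_fraction of equity."""
    risk_per_unit = abs(entry_price - stop_price)
    if risk_per_unit == 0:
        raise ValueError("entry and stop prices must differ")
    return (equity * risk_fraction) / risk_per_unit

# risking 1% of a 1,000,000 account with a 2-point stop gives 5,000 units
size = position_size(1_000_000, 0.01, entry_price=100.0, stop_price=98.0)
print(size)
```

Whatever scheme the manager uses, it should be this concrete: a deterministic function of equity and risk, with its parameters included in the historical simulations.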

Putting it all together

We would not presume to argue that there is one and only one method of successfully analysing quant managers. We would say, however, that you should keep a checklist with you to make sure that you are not being swayed unduly by reputation, AUM, confidentiality or complexity. We would summarise our position in the following table:

Table 1: Recommended Do’s and Don’ts with Quant Hedge Fund Due Diligence

Do’s:
  • Ensure the manager statistically justifies deviation from simplicity
  • Place significant weight on the simulations, not just the live trading, and look for any discrepancies
  • Examine the manager’s reputation
  • Be sceptical of managers who will not show you detailed parameter stress-testing using in-sample and out-of-sample data
  • With multi-factor models, ask how the current factor weightings were determined and ensure this wasn’t done via optimisation. Also check all parameters on the same basis. The more complexity, the more checking.
  • Investigate the data used for the testing and the data used for the live trading
  • Look for risk management being integrated into the tested model rather than being bolted on later

Don’ts:
  • Don’t accept complexity as prima facie evidence of likely success
  • Don’t accept a good live track record alone as conclusive evidence that the models are working
  • Don’t be blinded by the reputation of the firm or its individuals
  • Don’t accept a refusal, on grounds of confidentiality, to disclose detailed robustness testing
  • Don’t allocate to managers who selected key factor weights or parameters using optimisation or data mining
  • Don’t confuse a quant manager, on the one hand, with a discretionary manager using quantitative tools, on the other
  • Don’t accept a high AUM as evidence that the models necessarily work – remember LTCM

We have produced a brief diagrammatic summary of our view of major steps in the quantitative due diligence process as shown below.


We would love your comments, including robust debate, in relation to any of these issues. Our intent is to improve the quality of quantitative due diligence and thereby help raise both the standards of quantitative managers and their acceptance within the investment community. If investors can drive this process then we all benefit.