Systematic Trading research and development, with a flavour of Trend Following

## Practical Leverage Space Model – a More Realistic Test

#### December 21st, 2010 · 18 Comments · Money Management

How can you apply the Leverage Space Model in real-life trading?

The last study on the Leverage Space Portfolio Model was interesting from a theoretical point of view. Its main value was in illustrating the concepts of the framework with real-life data, but it did not have much practical application: on top of some “impractical” assumptions, the study was done with hindsight: the optimal f values were calculated using past data, then applied to that same past data.

That works fine as a simple exercise, but it is a critical flaw in real-life backtesting.

This post looks at a more realistic approach to applying the Leverage Space Model to Portfolio Construction in practice.

### What sort of Test?

In this test, 6 systems, arbitrarily chosen from the State of Trend Following report, will be considered as potential components of the portfolio:

• Bollinger Bands Breakout – 20 days (BBO-20)
• Bollinger Bands Breakout – 50 days (BBO-50)
• Donchian Channel Breakout – 20 days (Donchian-20)
• Donchian Channel Breakout – 200 days (Donchian-200)
• Moving Average Cross-over – 50/200 days (MA-50-200)
• Triple Moving Average – 20/50/200 days (TMA-20-50-200)

The data used is the set of monthly returns for these 6 systems from 1990 to 2010.

One important constraint for this test is to make sure that the optimal f values are never applied to the data used to calculate them: we’re aiming to prevent hindsight bias.

For the Leverage Space Model, we could simply divide the testing data into two sets: the optimization set (1990 to 1999) and the testing set (2000 to 2010), to which the values found from the optimization would be applied.

However, I decided to look into a more flexible approach: a Walk-Forward test (you can check a quick recap on Walk-Forward testing).

Every year, from 2000 to 2010, the f values applied to the portfolio for that year are derived from the optimal f values for the 10 years directly preceding (i.e. optimize f values from 1990 to 1999 and apply to 2000, optimize f values from 1991 to 2000 and apply to 2001, etc.).
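The rolling scheme above can be sketched in a few lines. The author's actual test was run with the R LSPM package; this Python outline (function name and structure are my own, purely illustrative) only shows the rolling 10-year-optimize / 1-year-apply windowing:

```python
# Sketch of the walk-forward windowing described above. The real test ran an
# LSPM optimization per window; here we only generate the window boundaries.

def walk_forward_windows(first_test_year=2000, last_test_year=2010, lookback=10):
    """Yield (optimization_years, test_year) pairs for the rolling test."""
    for test_year in range(first_test_year, last_test_year + 1):
        opt_years = list(range(test_year - lookback, test_year))
        yield opt_years, test_year

for opt_years, test_year in walk_forward_windows():
    # In the real test: run the LSPM optimizer on opt_years' monthly returns,
    # then trade test_year with the resulting f values (halved to f/2).
    print(f"optimize {opt_years[0]}-{opt_years[-1]} -> apply to {test_year}")
```

The first window optimizes 1990-1999 and applies to 2000; the last optimizes 2000-2009 and applies to 2010, for 11 windows in total.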

Note that this involves running 11 optimizations instead of one. Ideally, the test would be run using “realistic” conditions, meaning an optimization with drawdown constraints, as opposed to pure equity growth optimization.

Because of the difference in running times for each optimization (literally hours vs seconds), I went with the “equity-growth” optimization: the classic optimal f concept, if you like. Optimal f, in this classical form, is often brushed aside as being too risky…
And indeed, in the fourth year of the period under test, a loss greater than 100% appeared. This ruin made it impossible to obtain a meaningful result under these conditions. Instead, I decided to arbitrarily apply half of the f value (f/2) to be able to run the test.
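For reference, the classic single-component optimal f mentioned above can be sketched as a simple grid search maximizing Terminal Wealth Relative, following Vince's formulation. The returns and grid step below are made-up placeholders, not the post's data:

```python
# Minimal sketch of classic "optimal f" (equity growth only, no drawdown
# constraint): grid-search the f in (0, 1) maximizing TWR, where
# TWR(f) = prod(1 + f * (-r / worst_loss)) over the period returns r.

def optimal_f(returns, step=0.001):
    worst_loss = min(returns)
    assert worst_loss < 0, "optimal f needs at least one losing period"
    best_f, best_twr = 0.0, 1.0
    f = step
    while f < 1.0:
        twr = 1.0
        for r in returns:
            twr *= 1.0 + f * (-r / worst_loss)
        if twr > best_twr:
            best_f, best_twr = f, twr
        f += step
    return best_f

returns = [0.10, -0.05, 0.20, -0.10, 0.15]   # made-up monthly returns
f = optimal_f(returns)
print("optimal f:", f, "-> traded at f/2:", f / 2)
```

Halving the resulting f, as done in this test, trades the portfolio well inside the growth-optimal point, at much reduced (but, as the results show, still high) risk.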

The main point of this test is to compare the performance of a walk-forward, practical approach versus the theoretical return obtained using hindsight. For both these tests, f/2 will be used instead of the optimal f.
Additionally, since the walk-forward results start from 2000 (1990 to 1999 being only used for optimization), other tests and results in this post will also cover the same period.

### Base Reference Portfolio

For the simplest portfolio construction, we can just divide the equity equally, without leverage, between the six components. With monthly rebalancing, this gives us the following equity curve for the period 2000-2010:

| Performance Stats | |
| --- | --- |
| CAGR | 41.26% |
| Max DD | 26.03% |
| MAR | 1.59 |
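The equal-weight rebalancing and the three stats reported throughout this post (CAGR, Max DD, MAR = CAGR / Max DD) can be sketched as follows. The returns below are made-up placeholders; the real test used the six systems' monthly returns for 2000-2010:

```python
# Equal-weight, monthly-rebalanced portfolio and its performance stats.

def portfolio_stats(monthly_system_returns):
    """monthly_system_returns: list of months, each a list of per-system returns."""
    equity, peak, max_dd = 1.0, 1.0, 0.0
    n_months = len(monthly_system_returns)
    for month in monthly_system_returns:
        # Monthly rebalancing: equal weight across systems, no leverage.
        port_return = sum(month) / len(month)
        equity *= 1.0 + port_return
        peak = max(peak, equity)
        max_dd = max(max_dd, 1.0 - equity / peak)
    cagr = equity ** (12.0 / n_months) - 1.0       # annualized growth rate
    mar = cagr / max_dd if max_dd > 0 else float("inf")
    return cagr, max_dd, mar

# Two systems, three months of made-up returns:
months = [[0.04, 0.02], [-0.03, 0.01], [0.05, -0.02]]
cagr, max_dd, mar = portfolio_stats(months)
print(f"CAGR {cagr:.2%}, MaxDD {max_dd:.2%}, MAR {mar:.2f}")
```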

### The Hindsight Result

The hindsight approach is very similar to the last study: take all historical data (from 2000 to 2010), feed it to the optimizer and apply the optimal f values to this same data: this is the best portfolio construction that an investor could have used (when considering equity growth only). Based on the previous note, f/2 is used instead of the Optimal f, which gives us the following allocation/leverage factors:

• BBO-20: 0
• BBO-50: 0.022
• Donchian-20: 0.842
• Donchian-200: 0
• MA-50-200: 0.967
• TMA-20-50-200: 0

The numbers above are the “fraction” of total equity to be traded for each system. If trading these 6 systems with a 10M account size, the notional account size for the “Golden Cross” system (MA-50-200) would be 9.67M and so on.
Note that the optimized allocation excludes some of the systems: these are not required to reach the maximum account equity growth.
The total leverage on the portfolio is therefore 1.83: the 10M equity would be traded as 18.3M across the three systems with a non-zero allocation.
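The allocation arithmetic above, made explicit (the f/2 fractions are those quoted in the post; the 10M account size is the same illustrative figure):

```python
# Notional sizing and total leverage from the f/2 allocation fractions.
fractions = {"BBO-20": 0.0, "BBO-50": 0.022, "Donchian-20": 0.842,
             "Donchian-200": 0.0, "MA-50-200": 0.967, "TMA-20-50-200": 0.0}
account = 10_000_000

# Each system trades a notional of (fraction x total equity):
notionals = {name: f * account for name, f in fractions.items()}
# The portfolio leverage is simply the sum of the fractions:
total_leverage = sum(fractions.values())

print(notionals["MA-50-200"])    # 9.67M notional for the Golden Cross system
print(round(total_leverage, 3))  # ~1.83: 10M equity traded as ~18.3M
```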

There is a notable difference in CAGR despite the drawdown being nearly identical between the two allocations:

| Stats | Hindsight | Equal Leverage |
| --- | --- | --- |
| CAGR | 87.52% | 70.55% |
| Max DD | 45.95% | 44.23% |
| MAR | 1.9 | 1.6 |

Let’s now look at the actual Walk-Forward test of the LSPM allocation.

### Walk-Forward LSPM Results

As explained above, each year’s leverage is calculated by running the LSPM optimization on the previous 10 years. This is done year after year, going forward.

The results are actually surprisingly good: they even top the hindsight “f/2” results. When comparing to the “full f” hindsight results, the CAGR is only slightly lower (CAGR = 126% for “full f” hindsight results).

| Performance Stats | |
| --- | --- |
| CAGR | 106.56% |
| Max DD | 65.91% |
| MAR | 1.62 |

The allocation and leverage in the portfolio vary year after year. One could argue that this is an adaptive approach, which should perform better than a static approach and this single result seems to confirm that idea.
For reference, below are charted the different leverages in the portfolio and how they evolved over time:

A good share of the portfolio is always allocated to the Donchian-20 system, the rest being mostly divided between the TMA-20-50-200 at the start of the decade and the MA-50-200 at the end. Note also how the overall leverage is constantly dropping.

The equity curve comparison also suggests an early out-performance that does not carry forward. This is confirmed by looking at the returns year by year:

### A Note on Drawdowns

The drawdowns in most of these tests are very high, and no money manager would really consider trading at these levels. This is a classic objection to the Optimal f concept. The Leverage Space Model can cater for this by including constraints on the probability of drawdowns, to keep them at reasonable levels.

The reason this type of LSPM optimization (with drawdown constraints) was not used is simply the very high computing time it requires, but I would expect a similar relative comparison between the hindsight and walk-forward results – something to verify: material for a later post, when my machine has a spare dozen hours to run.

### More To Come

This is obviously a single test, which does not carry much significance; nevertheless, the results are quite exciting. But there are a few caveats. The test only uses equity-growth optimization, and we can only assume (or take a whole weekend of computing time to verify) that adding drawdown constraints to the optimizer would have given us a similar relative comparison (between the hindsight results and the walk-forward/adaptive results).

But this definitely warrants further investigation. One of the aspects that I like is how the process weeds out unnecessary systems. You could imagine running a stable of systems and letting the optimization pick the systems to trade for each coming year, and with which leverage.

Note also that there are other assumptions in the tests that make them less realistic (no account for slippage, funding costs, scalability, margin constraints, etc.) but these would apply to all the systems under comparison here.

On a final note, I have heard that a major index firm has decided to launch LSP indexes based on Vince’s implementation. These would be licensed to ETF providers, with the first LSP-style ETF probably being launched mid-2011. The concept seems to be catching on.

I’ll probably also share in a next post the code that was used to run the Walk-Forward test as it nicely automates the running of it. Stay tuned… (Update: the code can be found on this post: http://www.automated-trading-system.com/r-code-walk-forward-lspm/)


### 18 Comments so far ↓

• “Note also that there are other assumptions in the test that make it less realistic (no account for slippage, funding costs, scalability, margin constraints, etc.).”
As I was reading this post, I was wondering about the effects of slippage, commissions and scalability. What is the system trading and what is the average holding period?

• Rick

Hello,

I enjoyed the post, but keep in mind that at the 46% and 66% max DD you get for optimal equity growth, most traders and funds would stop their operation way before that. It appears that the base portfolio is much superior in the most important aspect of performance, which is the balance between returns and drawdowns. I may be wrong, but the optimal f method and the leverage space model concentrate on increasing returns with no regard to their volatility. That leads to a dangerous mode of operation.

• I agree with Rick’s comments. From what I have seen on Collective2.com, investors will quickly abandon a system with a large drawdown presumably never to return.

• Fred,

I do not have the average holding periods or round turns figures handy but you can check the following posts where I had done a study on the impact of slippage.

Obviously, it depends on the timeframe of the system. The systems here are a mix of different timeframes as described in the beginning of the post (you can see the rules on the State of Trend Following report).

But slippage/costs would apply to all of the tests featured above and as such I do not feel they play such an important role for the relative comparison between their results.

• Rick,

Thanks for the comment. I did update the post based on your remarks. Basically, you are raising a very valid point about drawdowns being “too high”, and this is one of the reasons why Vince added risk constraints (drawdowns) in the LSPM.
I did not run a drawdown-constrained optimization purely because of other constraints (mainly optimization running time). Ideally, I will run a drawdown-constrained test so that we can see whether the perceived improvement carries over in that test too.

The base portfolio is actually inferior from a MAR ratio perspective (which is one way of measuring return v. risk) and I have just added the stats to compare the hindsight portfolio to the equal-allocation portfolio at same leverage. Drawdowns are nearly identical, while the CAGR is 24% higher (87.52% vs. 70.55%).

I see your point and this post is in no way meant to be conclusive – which is hard to do when dealing with leverage, as every investor has different preferences.

• Rick

Fred,

I like your trend following system and your website. Good work! I will keep an eye on your system.

Lez,

The MAR performance of the LSPM is only marginally better as compared to the base model considering the extraordinary amount of work that went into the optimization. (1.59 vs. 1.62).

Only the hindsight results are significantly better as far as MAR (1.59 vs. 1.9).

Ratios are dangerous metrics. Note that 40%/20% is equal to 100%/50%. Yet, no sane investor would put any money in the latter.

Unless you or anyone else can provide convincing evidence that this LSPM can significantly increase performance without increasing risk, something I have yet to see, I am afraid I have to trash this method.

• Pretorian

Hello everybody,

I would like to give my humble opinion regarding the allocation for different strategies/systems, since it is something that I have been working on lately. Imagine you have two systems: a long-term Trend Following one and an options-selling one. Using Walk-Forward or any other form of dynamic approach would have given you a very big allocation to the options-selling system prior to 2007, so the worst performance in history for this kind of strategy would have caught me with the biggest allocation, whereas I would have allocated very little to the Trend Following system, which probably had one of the best years in history. It doesn’t matter whether I optimized with MAR or max f; the results would have given me the same type of allocation.
My solution? Design the systems with a similar mathematical expectation, allocate in equal weights and rebalance yearly. We never know what is going to happen!

• To be sure, I agree with the comments on the high drawdowns, and I too would not dream of designing a system with such levels.

However, I am still keen to test the LSPM approach as I feel it has some potential. In my experience, past a certain point, a CAGR increase comes at the cost of a decreasing MAR ratio, and the fact that both the base reference portfolio and the walk-forward one have very close MAR ratios, despite the walk-forward CAGR being significantly higher, is encouraging. That being said, I just tried to leverage down the walk-forward approach to match the base reference’s performance and the MAR did not improve (which I would have expected it to) – an optimization with drawdown constraints might have given a better result though, and I really need to test this. I might end up trashing LSPM as well – but not before giving it a thorough test.
@Rick – may I ask what method of allocation you have not trashed and are happy with?

@Pretorian: yes, this is one of the possible pitfalls of walk-forward approach: you can end up trying to chase your tail and always be out of phase with the current conditions, which is probably linked to the size of the optimisation/walk-forward windows. A more robust approach might be to have a longer look-back period for the optimization window, or indeed using the simpler option you are suggesting.

• Mike

Hi, am I missing something, or does the base reference end value vary between the initial study and the second two? It looks like it’s under 100 in the initial case, then over 100.

Thanks

Hey Jez, I will run these trading systems through my optimizer if you want. I just need the LSPM CAGR for each walk-forward period, and CSV files for the various systems.

The approach my software uses is to minimize either the max-drawdown or the Ulcer Index for the sample period. I came to a similar conclusion as Pretorian that optimizing to increase return is very shaky ground.

• @Mike – the second study uses a leveraged version of the base reference portfolio (same leverage as the hindsight version = 1.83 or 0.305 per system)

@RiskCog – thanks, this could be an interesting comparison. I’ll contact you by email. In the mean time I will also try to run some optimization adding the constraints on probability of drawdown.

• Ali

Thank you Jez; really great website indeed. When I was reading, I thought: what if a similar analysis were extended to each system separately, with its components? So now we would have the allocation/leverage for each system, and the allocation/leverage for each component of that specific system. I know that would mean daily/monthly rebalancing based on the optimal f and LSPM calculation. But… I would support the argument that the CAGR would be driven even higher.

Thank you again and I am keen to see the remaining posts of this series.

• Thanks Ali.
I am not convinced that this would add much to the process, as the optimization should already have this single-system optimization “embedded” in it. For example, suppose we had 2 systems and the single-system optimization told us to trade Sys1 at 2x leverage and Sys2 at 1.5x leverage. We would then feed these leveraged components to the multi-component optimizer, and this would give us additional leverage factors to apply to the leveraged systems (let’s pick 1.5x and 1.3x). This is equivalent to saying trade the original Sys1 and Sys2 at respectively 3x (2 x 1.5) and 1.95x (1.5 x 1.3) – which the optimization would evaluate when fed the original systems.

• Ali

Valid point. But let me clarify my point further. Let’s suppose (for the sake of argument) the fund trades only one system but different markets. Normally, the fund would allocate different percentages to each market based on several factors (correlation, equity size, etc.).

Now, can we use the LSPM model to reach an optimum allocation for each market for that one specific single system?

Agree – it could be useful for allocating either to different markets of a system, or even between different systems. Actually, Vince calls all components in the optimization “market-systems”.

• Raphael

Jez
“I will also try to run some optimization adding the constraints on probability of drawdown”
Did you run the tests with constraints on probability of drawdown?

“I’ll probably also share in a next post the code that was used to run the Walk-Forward test as it nicely automates the running of it. Stay tuned…”
Do you still plan to share the code to run this test?

Thanks

• Raphael,
I did try to run the code with added drawdown constraints at the time (shortly after writing that blog post) and had some problems with running it (i.e. it was never converging to a set of optimal f values). I have not had time to revisit this since then, unfortunately.
For the Walk-Forward code, I did post it a bit later on this post: http://www.automated-trading-system.com/r-code-walk-forward-lspm/
Cheers,
Jez

• Raphael

thanks Jez