Systematic Trading research and development, with a flavour of Trend Following

## Vince’s Leverage Space Model: better than MPT?

#### March 22nd, 2010 · 33 Comments · Backtest, Money Management

Ralph Vince's book Handbook of Portfolio Mathematics has been shamefully lying untouched on my desk for a few months… I started reading it but never finished it.

I recently found a 30-page paper introducing the ideas and principles of his Leverage Space Model. I thought reading it might be a good way to get back into Vince’s material.

What follows is a summary of the paper (PDF download link), which is maths-free (only concepts and principles are discussed). It is a good introduction to Ralph Vince's theories.

### Vince’s Optimal f

This is what Vince is famous for. It is basically a way to determine trading quantity (aka leverage) using the probability distributions of the trade outcomes.

If f represents the fraction of capital to wager (risk) on each bet (trade), the optimal value is the one which optimises the geometric growth of the bankroll (account balance).

In his previous books, Vince defined the formula for determining the optimal f. The first part of the paper discusses the optimal f concept and is a good introduction for the uninitiated (showing how over-betting on a game with positive expectancy can and will result in a loss).
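The optimal f idea is easy to sketch numerically. The snippet below is my own illustration, not Vince's formula: it simply scans f-values and picks the one maximising expected log growth for the paper's 2:1 coin-toss game. Growth peaks at f = 0.25 and, when over-betting, shrinks and eventually turns negative.

```python
import numpy as np

def geo_growth(f, outcomes, probs):
    """Expected log growth per bet when risking fraction f of the bankroll.
    Outcomes are returns per unit risked (+2.0 means winning twice the stake)."""
    return sum(p * np.log(1.0 + f * o) for o, p in zip(outcomes, probs))

# 2:1 coin toss: win two units or lose one unit, each with probability 0.5
outcomes, probs = [2.0, -1.0], [0.5, 0.5]
fs = np.linspace(0.0, 0.99, 1000)
growth = [geo_growth(f, outcomes, probs) for f in fs]
f_opt = fs[int(np.argmax(growth))]
print(round(f_opt, 2))   # 0.25: beyond this, growth falls and eventually goes negative
```

A grid scan like this is crude but makes the over-betting point concrete: at f = 0.9 the expected log growth of this positive-expectancy game is firmly negative.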

### Leverage Space Model promises

The optimal f section discusses a single-component approach, whereas the Leverage Space Model deals with multi-component portfolios.

It is presented as an improvement on Modern Portfolio Theory, which is briefly discussed. The claimed advantages are the following:

1. Risk is defined as drawdown (instead of variance in the MPT)
2. The fallacy and danger of correlation is eliminated
3. Valid for any distributional form – Fat-tails are addressed
4. The model is all about Leverage, which is not addressed in the MPT model.

### The Return Aspect

The model starts by building a multi-dimensional terrain, drawing the overall expected return, based on multiple combinations of components in the portfolio and their respective f-values.

In this example the model builds the terrain for two simultaneous coin tosses with a payoff of 2:1. The x and y axes represent the respective f-values (leverage) for each of the bets/trades, while the z-axis (vertical) represents the expected return.

The maximum portfolio growth is located at the peak of the terrain, resulting from the specific corresponding f-values combination. The terrain construction does not take into account correlation between the instruments – instead, the model uses the joint probability of two scenarios occurring simultaneously, dictated by the price data history.
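As an illustration, here is a rough sketch (my own, not Vince's code) of how such a terrain could be built for the two simultaneous 2:1 coin tosses, driven by joint scenario probabilities rather than a correlation coefficient:

```python
import numpy as np

# Joint scenarios for two simultaneous 2:1 coin tosses: (outcome1, outcome2, probability).
# With real instruments these joint probabilities would be read off the price history,
# capturing co-movement without any correlation coefficient.
scenarios = [( 2.0,  2.0, 0.25), ( 2.0, -1.0, 0.25),
             (-1.0,  2.0, 0.25), (-1.0, -1.0, 0.25)]

fs = np.linspace(0.0, 0.95, 96)            # f-values in steps of 0.01
terrain = np.full((len(fs), len(fs)), -np.inf)
for i, f1 in enumerate(fs):
    for j, f2 in enumerate(fs):
        hprs = [1.0 + f1 * o1 + f2 * o2 for o1, o2, _ in scenarios]
        if min(hprs) > 0:                  # otherwise ruin: log growth undefined
            terrain[i, j] = sum(p * np.log(h)
                                for (_, _, p), h in zip(scenarios, hprs))

i_max, j_max = np.unravel_index(np.argmax(terrain), terrain.shape)
print(round(fs[i_max], 2), round(fs[j_max], 2))   # peak near f1 = f2 = 0.23
```

Note that the peak for each bet sits below the single-game optimal f of 0.25: betting two games simultaneously changes the growth-optimal allocation, which is exactly the portfolio effect the terrain captures.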

### The Risk Aspect

So far, the model has only looked at returns. To introduce the risk component, you must determine your maximum allowed drawdown. This is a hard and fast rule: no combination should breach that limit.

Using a derivation of the risk of ruin, the model computes the risk of hitting the maximum drawdown for each set of f-values (over a specific time horizon, as, in the long run, the risk of drawdown tends to 100%). If the risk of drawdown is too high, that f-values combination is discarded.

In practice, the initial terrain is truncated by removing all points that breach the maximum drawdown threshold.

The terrain has been truncated: all areas deemed too risky, from a drawdown perspective, have been removed.

### The Algorithm

Vince implements a genetic algorithm to explore the terrain: first calculating the expected return for each set of f-values, and then running the maximum drawdown test on that same set. Once every combination has been run through, the terrain is built (including truncations). The aim is then to find the optimal set (highest return with lowest f-values).

The paper is rather short and does not deal with any of the maths behind the model. For that you'd have to get yourself a copy of the Handbook of Portfolio Mathematics, which introduces the model in more detail, or Vince's latest book, dedicated to the Leverage Space Model, which received a not-so-positive review by Max Dama.

The ideas in the paper are an interesting take on position sizing. Vince uses a simple objective/bliss function (CAGR with a binary risk/drawdown filter) to evaluate all possible scenarios of portfolio allocation/leverage. It might be interesting to use the concepts of the model with your own bliss function recipe.

One of Vince's claims, that MPT does not address leverage, sounds a bit simplistic: surely the percentage of cash as an asset in the portfolio is an implicit measure of leverage. On the other hand, the approach based on correlation/joint probability of scenarios sounds interesting and seems to go in the right direction. As Vince says:

Counting on correlation fails you when you need it the most.

Another point that seems to be missing is how the model handles the non-stationarity of markets. Vince mentions the chronomorphism of market price distributions (i.e. they change over time) and even draws a betting comparison with blackjack, in which the optimal f curve changes with each card dealt. However, there is no mention of how the model takes an adaptive approach to these chronomorphic distributions.

Vince's homepage contains a link to the Java software that implements his model (registration/email required to download) and another to a spreadsheet example. I have not had time to take a serious look at either. Please let me know your feedback if you do.

Joshua Ulrich – blogger and reader of this blog (hello there: finally got round to adding you to the blogroll!) – is collaborating with Ralph Vince to port the Leverage Space Model to the R platform. His FOSS Trading blog is definitely worth a read too.

### 33 Comments so far

• It would be interesting to see these approaches plotted in a walk-forward format over time. I.e., if one had used the approach from 1990 till now, 1991 till now, etc., to see how stable the approach was.

The second interesting thing to plot would be the walk-forward with the data reset on a yearly and two-yearly basis.

• I read that Leverage Space Model article a while ago, and found it interesting. I particularly like the presentation of the concept “over-betting on a game with positive expectancy can and will result in a loss”. This is the Jesse Livermore syndrome: “if I am really certain I have an edge then I should go all in with leverage.”

I am sure that LSM has the capability to outperform MVO (as traditionally conceived) because LSM uses geometric returns and draw-downs instead of arithmetic returns and variance.

The assertion that MVO doesn’t deal with leverage is of course false, as the other link points out. One major problem that LSM shares with MVO is the optimization goal of maximizing return for a fixed level of “risk”. I have found this approach to always flounder in walk-forward testing. This is because the optimizer will over-weight the portfolio into the latest red-hot investment because “risk-adjusted” returns look so great right before a crash ;->

It would be nice if the article answered two empirical questions: 1) does the method have predictive power for future returns? 2) what is its predictive power for future losses?

With the RiskCog portfolio optimizer I chose the opposite approach of minimizing risk for a specified level of return. (Some people will immediately presume that there is no difference, and that's true at a single point in time; however, walk-forward testing is about performance across time. You have to choose a single portfolio at each point in time, not an efficient frontier.)

A related concept is that it makes economic sense to dollar-cost average when saving for retirement, and then it makes sense to withdraw savings in terms of a fixed number of shares. As far as I know, this works because financial markets trend and over-revert, rather than distributing returns probabilistically. No matter what fat-tailed, skewed distribution you conceive, it is wrong if the process you are describing is not probabilistic!

@Nick: I don’t think that draw-down based optimizers would work reliably with only 1 or 2 years of data. I think they need more data to draw conclusions about worst case scenarios. A strength of MVO is that it can come up with “okay” portfolios using shorter periods of data.

• Hi RiskCog,
Thanks for the comments. It would be interesting to have a comparison between your model and the LSM (it's mentioned in your FAQ that you'd like to do such an article). They seem to share similarities (use of CAGR + drawdown as the measure of risk).
I am assuming your model is proprietary and you will not be revealing the precious details of how it works?

• Hi Jez, it would be interesting to do a comparison. What I need is a LSM optimizer or an article which specifies an example LSM optimal allocation for a given set of assets at a given point in time.

The RiskCog methodology is simple and public “RiskCog optimal portfolios have the lowest risk for a given CAGR.” Where risk is a time-domain measure such as worst year or max draw-down. CAGR can be either real or nominal and is measured over the full period, median sub-period or worst sub-period; depending on which type of return is most important to you.

• Ok – thanks
I guess Vince's material (the Java software or the spreadsheet on his homepage) could be a way of doing this, although having only taken a brief look at it, it does not look like a 5-minute job (what is?).
I’ll let you know if I get anywhere further with this.

• RiskCog,

Your objective function does not have to be the geometric mean in order to use LSM. While most of the discussion has focused on maximizing return subject to some “risk” constraint, you can also maximize the probability of profit or minimize the probability of drawdown subject to some constraint(s)… or define whatever objective function and constraint(s) you deem most appropriate.

We (Ralph, Soren, and I) are working to generalize the LSPM-R interface to ease the specification of custom objective functions and constraints. Any/all input is welcomed.

Best,
Josh

• Josh,

Having a “framework” which implements the LSPM model by allowing the definition of a custom objective function sounds great. Probably a difficult question, but what are your timelines for implementing the LSPM in R? (I briefly checked the R-forge page and could see that you are currently in “very alpha” phase). Anyway – another good reason for me to start looking at R… And good luck with the project!

• Jez,

The “very alpha” status is mostly because it’s still being tested and the user interface may change. All the code behind the scenes doing the heavy lifting is pretty stable and mature.

We’re going to discuss the general interface / framework at this year’s R/Finance conference in Chicago. Assuming we get a good plan in place, it should take a month or two to code. So, it could be “beta” by June / July.

• Hi Josh, are you aware of any online links that show the “optimal f” equations? Also, are there any articles or white papers that show an example portfolio of securities chosen using the LSPM approach?

• RiskCog,

The LSPM R package is open source, so you could glean the equations from it. I don’t know of any free online links showing the equations; nor do I know of any white papers, or articles that provide an example of using the LSPM to create a portfolio of securities.

• Looking forward to the Vince/Macbeth presentation in Chicago – Jez, you should come. April 16-17

• On the word ‘leverage’.

http://bit.ly/5foLR

Clearly, we need more precision. I am still contemplating the topic, but when Vince uses the term ‘leverage’ he is not referring to a margin account versus a cash account. So when he says that MPT doesn’t use leverage, he’s not saying what you think he’s saying.

• I actually was really tempted and even checked the flights… The conference agenda looks really interesting and it would have also been nice to meet some readers.
But I remembered I have commitments (big family reunion…). Will keep an eye for the next one.

• Anarchus

@Milktrader, Joshua Ulrich, et al: as noted, “when Vince uses the term ‘leverage’ he is not referring to a margin account versus a cash account. So when he says that MPT doesn’t use leverage, he’s not saying what you think he’s saying.” This is very true. Not to be critical, but I don’t think we need to be more precise; I think Vince needs to be more precise. One of the shortcomings of MPT is that it’s ALL ABOUT LEVERAGE – in fact, in the most extreme case of MPT, Modigliani & Miller’s capital structure irrelevance principle, firms in the U.S. should all be all debt all the time because after-tax debt is a cheaper source of capital than equity (which of course is totally insane, but that’s another story).

• Anarchus

@RiskCog, I liked this comment “[using] the optimization goal of maximizing return for a fixed level of ‘risk’. I have found this approach to always flounder in walk-forward testing. This is because the optimizer will over-weight the portfolio into the latest red-hot investment because ‘risk-adjusted’ returns look so great right before a crash”. Part of the problem, of course, is that the real world does trend but it also mean-reverts. The person who figures out the correct balance between those two challenges will have a very good model.

I don’t know a lot about walk-forward testing yet, but with regard to “walk forward testing is about performance across time” and creating difficulty in either maximizing return with a risk constraint, or in minimizing risk with a return constraint, I’d think that trying to minimize the forecasting error of the walk forward model in the out-of-sample time periods would be an interesting avenue to explore.

If you’re familiar with the work of Dr. Andrew Lo in “A Non-Random Walk Down Wall Street”, he proposes several alternatives to MVO including maximizing the R-squared of the forecast (in sample). As a former half-quant & half-fundamental professional investor, I eventually tried to maximize the consistency of alpha with respect to the investment benchmark and had reasonable success doing that. In traditional finance terms, that’s most like trying to maximize the information ratio . . . . . .

• Anarchus

One other quick thought if it’s not off topic:

the biggest problem with constructing models of any kind in finance is that the data isn’t stationary over time. Walk Forward historical testing over multiple periods is an adaptation to help with the problem, but might not be the best approach.

Alternatively, if you can come up with a fairly accurate definition of “Regimes” that’s based on observable facts, so that there’s little room for argument over which “Regime” that you’re in, I think you’d have a killer model. One fairly successful quant money manager I know used to have three observable regimes for his model: (a) Fed is tightening, (b) Fed is easing or (c) Fed is neutral. If you look at stock market returns over time, they’re much more positive when the yield curve has a positive slope and the Fed is easing and much more negative when the Fed is tight or tightening and the yield curve has a flattish or even negative slope.

Looking further at levels of the VIX, or at recent price volatility, or at credit spreads (high yield versus Treasuries) or liquidity spreads (T-bills versus CDs or euro-dollars), might also be fruitful in exploring definitions of Regimes.

• Anarchus

Last comment (promise!).

In traditional quant finance, early on there were the fixed-weight model builders and the variable-weight model builders. The fixed-weight modelers kind of considered themselves purists and the variable-weight modelers were, well, less-pure. Eventually a lot of fixed-weight modelers went to “regimes”, where you had one set of fixed-weights for one regime and another set of fixed-weights for another regime.

Myself, I’m trying to look into a modeling routine where individual factor weights vary across time as a function of an observable variable. If anyone has URLs for interesting trading or investment studies on this topic, I’d be interested.

• Hi Anarchus, decisionmoose.com uses a fed model as an input to the trading system. CXO advisory has lots of articles about indicators that could be used to setup a “regime” detector. http://www.cxoadvisory.com/blog/internal/blog-economic-indicators/Default.asp http://www.cxoadvisory.com/blog/internal/blog-fed-model/Default.asp http://www.cxoadvisory.com/blog/internal/blog-volatility-effects/Default.asp

I personally don’t do regimes, maybe I will some day, I just look at price and volume to decide what to invest in. I am getting more used to my models putting my account into the correct position before a move, then after the move I can read in the blogosphere why the political/fiscal/seasonal regime was right for this move. For example my currency model said “short Yen” a couple days ago and then we got a move in that direction today. The commentators are buzzing now – but not two days ago…

• Anarchus

RiskCog: Thanks very much. I have CXO Advisory on my browser list of favorites but haven’t plumbed the depths of their research archives. Interesting stuff they have there.

A lot of the “Fed Model” stuff involves looking at the nominal yield on the 10 year treasury versus the earnings yield on the S&P 500. That’s more of a stocks and bonds are substitute goods switching model, and in my experience it hasn’t worked very well at all. Also, I was at a large group dinner with Jeremy Siegel a few years ago, and he was pretty critical of the model for mixing apples and oranges – while the explicit treasury coupon is 100% nominal, the implicit earnings coupon on the S&P 500 is (very crudely) 50% nominal and 50% real.

The Fed models that work well in my experience are the ones that just either use the current direction of Fed policy in one of three states (easy, on hold or tightening) or else rely on the shape of the yield curve. And of course, the Fed tried to ease policy in 2008 and early 2009 without any positive effect on the stock market – though you can make a “hindsight is 20-20” case that the stock market eventually bottomed in March 2009 just as the Fed’s aggressive Quantitative Easing program was starting to kick in . . . .

• As most of us use the term:

Leverage = How much can I borrow?

As Vince uses the term:

Leverage = f = Fixed Fractional Value

Every time you trade you have an f value. It’s simply the cost of the trade divided by account value. But you don’t always borrow, ramp up and lever your equity to put on a trade (ie, if you trade a cash account only)

• Milktrader, thanks for addressing the weird assertion that MVO doesn’t do leverage. I still don’t understand what Vince means though. With any type of portfolio optimizer I can think of, you can add T-bills as one of the asset classes to be optimized. If the optimum portfolio has a positive allocation to T-bills then that means the portfolio is de-levered to less than 1.0 of your stake. If the T-bills receive a negative allocation then that means that the best portfolio involves borrowing and levering up to greater than 1.0.

• I think that was more or less the point that Max Dama was trying to make (in his review post) and that I was trying to reiterate in this post…

• That’s right Jez.

Actually I think it’s interesting what Vince is doing with the LSM in his most recent paper, available at: http://parametricplanet.com/rvince/IFTA2009.pdf

My main criticism of the LSM book was the St. Petersburg betting strategy he suggested. His most recent paper addresses my complaint, by showing how to bet following the St P. allocation /with/ a drawdown constraint. The paper is also interesting because it treats position sizing in a psychological context, rather than a purely mathematical one.

I think I’ll have a project where I test Ulrich’s LSPM R package soon, and I’m really looking forward to the results.

Regards,
Max

• Max,

That paper is essentially the last chapter in his book, The Leverage Space Trading Model. My take on his small Martingale model is that he is basically enabling scaredy cats. Of course, if I had $1B I probably would be more concerned with satisfying the irrational urges of my place along the Prospect Theory curve too.

What I believe he is getting at is that you can approach the n-dimensional portfolio space in any way you see fit. But he insists that you recognize that this space exists. With his small Martingale example, he is essentially providing the math for a unique approach to that space.

• That’s a good way of looking at it Milk.

• @Max – thanks for that link to the paper. I am not sure I could resolve myself to take that approach of consciously aiming for “sub-optimal” performance (as an investor or as a fund manager) – although I know we over-estimate our ability to withstand “bad performance” (i.e. large drawdowns, etc.) – I haven't read the Prospect Theory though…

@Milk good point about your objective being a function of your conditions – age could be a factor to consider also (i.e. maximise geometric growth when young and probability of profit when older)

• Guys, Just found this thread. A little clarification where I say LSP addresses leverage directly, whereas MPT does not.

I say this because LSP has variance embedded (negatively, properly) in its return aspect, rather than juxtaposed to it, as is the case with MPT.

If you recall the Pythagorean relationship between the three parameters of Geometric Mean, Arithmetic Mean, and Standard Deviation in Holding Period Returns, the effect of this (and that the dispersion parameter is used correctly in LSP) becomes evident.

-Ralph Vince

• Zach

• Thanks for pointing this out Zach – I have updated the post/link

• Raphael

Jez,

Ralph Vince has put his LSP model into practice.
He has created a series of funds. Dow Jones LSP Position Sizing Equal Sector U.S. Large-Cap 50 Index is one of them. Ralph Vince explains his Methodology : “The proprietary algorithm is a rules-based application of an investment strategy known as Leverage Space Portfolio, which was created by LSP Partners, LLC. The strategy aims to maximize the probability of positive performance, rather than seeking to maximize performance, by employing a risk-control process focused on draw-down management.”
Ralph then explains how the index is created.

Have you tried to replicate that methodology? What do you think of his application of the LSPM concept?
Ralph insists on reducing draw-down, but looking at the equity curve in his white paper, the system doesn't seem to have done well during the 2008 crisis. Compared to the S&P 500, its draw-down looks similar to me. A simple timing approach based on the MA200 would have fared better imo.

Raphael

• Raphael,
If you look around the blog, you should find some posts where I go more in detail and test the LSPM framework. I have not been successful in testing aspects of it like adding probability of positive performance (over profit optimization) or probability of MaxDD (yet) but I definitely think it’s a promising avenue to pursue – just been on the “back-burner” since then. Josh (Ulrich) has hinted at building a web-based implementation of the LSPM. I’m looking forward to checking that out – as well as the live performance of the ETFs based on Vince’s LSPM indices.
By the way, which white paper are you referring to (mind posting a link here?)
Thanks,
Jez

• Raphael

Jez,
The pdf document I was talking about is the 7th on the list.

I have indeed read your work on LSPM and follow Josh's blog as well. I find Ralph Vince's application of his LSPM very interesting (see the rules for allocation) and was wondering if we could reproduce this strategy in Trading Blox or R?

Raphael