
Low Winning Percentage = more Robust Systems?

December 9th, 2010 · 19 Comments · Money Management

[Image: tiger on a football field]

One of the characteristics that deter traders from using Trend Following is the typically low winning percentage (i.e. the proportion of trades that end as winners) of such systems. It goes against the natural instinct of “wanting to be right most of the time”, as trades end up in a loss more often than not.

Psychologically, it is harder to trade a system that produces more losing trades. Despite this, Trend Following is a profitable strategy. Could there be a sort of “psychological premium” received by traders willing to use low winning percentage systems?

While recently looking at Ralph Vince’s Leverage Space Model, I saw a potential reason why low winning percentage systems might be more robust.

Robust Systems are Volatile

One aspect of robust systems is the volatility of their results and equity curves. Like a low winning percentage, volatility is a system feature that traders/investors nearly always want to avoid – at least from a psychological point of view.

Here is a quote from David Druz – a recent addition to the Trend Following Wizards report – which explains the link between robustness and volatility:

The robustness of a trading system is proportional to its volatility. This is the no-free-lunch part. A robust system is one which works and is stable over many types of market conditions and over many timeframes. It works in German Bund futures and it works in Wheat. It works when tested over 1950-1960 or over 1990-2000. Robust systems tend to be designed around successful trading tactics, classical money management techniques, and universal principles of market behavior. These systems are not designed around specific types of markets or market action. And here is the amazing thing about robust systems: The more robust a system, the more volatile it tends to be! This is because robust systems are not optimized to particular markets or market conditions. The converse is also true. You can design systems with excellent returns and low volatility on historical testing, but which work only for given periods in given markets. These systems tend to be curve-fit or market-fit and are not robust. For a system to have the highest odds of profitability over time and markets, the inescapable tradeoff is volatility. Diversification can be used of course, but it will only dampen the volatility so much.

A typical illustration of a low-volatility, high-performance equity curve is the one below:

[Chart: LTCM equity curve]

LTCM: smooth equity curve… for a while. No robustness there.

The Impact of Money Management

Money Management is a pivotal part of a trading system. It can make or break any system, however good it is. Over-trading a really good system will still lose you money.

Indeed, Vince asserts, in the introduction to The Leverage Space Trading Model, that Money Management represents 100% of a trading system (leaving 0% for signal generation, etc.). This is probably hyperbole, but the impact of money management should not be underestimated.
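
To make the over-trading point concrete, here is a minimal sketch (the trade profile is my own illustrative assumption – 30% winners at +3R, 70% losers at -1R, a positive expectancy of +0.2R per trade – not anything from Vince’s book):

```python
import numpy as np

# Illustrative trade profile: 30% winners at +3R, 70% losers at -1R
trades = np.array([3.0] * 3 + [-1.0] * 7)

def growth_per_trade(f):
    """Geometric mean holding-period return when risking fraction f,
    with each trade scaled by the biggest loss (Vince-style HPRs)."""
    hprs = 1 + f * (trades / abs(trades.min()))
    return np.exp(np.log(hprs).mean())

for f in (0.05, 0.07, 0.20, 0.40):
    print(f"f = {f:.2f}: growth per trade = {growth_per_trade(f):.4f}")
# Growth peaks near f ~ 0.07; by f = 0.20 it is already below 1.0 -
# the same profitable system now loses money purely through over-trading.
```

Nothing about the entry/exit rules changes between those runs; only the position sizing does.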

By looking at the impact of the system’s winning rate on its money management component, we can see a potential reason why a system might be more robust when it produces more losers than winners.

Optimal f, boundaries and the Tiger Cage

Depending on preferences, there exists an optimal f which maximizes growth and which can take risk factors into account. The optimal approach is obviously to trade as close as possible to the optimal f. However, this is not simple: markets change, systems go through different phases of performance and, as a result, optimal f moves as well. It is not a fixed value.

From the start, we know that optimal f is bounded between 0 and 1, and that the further we are from it, the worse performance will be. Vince had an amusing analogy, comparing the optimal f to a tiger roaming on a football field. The tiger could be anywhere on the field; the hunting trader needs to find its location to place the tiger cage. Any error in locating the tiger results in sub-optimal performance: the further the tiger is from the cage, the worse the system performs.

However, there is a corollary to the optimal f calculations: optimal f is bounded above by the winning rate of the trading system. f can take values between 0 and 1, but if the system has a win rate of 30%, optimal f will always lie between 0 and 0.3.
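
This bound is easy to check numerically – a sketch using the same illustrative 30%-winner trade profile as above, grid-searching Vince’s Terminal Wealth Relative (TWR):

```python
import numpy as np

trades = np.array([3.0] * 3 + [-1.0] * 7)   # 30% win rate, 3:1 payoff
win_rate = (trades > 0).mean()

def twr(f):
    """Terminal Wealth Relative: product of holding-period returns,
    each trade normalised by the biggest loss (Vince's formulation)."""
    return np.prod(1 + f * (trades / abs(trades.min())))

fs = np.linspace(0.001, 0.999, 999)
f_opt = fs[np.argmax([twr(f) for f in fs])]
print(f"win rate = {win_rate:.2f}, optimal f = {f_opt:.3f}")
# Prints an optimal f around 0.067 - well inside [0, 0.30]. For
# two-outcome trades this is just Kelly: f* = p - q/b, which is
# always below the win probability p.
```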

Going back to the tiger-on-the-football-field analogy, knowing that the tiger never ventures beyond the 30-yard line greatly reduces the task of the “hunting” trader – and ensures we will never be too far off target. From a trading point of view, this means there is less room for error in locating the optimal leverage, and therefore less impact from sub-optimal leverage.

In terms of robustness, this means that as markets change, so will the optimal f. A trading system with a low winning percentage reduces the possible range of the optimal f (e.g. [0, 0.3] instead of [0, 1]), and therefore the error between the f value actually used by a trader and the optimal f. Reducing that error should also reduce its negative impact on system performance and make the system less sensitive to underlying market changes. In a word: more robust.

This is just a thought; I do not have any hard evidence or theory for it. As a young kid I always dreamt of having a tiger and riding it to school. I’d now like to think that it was an early subconscious call to be a good trader by focusing on “catching the tiger” of good position sizing/money management…



19 Comments so far

  • Joshua Ulrich

    Jez,

    Here’s an interesting thought: maybe we can take advantage of the fact that each market system’s optimal f value is between 0 and Win% when computing optimal f. Restricting the search space should help the optimization converge faster.
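
    A minimal sketch of that idea (single-system case only, with an illustrative trade list; in the full Leverage Space Model the search is over a joint f vector, so trimming each dimension to [0, Win%] should help even more):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    trades = np.array([3.0] * 3 + [-1.0] * 7)   # illustrative trade list
    win_rate = (trades > 0).mean()              # 0.30

    def neg_log_twr(f):
        # maximise log TWR = sum of log holding-period returns
        return -np.sum(np.log(1 + f * (trades / abs(trades.min()))))

    # Search [0, win rate] instead of [0, 1]
    res = minimize_scalar(neg_log_twr, bounds=(1e-9, win_rate),
                          method="bounded")
    print(f"optimal f = {res.x:.3f}")           # ~0.067, inside the bound
    ```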

  • Jez Liberty

    You know what Josh? That’s exactly what I was thinking as I was writing this post this afternoon…

    Hopefully, that could speed the optimization up.

  • Michael Harris

    Hi Jez,

    Another thought provoking post by you.

    As far as the win rate goes, it is not constant, but for all practical purposes you can assume it will be below 80% for most systems and below 40% for most trend-following systems.

    The problem, Jez, is not optimal f or whatever optimization is used, but the fact that for any win rate there is a finite probability of ruin. The real danger in using optimal f is that finite probability of ruin.
    Now, the LTCM story was that they changed their leverage, and although correct about market direction, they could not sustain a small adverse move and were ruined. Thus, the graph you show must be adjusted for changing leverage in order to reflect the real story. It is like trading forex at 500:1 leverage while fully on margin: a trivial 0.2% move against the position will completely ruin the account.

    That is the reason – the finite probability of ruin – that forces most traders to use a small fixed percent risk, knowing that they are way sub-optimal. The trade-off to consider is optimal growth versus risk of ruin. Only the Holy Grail has zero probability of ruin.

  • Trey

    Jez,

    Michael is spot on: you have to take leverage into account, and LTCM was running something in the neighborhood of 30:1 when they blew up, in addition to being crowded out of the trade. This is really important – the idea that ‘alpha’ strategies have limited scalability. And you won’t find any ‘alpha’ strategies in a book or seminar.

    Don’t waste your time getting caught up in “% profitable”. Expectancy is far more important than % profitable, and I would even argue that % profitable is, per se, meaningless. No money management scheme can make a system with negative expectancy profitable! In other words, money management IS NOT 100% of a trading system. With all due respect to Vince, optimal f is a surefire way to ruin.

    I’d love to see the evidence that “robustness”, which I’ve yet to see defined mathematically, is directly proportional to volatility. I interpret this as: extremely high performance is due to luck, not skill – and please ignore all the traders who went broke. Only the naive conclude that high returns = skill. You have to look at the entire distribution!

  • Michael Harris

    Hi there,

    I agree with everything Trey wrote.

    I think that %Kelly and other optimal methods are unrealistic, especially in the case of trend-following. Those who trade real money know how easy it is to get 4 to 5 consecutive losers during periods of uncertainty or turmoil. Imagine now that you are risking 20% of your capital on a single trade for the sake of optimal equity growth. Ruin is highly probable even though the account never reaches zero value, because most funds will be liquidated after a 40% drawdown or even less.

    Using optimal methods just for allocation is a different story but proper use depends on objectives.

    In trend-following, for example, which was my first and very successful style of trading, the objective should be to open positions with the smallest possible amount of money and then accumulate slowly as price goes in your direction. Trends are preceded and followed by choppy periods where drawdown increases. There is no way anyone in the right state of mind would allocate significant amounts of money to a trading system that is experiencing a choppy market period for the sake of future optimal growth. Before the future comes, the present will kill your system. This is what my experience of trading the markets for the last 25 years has shown. I am sure Jez knows of these issues and will do some more of his impressive math studies to investigate them.
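
    For what it’s worth, the streak arithmetic above is easy to check – a quick sketch with Michael’s illustrative numbers (20% of equity risked per trade, a 40% liquidation threshold):

    ```python
    # Drawdown produced by a losing streak at 20% risk per trade
    risk = 0.20
    equity = 1.0
    for n in range(1, 6):
        equity *= 1 - risk
        print(f"{n} consecutive losers -> drawdown {1 - equity:.0%}")
    # Three losers already mean a ~49% drawdown, past a 40% liquidation
    # threshold. Even at a 60% win rate, any given run of 3 trades is all
    # losers with probability 0.4**3 = 6.4%, so over hundreds of trades
    # such a streak is close to certain.
    ```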

  • Jez Liberty

    The main point I was trying to “investigate” in this post was that low winning percentage systems potentially leave less room for error/divergence between the trader’s actual f and the theoretical optimal f. Less “leverage error” would mean less chance of over-trading and therefore more “robustness” (which, to me, means how the system handles changing market conditions).

    Note that optimal f is not necessarily the leverage level that maximizes equity growth (at the cost of a higher risk of ruin). It can also take into account the risk of drawdown/ruin or other risk factors, to discard “too risky” leverage amounts (this is one of the main additions in Vince’s new Leverage Space Model over his early work on optimal f). You can check my recent post, where I explained this concept. I do not think any serious trader would use the “original” optimal f for position sizing, but they would still need to be aware of the leverage curve and where they sit compared to the optimal f.

    I do think that LTCM is a good example of bad money management – exactly for the reason that they “might have been right” but overtraded. If I remember correctly, they did not suddenly increase their position size/leverage but rather kept adding to positions as the trades went against them (agreed – there was also the problem of scalability and liquidity). If they’d managed their risk, taken their losses early and moved on to new trades, they might still be alive now (of course with a less “attractive” equity curve than in their first few years). Anyway, I just like to poke fun at them, as they were a big failure despite all their Nobel credentials and EMH academic background.

  • George

    Directional accuracy is not as important as it is touted to be. I ran a simple test with Varadi’s DV2 on SPY, a daily mean-reversion (MR) method. It was very successful over the last 5 years (as you probably know), with a performance of about 30% CAGR. I expected high directional accuracy (more than 55-60%). However, its directional accuracy is ‘only’ 51.88% over the last 5 years.
    The comments here mentioned that a good TF system’s accuracy would be below 40%. I thought I’d tell you that a good MR system’s (like DV2’s) accuracy is not high either. To me, directional accuracy is not really important.
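
    For reference, a sketch of DV2 as commonly described in public write-ups (my reconstruction, not necessarily Varadi’s exact code): the 2-day average of the close relative to the day’s midpoint, percent-ranked over a trailing window.

    ```python
    import pandas as pd

    def dv2(close, high, low, rank_window=252):
        """DV2 as commonly described: close vs. the day's midpoint,
        smoothed over 2 days, percent-ranked over a trailing window."""
        raw = close / ((high + low) / 2) - 1
        smoothed = raw.rolling(2).mean()
        return smoothed.rolling(rank_window).apply(
            lambda w: (w[-1] > w[:-1]).mean(), raw=True)

    # Directional accuracy of a crude "long below 0.5, short above" rule:
    # sig = np.where(dv2(c, h, l) < 0.5, 1, -1)
    # accuracy = (sig == np.sign(c.shift(-1) - c)).mean()
    ```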

  • Michael Harris

    Jez,

    You wrote:

    “…they did not suddenly increase their position size/leverage but rather kept adding on to positions as the trades went against them”

    Well, I don’t think anyone knows exactly what they did, but the fact is they were overleveraged: at some point – I remember reading at the time – they had borrowed funds in excess of 10x their capital, and they were caught short bonds when there was a flight to quality due to the Russian debt crisis.

    IMO a low winning percentage in a trend-following system may eventually lead to ruin. This is because the probability of getting a sequence of trades – not necessarily all losing – that causes a large drawdown increases non-linearly as the win rate decreases. The possibility of no trend during an extended period of time, or of an early exit, makes things much worse. You can see this from the simple equation I have derived in this paper:

    http://www.priceactionlab.com/Literature/profitability.pdf

    Implications for trend-following systems are discussed on page 5. It is clear that a given winning rate guarantees profitability only if future trends have at least the same magnitude as past trends. Many overlook this simple realization and face ruin.
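
    Michael’s non-linearity point can at least be eyeballed with a crude Monte Carlo. This is not the derivation in his paper – just a sketch holding per-trade expectancy fixed at +0.2R and risking 2% of equity per trade:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def p_drawdown(win_rate, payoff, risk=0.02, n_trades=200,
                   dd_limit=0.40, n_sims=5000):
        """Monte Carlo estimate of the chance of a 40% peak-to-trough
        drawdown over n_trades, risking `risk` of equity per trade."""
        hits = 0
        for _ in range(n_sims):
            r = np.where(rng.random(n_trades) < win_rate, payoff, -1.0)
            equity = np.cumprod(1 + risk * r)
            peak = np.maximum.accumulate(equity)
            hits += (equity / peak < 1 - dd_limit).any()
        return hits / n_sims

    # Same +0.2R expectancy per trade, very different drawdown odds:
    print(p_drawdown(0.60, 1.0))   # 60% winners, 1:1 payoff
    print(p_drawdown(0.30, 3.0))   # 30% winners, 3:1 payoff
    ```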

  • Par

    As for LTCM, there were several factors that led to the demise. One problem was that they decided to return money to investors because opportunities were fewer, which increased their effective leverage.
    But a 30:1 turned into a 100:1 not from adding to positions but because of adverse market moves. Two factors then led them not to decrease positions: liquidity (they weren’t just trading liquid futures but more exotic convergence trades), and the fact that they knew/thought the trades would converge in due time. The problem was, as Keynes said, the market can stay irrational for much longer than you can stay solvent. Another problem was that the banks, Goldman etc., knew their positions after getting to take a peek at them with the help of the Fed.

  • Par

    George, the problem with DV2 is that it’s not robust across other markets: it works well for the S&P but shows the opposite performance on other markets.

  • Pumpernickel

    I think you may get some ideas for very interesting further research, if you insert the phrase “I assert, without offering a shred of proof, that …” before some of Druz’s bold declarations. e.g.,

    I assert, without offering a shred of proof, that the robustness of a trading system is proportional to its volatility.

    I assert, without offering a shred of proof, that for a system to have the highest odds of profitability over time and markets, the inescapable tradeoff is volatility.

    In fact it may be quite enlightening to ruminate about HOW one might either prove these assertions, or at least amass a weighty body of confirmatory evidence. Your introspections may force you to decide that the assertions are ultimately not provable, and must simply be accepted as articles of faith, rather than rational conclusions based upon analysis of experimental measurements.

    Or, as I postulate, ruminating upon the how-to-prove-it question might well give birth to many trenchant observations and numerous insightful experiments, whose value greatly exceeds the mere dis- / confirmation of the Druz hypothesis.

    A small and insignificant side-branch of the ruminations might be: do Druz’s bold claims include and subsume the popular (non-Druz) assertion that “all market prices are fractal and self-similar on all timescales, therefore a trading system should work equally well in all timeframes” ?

  • RiskCog

    Perhaps it would be easier to prove a claim like “high volatility is associated with robustness” by describing the type of system with some specificity. For example, if we assume we are trading a trend-following strategy with 2 static parameters, then we could assert that this system does not have enough degrees of freedom to closely follow a market for any great length of time. Therefore, if close market-following (in terms of predictive power) is observed, then based on these assumptions we have to conclude that in other market environments there will be little or no market-following power.

    The converse doesn’t necessarily hold: a static system that doesn’t predict the market well will not necessarily start working well later.

    If we want to consider complex trading systems that change behavior dynamically based on how indicators are performing across multiple markets, then the assumption of fewer degrees of freedom than the underlying may no longer hold. That is, a complicated enough system could potentially over-prescribe the behavior of the market. An example of an over-prescribed system might be one with many more non-linear parameters than there are market participants, or shares traded in the market.

    “Robustness” is well defined in the field of Robust Control (though that doesn’t necessarily mean the definition is useful to traders). Wikipedia: “Robust control methods are designed to function properly so long as uncertain parameters or disturbances are within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors.” That is to say, all “allowed” inputs map to outputs that do not lead to wild oscillation or other loss of control.

  • Jing

    Hi RiskCog,
    If you go to Wikipedia and search for “Robustness (economics)”, you will find this definition:
    In economics, robustness defines the ability of a financial trading system to remain effective under different markets and different market conditions, or the ability of an economic model to remain valid under different assumptions, parameters and initial conditions.
    I think this definition is more appropriate for a trading system’s robustness.

  • BeHappy

    Nice post, Jez!

    As you and Joshua pointed out, this could possibly be used to speed up the optimization.
    I haven’t studied the source but will have a look.

    Looking forward to a follow up on this one.

    (I don’t really see why there are so many pointed remarks from the majority of people commenting.
    Maybe they read your post the wrong way?
    And if I misunderstood the mood of their comments, pardon me!)

    All the best

  • K

    Haven’t been to your page in a while and just got caught up. Nice job as always.

    @Trey – most professionals I know have robustness well defined mathematically. Actually, it can only be observed after the fact. You can have formulas to support your “hope” that your system will perform as expected, and then you can have similar formulas to tell you whether your system actually did perform as expected.

    Robustness is also like performance: defining which CTA is the best is not that straightforward, and defining which system is the most robust brings similar challenges.

    @Jez – I would look at the indicator/money management relationship in a more holistic way; I am not sure that one is more important than the other. I am also not sure about the volatility-robustness relationship, even though I have observed the phenomenon. Say you use an MA cross and you optimize for Sharpe. The highest Sharpe (lowest vol in terms of returns) won’t be robust by definition, since those are the optimized outliers in the test and will probably revert to mediocrity in real trading. Another issue is: what if the highest-Sharpe variant is not robust but still performs better, after it breaks down, than the most robust one?
    Also, my question is: will the lowest Sharpe ratios be robust (i.e. perform as badly as they did in the test)? Never felt like trying with real money : )

    Not sure these questions need answering or are worthwhile, but the philosopher in me finds them fascinating : )

  • Jing

    @K – about the question of whether the lowest Sharpe ratios would be robust: I think one way is to find a range of parameters with good Sharpe ratios. Suppose we optimize a simple channel breakout system and find that 27 days (just suppose) produces the best Sharpe ratio, but changing it to 25 or 30 days gives results that are not as good; we can then say that 27 days is not robust enough. On the other hand, if we find a range, such as 40-50 days, where every number produces a good result, we can just choose the median (45 days) for our use. We can assume the median of a profitable range is more robust. Much optimization software, such as Trading Blox, can show “multi-parameter results”, graphing the result for each set of parameters; we can then easily identify the profitable parameter range and choose its median to get a robust parameter. Hope that helps.
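
    A sketch of that plateau idea; the backtest function here is a made-up stand-in (a toy Sharpe surface peaking at 45 days), since the real one depends on your own system and data:

    ```python
    import numpy as np

    def sharpe_for_lookback(days):
        """Hypothetical stand-in for a real channel-breakout backtest."""
        return 1.0 - abs(days - 45) / 30.0    # toy plateau around 45 days

    lookbacks = np.arange(20, 81)
    scores = np.array([sharpe_for_lookback(d) for d in lookbacks])

    good = lookbacks[scores > 0.7]            # the "profitable range"
    choice = int(np.median(good))             # median of the plateau
    print(f"profitable range: {good.min()}-{good.max()} days -> trade {choice}")
    ```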

  • Adrien

    Hi,

    I am an individual trader who does not have a solid background in statistics or Mathematics. However, I have found this blog very inspiring, especially when I am trying to build automated systems.

    I have long been annoyed by the problem of optimization; this post seems to argue against it. I have read the other posts on the blog too, and it seems that what the writer suggests is diversification rather than optimization. Can any professionals here give me some advice?

    What are your opinions?

    Adrien

  • Martijn

    Hi Jez,

    I have a question on how to choose a portfolio when designing a robust trading system.

    I’m working on a new mechanical trading system. I think it’s a very robust system: I’ve tested it on 30 different markets (spread among forex, commodities, indexes and Bunds). Indeed, the equity curve is volatile and the win ratio is quite low. Here are the stats:

    Win ratio: 32%
    Average winner: +3R
    Average loser: -1R
    Time in the market: 19.95%

    Now, most of the tested markets seem to be profitable, but there are some markets which give a negative result. (BUT: trading all 30 markets together, the result is positive.) Is it wise to skip these unprofitable markets? Or am I curve-fitting when I remove those losing markets from my portfolio?

    Hope you can give me some advice.

    Kind regards,

    Martijn

  • Jez Liberty

    Martijn,
    Unfortunately, I do not think there is an absolute answer to this. Markets that have not “worked” for a while might suddenly take off. How would you like to handle that potential case: make sure you never miss a move (include the markets), or cut these long-losing markets without regretting the potential large moves they might bring to your portfolio…?
    sluggo on the TB forum often says: “when in doubt, do half” (I find this works in many areas of life), so you could decide to trade these markets at half the position size and see how that affects the results?
