Au.Tra.Sy blog - Automated trading System
Vince's Optimal f and the Leverage Space Model – Take 1

Snow - can speed things up...

Following the Risk-Opportunity Analysis conference I attended earlier this month, I decided to test the model and the software used to implement it (Vince’s java app and Joshua Ulrich’s R implementation).

Most of the mathematical formulas supporting the model are in the book Leverage Space Trading Model. I will not paraphrase the book and reproduce all the formulas here – but I will refer to some of them. Getting the book is probably a good idea for a better understanding of the concepts.

Test Case

The trading application of the Leverage Space Model is presented as a generalisation of the Kelly formula, which is well illustrated by the coin-toss betting example (as per Vince’s paper).

In a practical “trading” example, I have decided to look at the four Trend Following Wizards with the longest track records (that I have): Campbell, Dunn, John Henry and Millburn. All track records go back to 1985 (up to end 2009). The question this test tries to answer is this:

As an investor from 1985, what would have been the best allocation between the four managers?

Note: Here “best” simply means the highest CAGR (we’ll see later that “best” can be defined in different ways based on your preferences / utility function).

I will describe the concepts using the simple two-coin toss example and draw a parallel with the “real-world” application on the four Trend Following Wizards.

Optimal f

For a single stream of returns or betting outcomes/probabilities, there is a specific level of leverage, or fraction of capital, to risk on each event, which maximizes the geometric growth of the equity. This is what Optimal f relates to.

In the example of the coin-toss with the following parameters:

  • Risk 1 unit for each bet
  • Tails: lose 1
  • Heads: win 2

The fraction of capital staked on each bet will alter the expected growth rate as per this curve:

Optimal f

This has been covered extensively elsewhere (as a simple application of the Kelly formula) so I will not repeat the details here. The main point is that .25 is the Optimal f (meaning, in that case, that staking 25% of the largest loss on each bet will maximize the growth of the trading stake over time – any other value would be sub-optimal). Note that here the largest loss is equal to 100% of the bet size and therefore the Optimal f and the fraction of capital to stake are equal. This is a special case and we’ll see further down that these are usually different values, especially in trading.
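To make this concrete, here is a minimal R sketch (my own illustration, not code from Vince or the LSPM package) that evaluates the geometric growth per bet for a range of f values and confirms the peak at f = 0.25:

```r
# Coin toss: win 2 or lose 1 with equal probability; the largest loss is 1 unit,
# so the holding period return for a fraction f is (1 + 2f) on heads and (1 - f) on tails
f <- seq(0.01, 0.99, by = 0.01)
ghpr <- (1 + 2 * f)^0.5 * (1 - f)^0.5   # geometric mean return per bet
f[which.max(ghpr)]                      # 0.25 - the Optimal f
```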

Adapting this example to a single Trend Following Wizard track record, we can establish a posteriori what the optimal f would have been, in order to maximize the growth of the equity curve.

Here is the f curve for Dunn’s track record:
Dunn Optimal f

The shape of the curve is fairly similar; it peaks at around f = 0.5.
Note that here, the optimal f does not represent the actual leverage or fraction of capital to apply. This is because of the nomenclature in the formulas used by Vince, where each “leveraged” return (or HPR: Holding Period Return) is expressed as:

HPR = 1 + f × (period return / |biggest loss|)

In effect, the leverage applied to each periodic (monthly) return is:

leverage = f / |biggest loss|

In the case of Dunn, the biggest monthly loss is 30.68%. An f of 1 would equate to leveraging each period return by a factor of 1/0.3068 = 3.26, which is the maximum leverage that can be achieved before hitting a zero final equity (the HPR would be equal to 0 as soon as the biggest loss occurs). The actual leverage amount is independent of the biggest loss, but expressing it that way bounds the value for f between 0 and 1. This is really just a notation.

The optimal leverage for the Dunn track record can be derived from Optimal f = 0.5: Optimal leverage = 0.5/0.3068 = 1.63. This effectively means that an investor would have achieved the highest possible final equity investing in Dunn by resetting the notional account size to 163% of the actual account size, every month (this is theoretical as it ignores the (im)practicality of this and impact of fees, etc.).

Any other value, higher or lower, would have resulted in a lower equity curve. At 100% (no leverage), the monthly geometric mean return is 1.11%; at 163% leverage, the mean return becomes 1.29% (the maximum value possible). Of course the drawdown would also increase (around 60% MaxDD at 100% leverage vs. around 83% MaxDD at 163% leverage).
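As a quick illustration (a sketch only – `dunn_returns` is a hypothetical vector holding Dunn’s monthly returns as fractions, e.g. 0.05 for +5%), the geometric mean monthly return at a given leverage can be checked like this:

```r
# Geometric mean monthly return of a notionally-funded account at a fixed leverage factor
geo_mean <- function(returns, leverage) {
  prod(1 + leverage * returns)^(1 / length(returns)) - 1
}
# geo_mean(dunn_returns, 1.00)   # ~1.11% per month (no leverage)
# geo_mean(dunn_returns, 1.63)   # ~1.29% per month (optimal leverage)
```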

Multi-Component Scenarios with Coin Tosses

Following on with our coin-toss example, let’s now consider the case of two simultaneous coin-tosses.

The method requires a discrete set of outcomes and associated probabilities. In the case of the simple two-coin toss example, these are easy to identify:

  • 2 Tails: lose 2, 25% probability
  • 1 Tail and 1 Head: gain 1, 50% probability
  • 2 Heads: gain 4, 25% probability

Using this set of outcomes with the same concept as above for f optimization, this now gives us a 3-dimensional curve displaying each possible f-combination (each simultaneous coin toss has its own f). Each combination generates its corresponding growth rate (which is related to the Terminal Wealth Relative, TWR). The f-combination that generates the highest TWR is the “optimal” solution.

leverage terrain

In the case of the two-coin toss, the optimal f-combination is (0.23, 0.23) – meaning that staking 23% of the capital on each simultaneous coin toss (46% in total for each period) would result in the highest growth rate (over time). This is simply where the curve peaks in the chart above.
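For the curious, this terrain can be reproduced with a few lines of R (my own sketch, not taken from Vince’s or Josh’s code):

```r
# Joint outcomes of the two simultaneous coin tosses: (outcome coin 1, outcome coin 2, probability).
# The largest loss of each coin is 1 unit, so the HPR is 1 + f1*o1 + f2*o2
scenarios <- rbind(c(-1, -1, 0.25), c(-1, 2, 0.25), c(2, -1, 0.25), c(2, 2, 0.25))
ghpr <- function(f1, f2) prod((1 + f1 * scenarios[, 1] + f2 * scenarios[, 2])^scenarios[, 3])

f <- seq(0.01, 0.49, by = 0.01)
terrain <- outer(f, f, Vectorize(ghpr))            # the "leverage terrain"
f[which(terrain == max(terrain), arr.ind = TRUE)]  # c(0.23, 0.23)
```

(`persp(f, f, terrain)` plots the surface shown above.)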

Adding a third simultaneous coin toss would simply generate a 4-dimensional curve (with each point representing the growth rate output of the f-triplet) and so on: the curve is always [N+1]-dimensional, where N is the number of components.

Multi-Component Scenarios with Trading Data

Let’s now look at our test case with the four Trend Following Wizards.

Similarly, we need a discrete set of outcomes and associated probabilities as our input. For this, we need to bin the data distribution and create the Joint-Probability Table (which holds each possible outcome combination and its associated probability – similar to the 3 possible outcomes identified above for the coin-toss example). This is effectively how Vince does away with the correlation inputs used in other models such as Mean-Variance Optimization.

The pseudo-code to build the Joint-Probability Table (JPT) is as follows:

  • Bin each component’s stream of returns (ie. each Wizard’s set of returns).
  • Calculate a single outcome for each bin (for example the mean return of all instances falling in that bin).
  • Loop through each period and:
  • For each component, determine which bin the period’s return falls into, and assign that bin’s outcome to the component for that period.
  • Record the combination of all bin outcomes (ie. for all components) for that period and assign it the probability: 1 / number of periods.
  • If different periods have the same combination, group them together by summing the individual probabilities – as in the two-coin toss example, where the Head-Tail and Tail-Head combinations are grouped into one at 50% probability.
  • The JPT is the full list of outcome combinations and their associated probabilities.

This is a bit tricky to explain in a concise manner, and it is developed in further detail in Vince’s book. However, the files attached at the end of this post in the “technical” appendix should help you retrace this logic.
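As a rough illustration of the logic (my own sketch, not the java app’s exact algorithm; it assumes `returns` is a matrix of synchronised periodic returns with one column per component):

```r
build_jpt <- function(returns, n_bins = 10) {
  # 1. Bin each component's returns; each bin's outcome is the mean return of that bin
  binned <- apply(returns, 2, function(x) {
    bins <- cut(x, breaks = n_bins, labels = FALSE)
    ave(x, bins)
  })
  # 2. Each period is one joint outcome with probability 1 / number of periods;
  #    identical combinations are grouped by summing their probabilities
  key      <- apply(round(binned, 6), 1, paste, collapse = "|")
  grouped  <- rowsum(rep(1 / nrow(returns), nrow(returns)), key)
  outcomes <- binned[match(rownames(grouped), key), , drop = FALSE]
  list(outcomes = outcomes, probs = as.vector(grouped))
}
```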

Once the JPT is built, it can be used as an input for the f optimization.

Leverage Space Trading Model: Optimizing f with R

The java app developed by Vince creates the Joint-Probability Table from the imported equity curve of each component.

The “meat” of the leverage space model code is contained in the R implementation by Joshua Ulrich (from FOSS Trading). This is where the optimization is actually run (Vince’s java app also implements the optimization but the R implementation is much faster).

Note that Josh has a blog post on how to create the JPT, so you might be able to only use R, should you want to experiment with the Leverage Space Model (I am not sure the java app is freely available).

The R LSPM package needs the JPT as an input, as well as the optimization parameters. Note that this is my first foray into R – so you definitely do not need to be an expert at it to run this sort of test. The fact that the java app generates the R commands directly was helpful, but you can probably “learn by example” by checking the R session file at the end of the post (there is more documentation available on Josh’s blog and the LSPM project page anyway).

Using the JPT as an input, the LSPM package runs the optimization, which estimates the peak of the 5-dimensional curve. A genetic algorithm is used for the optimization; after a specified number of iterations, the optimal set of f-values is output by the program.
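For reference, the core R calls look roughly like this (a minimal sketch – the exact arguments I used are in the R session file attached in the appendix):

```r
library(LSPM)
# outcomes: matrix of joint outcomes (one column per component); probs: their probabilities
port <- lsp(outcomes, probs)
opt  <- optimalf(port, control = list(NP = 40, itermax = 100))  # evolutionary search settings
opt$f   # the optimal f vector
opt$G   # the corresponding geometric mean HPR
```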

For the four wizards, the respective optimal f values are as follows:

  • Campbell: 0.050767
  • Dunn: 0.000000
  • JWH: 0.322954
  • Millburn: 0.375109

Again, these are f values, which only relate to the leverage to be applied, via each component’s biggest loss:

Leverage values:

  • Campbell: 0.050767 / 16.7% = 0.304
  • Dunn: 0.0 / 30.68% = 0
  • JWH: 0.322954 / 27.32% = 1.182
  • Millburn: 0.375109 / 14.12% = 2.657
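In R terms, the conversion is just each f divided by the component’s biggest monthly loss:

```r
# leverage = f / |biggest monthly loss| for each component
f_opt    <- c(Campbell = 0.050767, Dunn = 0.000000, JWH = 0.322954, Millburn = 0.375109)
max_loss <- c(Campbell = 0.1670,   Dunn = 0.3068,   JWH = 0.2732,   Millburn = 0.1412)
round(f_opt / max_loss, 3)   # 0.304, 0.000, 1.182, 2.657
```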

One interesting thing to note is that some f-values can be assigned a value of 0, which basically means that the component does not add value to the portfolio’s growth rate: the maximum is attained by excluding it.

An investor would have maximized the geometric growth of their capital by resetting every month the notional account size allocated to each manager according to the three leverage factors identified above. For example, starting with $100 million total capital, the first month would see $30.4M allocated to Campbell, $118.2M to JWH and $265.7M to Millburn. After the first month, the equity would have increased by 22.9% to $122.9M, which would then be re-allocated according to the same leverage ratios.

After 25 years of repeating the same process monthly, the $100M would become a theoretical figure of… $411.4 Billion thanks to a monthly geometric mean return of 2.81% (whereas an unleveraged equal-split across four managers – with monthly rebalancing – would result in a “paltry” $3.9 Billion). Of course, this improvement is only possible “in hindsight”.
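Here is a minimal sketch of that monthly notional re-allocation (assuming `returns` is the matrix of monthly manager returns and `lev` the leverage vector computed above):

```r
# Reset the notional allocation to each manager at the start of every month
equity <- 100                              # $100M starting capital
for (t in seq_len(nrow(returns))) {
  equity <- equity * (1 + sum(lev * returns[t, ]))
}
equity                                     # theoretical final equity after 300 months
```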

The other implication from this sample application is that the optimal f values can dictate a leverage higher than the maximum allowed (be it because of margin requirements, stock trading in a cash account, etc.). The Leverage Space Model caters for this, with the possibility of adding margin constraints (I have not looked into this yet, but this post on FOSS Trading talks about it).

Drawdown Constraints

One of the main problems usually raised with the concept of optimal f is that trading for growth-rate optimization is often not realistic, as it generates untenable levels of drawdown and volatility. Most investors, traders or managers would happily give up some return to stay within their acceptable levels of volatility and drawdown.

I will not detail the formula here (I’ll refer you to the book again) but Vince presents a way of calculating the probability of a specific drawdown. The main idea is to introduce a risk constraint to the model, so that instead of solely optimizing for maximum growth rate, one can optimize with a constraint on drawdown. For example:

Find the optimal f values for which the probability of a 30% drawdown over 12 periods does not exceed 50%. The n-dimensional curve is constructed in the exact same way, but any f-values that result in a probability of drawdown over the constraint threshold are ignored – this would usually result in all values around the peak being discarded.
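To give a feel for how the constraint works, here is a crude Monte Carlo sketch of the drawdown-probability idea (for illustration only – it is not the calculation LSPM actually performs):

```r
# Estimate the probability of a 30% drawdown over 12 periods for a candidate f vector,
# sampling period outcomes from the JPT (outcomes, probs); max_loss is each component's biggest loss
prob_drawdown <- function(f, outcomes, probs, max_loss,
                          horizon = 12, dd = 0.30, n_sim = 10000) {
  lev <- f / max_loss
  hits <- replicate(n_sim, {
    idx    <- sample(seq_along(probs), horizon, replace = TRUE, prob = probs)
    equity <- cumprod(1 + as.vector(outcomes[idx, , drop = FALSE] %*% lev))
    any(equity / cummax(c(1, equity))[-1] - 1 <= -dd)
  })
  mean(hits)   # f vectors for which this exceeds 0.5 are discarded by the constrained optimization
}
```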

This is exactly what I ran on the same four Wizard track records and the f-values obtained were as follows:

  • Campbell: f = 0.053919, leverage = 0.3229
  • Dunn: f = 0.002451, leverage = 0.007987
  • JWH: f = 0.297744, leverage = 1.08984
  • Millburn: f = 0.237712, leverage = 1.68351

The monthly geometric mean return drops to 2.68% (final equity of $282.7B). The Max Drawdown figure drops from 79% to 68%.

Slow despite the Snow

The main problem with adding a drawdown constraint to the optimization is the dramatic increase in computing time. Whereas the first optimization for the simple growth-optimal f values took seconds for 100 or 1,000 iterations of the algorithm, adding the drawdown constraint has a significant impact on the computation time.

The LSPM package uses another R package, snow, to leverage multiple processors through parallel computing and speed things up. Info for the techies: I am running an Intel Core 2 Quad Processor @2.40GHz and allocated three processors to the optimization process – but the running time came in at a disappointing three hours for 100 iterations. This is only for four components and 300 monthly periods. I initially had the idea of running the daily equity curves of a few hundred components through the LSPM package, but that will probably have to wait for the technology to improve.

I understand that this is mostly due to the heavy computational cost of evaluating the probability of drawdown. A less costly risk computation might make the running time more manageable.

First Impressions

This is a bit of an extended post (probably the longest on the blog so far) but it hopefully provides a decent step-by-step illustration of some of the concepts and how to apply them practically (there are more details in the “technical” appendix below).

The model is really an alternative portfolio allocation method and I cannot see how it could be applied directly to determine position sizing for a trading system. This is primarily because all component returns have to be split across identical periods, whereas trades from a single system do overlap. It might well be possible to use each instrument’s equity curve when traded through a specific system. Something to investigate – but with the high computation costs, running the optimization over a large diversified portfolio might be all but impossible.

On the other hand, I can see how the model might be useful to the manager running a programme made up of several systems and wanting to optimize the allocation to each system. Alternatively, one could split the system portfolio into several asset classes (Financials, Currencies, Energies, etc.) and optimize the allocation to each asset class.

Another aspect worth looking into is how useful the model is in a forward-looking mode (ie to determine optimal f/leverage to apply to each component for the next periods) and how this can be used/configured (over the whole history available at that time or over a rolling optimization window? Which length of data to use in that window?). This would obviously be dependent on how stable the component returns are over time (like for any aspect of back-testing).

Technical Appendix

Below is some additional material to support the explanations in this post and illustrate the step-by-step process.

I used the java app to generate both the Joint-Probability Table and the R commands to run the optimization.

The app requires a file per component, containing the equity curve with constant position sizing (ie. no reinvestment). For the four Wizard track records, the equity curve is simply the cumulative sum of the monthly percentage returns (representative of a constant position size of 100). Below are the four CSV files:

Campbell.csv
Dunn.csv
JWH.csv
Millburn.csv

When importing the files, the app bins the period returns and generates the JPT:

java-app

The optimization parameters can be configured via the second half of the screen below:

optimizer-options

The parameters of the optimization defined above are: maximum probability of a 30% drawdown over 12 periods must be less than 50%. The Maximum Calculated Cycles of 8 is a number used in the drawdown probability calculation. As I understand it, the drawdown probability is extrapolated off an 8-period calculation and the computation time goes up exponentially with this number.

The R button generates the R instructions to run with the LSPM package. Below is a file containing the R session that I used to run the examples in this post. The first run is a straight optimal f calculation with 100 iterations. The second run is the same with 1,000 iterations and the third run is the optimization with drawdown constraint.

You can check the joint-probability table (contained in the outcomes and probs arrays) generated from the track record input files as well as the command to run in R.

R-Session

If you want to experiment, you can download R from http://cran.r-project.org/bin/, along with the R LSPM package.


23 Responses

  1. prazor
    49 mos, 1 wk ago

    Great article!
    Thanks for sharing this.

    Note: I am new to lspm and R.
    I ask a lot of question and don’t expect you to answer them all. Just wanted to get them out my head…

    I am not sure I follow you when you say you don’t see how LSPM can be applied to trading systems.

    Aren’t the funds in your run examples of trading systems?

    Has anybody tested walkforward?

    Any own idea how to do a walk forward?

    I guess one would have to calculate the joint prob table incrementally in a loop, feed that to the optimizer and then keep track of each run’s result.

    As you indicate, what is a good window to use for the lookback? Also how big must the window be in order for the optimizer to produce valid results? Is there a way to calculate it?

    Could one solution be to feed the optimizer with bootstrap/dummy /realistic probs for the first periods?

    Anyway, very interesting topic!!

    Cheers!

  2. 49 mos, 1 wk ago

    Thanks prazor
    The funds use trading systems as underlying to generate their performance, however you only have access to their equity curve reported on a periodic basis – at which point you could theoretically re-allocate between each of them.
    This is quite different from the concept of position sizing for each and every trade that would be generated by a trading system when trading multiple instruments – as trades could overlap (eg 3 trades in Gold during 1 trade in Corn, etc.). The LSP model expects all components to have returns expressed in sequential and synchronised periods. As I mentioned in the post, there might be some workarounds which I will look into in another post.
    -Jez

  3. 49 mos, 1 wk ago

    For your other questions – I’ll cover them in later posts…

  4. Rick
    49 mos, 1 wk ago

    HI,

    Great post and explanation of optimal f but please allow me to be critical.

    I hope you realize that this analysis is based on hindsight, not to mention the high cost of rebalancing, which makes it unrealistic. Furthermore – and I think this is more important – increasing your stake in a fund does not in general cause the fund manager to increase his position size leverage, due to liquidity issues. Only because of that, the method is detached from reality.

    IMO this method of allocation is for Monday morning quarterbacks. I do not see any real value here because the future drawdown is unknown. Future drawdown levels not in line with the ones used to allocate can turn this method highly sub-optimal.

    I still think that the mean-variance method of allocation makes more sense, both fundamentally and in practice.

    This is what I think. Great work though!

  5. 49 mos, 1 wk ago

    Rick,
    Thanks for the comment.
    Of course I realize that the example used in the post is based on hindsight, hence my use of the past conditional tense (“would have been”). I also highlighted that this was only a theoretical example as the idea of resetting a notional account size every month (and other considerations) would make it impractical/impossible.
    This example is used more as an illustration of the model than as any “how-to” practical application in the real world – I don’t think the message of the post comes across as advice to apply this strategy “as is” with potential fund investments. On the other hand, I believe the LSPM concept could add some practical value to allocations between different components of a trading programme or system used by a trader/fund manager.

    Where I disagree with you is on the distinction you make with the Mean-Variance model. From a high-level point of view, the concept is identical: gather historical data and optimize the allocation between the different components based on this historical data. Then apply the allocation going forward. MVO does this by considering arithmetic returns and variance/covariance; LSPM does this by considering geometric returns and probability of drawdown (which is arguably more useful). Neither model “guarantees” that past data is representative of future data and both could produce sub-optimal allocations for future periods (you mention that future drawdown is not known, but neither are future variance and covariance – yet they are used as the main inputs in the MVO model).

    Finally, the conclusion of the post mentions that one should investigate how the model performs with a “forward-looking” approach (“predicting” future optimal values based on past data) – which is really the only way it would offer practical value (I am not sure many people care for this sort of a posteriori “prediction” on past data). This problem is the same for any sort of optimization based on historical data (including MVO) and will depend on the stability of the underlying inputs. This is something I will look at in a future post.

    Hope this clarifies things.

    Jez

    ps: thanks for using the expression Monday Morning quarterbacks – I did not have a clue what it meant before!

  6. Rick
    49 mos, 1 wk ago

    Hello,

    Thanks for the reply. I still think the method you presented is unrealistic contrary to MVO which is realistic. You wrote:

    “For example, starting with $100 million total capital, the first month would see $30.4M allocated to Campbell, $118.2M to JWH and $265.7M to Millburn.”

    But starting with $100 million you allocate a total of $414.3 million. This is a leverage factor of 4.143 and calculations do not include interest cost on the loan. Furthermore, I tried to make a note of this before; if an investor increases his subscription to a fund that does not translate immediately to a proportional increase in profits because the fund manager does not necessarily increase position size accordingly due to liquidity and other issues.

    On your comment about both methods relying on future data, MVO relies on quantities that can be (almost) non-stationary for various investments, like mean and variance. Drawdown is a “black swan” event. It cannot be predicted.

    I also have serious doubts that this method maximizes geometric growth regardless of being non-practical although I have not done the math.

    I think it would be good to compare this method to MVO but on equivalent amounts.

    Thanks again for the reply.

    P.S. Why you don’t work for GS yet? :)

  7. 49 mos, 1 wk ago

    Hi Rick, despite the fact that I presented the example without any consideration for practical implications (such as the issues you mention), the leverage could be obtained through notional funding instead of fund borrowing (I agree this would have an impact if the performance numbers included interest…)
    In terms of maximizing geometric growth, I am pretty confident that it does. I suggest you read the book if you want to double-check the maths – but that part is pretty solid.

    GS? Definitely no thank you! ;-)

  8. Severus
    49 mos, 1 wk ago

    Great post! And also a great exchange between Jez & Rick. Planted some new ideas in my head.

    What’s GS, if I may ask ?

  9. Cornelius
    49 mos, 1 wk ago

    Hello

    and thanks for this post on Vince’s approach!

    The main issue in this approach seems to be the notion of “maximum loss”.

    Even when you confine your analysis to historical (past) returns and losses of managers/systems it is unclear what maximum loss to take!

    Why do you take the largest MONTHLY loss? Why not take the largest weekly loss – that’s most likely smaller and permits higher leverage ;-)

    Or, if you prefer the safer side, take the largest quarterly loss – that’s probably bigger and reduces your leverage and risk…

    So, for me up to now, this notion of maximum loss appears to be totally unclear. Did Vince himself mention anything how to apply this concept to managers & trading systems?

    Thank you again for the interesting post!

    Cornelius

  10. 49 mos, 1 wk ago

    Cornelius,
    The maximum loss used in the formula is mainly a notation used to bound f between 0 and 1, f effectively representing the fraction of the greatest historical loss to risk (ie f=0.5 would mean that you risk half the biggest loss per trading unit per period). The actual leverage factor is largely independent of this largest loss (except that it cannot exceed 1 / largest loss), being tied instead to the complete stream of returns.
    Here we used the largest monthly loss because the period frequency was monthly and that is how the optimisation is run. If we used weekly periods, we would have picked the largest weekly losses.

    Of course, there is nothing guaranteeing that future losses will not exceed historical losses (indeed on an infinite timeline this is sure to happen) and the process is only a posteriori optimization – so this somehow needs to be taken into account. Vince did mention a practical application of the concept in a more robust way, which I’ll cover in a later post.

    What is good to keep in mind is that this does not solve the problem of how to decide which leverage to apply on future data; it just formalises the “measurement” of the leverage impact on performance/drawdown, etc., giving a more quantitative framework than simply running back-tests with different levels of leverage and measuring the performance in each case (MAR ratios, CAGR, etc.). You would still need to decide what leverage you would be happy to apply going forward, with all the uncertainty on future data – which is an educated “discretionary” guess in some respect.

  11. 49 mos, 1 wk ago

    @Severus: GS = Goldman Sachs

  12. Cornelius
    49 mos, 1 wk ago

    Thanks for your fast response and explanation – helped me to get more on track!

    Looking forward to your 2nd post on this topic.

    Greetings,
    Cornelius

  13. iain
    46 mos ago

    hi all

    i enjoy numbers and mathematics – except once it becomes difficult.

    we used to have a great business book shop (in my city) which had some special investment books – and some which were not so special! it is now selling upmarket paint & tiles. the books were led to a resting home called amazon – there is nothing like the feel of buying a hardback book for a fortune knowing what a great buy you have made.

    to make an omelette i believe the main ingredients are cheese and milk assisted by a hot pan.

    designing trading systems is not much more difficult. read vince’s book again and again until you know it backwards – but after you have read pardo’s book. then get some historical data – (lots if you can) and mix it up with that advanced excel you put away on the top shelf in the broom cupboard.

    the rest you can work out for yourselves.

    but (going back to Vince) i think some of you might need a rest – as you are getting a bit agitated over this largest historical equity drawdown being a moving target and unquantifiable estimate for future performance..

    you could easily try designing trading systems that have a known largest drawdown over a given time period. i understand buying call and/or put options can assist you in your quest.

    I thought i had read Vince’s book thoroughly – but i am all set to read it again – as i must have been making a cup of coffee when i should have read the chapter that involved risking more than one’s equity capital. i only do that with my weekly shopping bill!

    it was a very useful article – and deserved thoughtful responses.

    iain

    ps – if you are spending too much time on rebalancing the system, maybe a short stay in the curve-fitting rehab unit is in order. you might be testing variables which have little impact on performance, and, for the ones that do, testing values that are far too close to each other.

    pardo’s updated book on optimisation is a pretty good follow-up to his original book. it is a very sensible read for optimisation and evaluation strategies. it can also be useful reading aloud when the mother-in-law pops round for Sunday afternoon tea and scones – (just passing you understand, as one does when on a thirty mile re-route). Pardo’s book is not quite thick enough for propping up the sofa if a leg falls off – but luckily the mother-in-law’s is!!

  14. Verge
    44 mos, 3 wks ago

    This post is hugely beneficial for me. I am doing some research on LSPM. I know the post is old – but hope that you will see my question. My question is: you say (in the section with drawdown constraints) that “The monthly geometric mean return drops to 2.68%”. The R-Session file however shows a value of 1.027302 for $G – or “bestvalit”. Where does the 2.68% come from? In your first example (without drawdown) your “bestvalit” and geometric mean return are the same.

    Regards

    Chris

  15. 44 mos, 3 wks ago

    Verge,
    Glad you’re enjoying the post.
    I calculated the 2.68% figure by applying the weights to each portfolio component and calculating the equity curve (done in Excel).
    The 2.68% figure is the monthly geometric mean return based on the number of periods and the final equity amount.
    Not sure why the number is (slightly) different from the R session file… Maybe some rounding issues or potentially a typo in Excel…

  16. 37 mos, 3 wks ago

    Jez,

    My thanks for the work that you share here.

    You said:
    “This is quite different from the concept of position sizing for each and every trade that would be generated by a trading system when trading multiple instruments – as trades could overlap (eg 3 trades in Gold during 1 trade in Corn, etc.). The LSP model expects all components to have returns expressed in sequential and synchronised periods. As I mentioned in the post, there might be some workarounds which I will look into in another post.
    -Jez”

    I wondered therefore if you wouldn’t mind sharing your thoughts on this and how you have been considering employing LSPM in this case where trades do overlap or are flat within any given HPR.

    Grant

  17. 37 mos, 2 wks ago

    Grant,
    I have not worked on that too much since this post but the way I was thinking at the time was of using each market in a system as a daily stream of returns (ie as a market-system as Vince calls them). A flat trade during any HPR (daily for example) would simply have a return of 0 for that market-system.
    I have not tested this though..

  18. Yomi
    37 mos, 2 wks ago

    All,

    Does anyone have a java or c++ or c implementation of leverage space portfolio model?

    Thanks

    Yomi

  19. 37 mos ago

    Jez: Thank you for your reply; your response is pretty much in line with what we are currently doing.

    Yomi: Here you go —> http://blog.fosstrading.com/search/label/LSPM

    Grant

  20. SKaRe
    21 mos ago

    Hello Jez,

    If I may ask, how is the 23% derived for the 2-coin toss? I am still getting 25% using the Kelly formula (assuming that the Kelly f is optimal in the special coin-toss situation).

  21. 20 mos, 4 wks ago

    SKaRe, the 23% is simply the amount derived from the 3-d curve plotted (i.e. not calculated by formula).
    If you think about it, it makes sense for it to be less than for the single coin toss since two “positions” are taken simultaneously.

  22. SKaRe
    20 mos, 4 wks ago

    Oh Gawd, my pair of eyes didn’t notice this 3D curve. I wonder if this 3D curve can be plotted by us. The book does not detail anything :(

  23. 20 mos, 3 wks ago

    SKaRe, I believe you’ll find some info on how to “plot” these curves in Vince’s books but I found it better to actually use the R package used in the post above.
