
Alternatives to ICM?

• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
Well, no feedback from anyone, so I've tried the discrete version of the problem.

[graphs: pushing and calling ranges, discrete model vs continuous model]

The discrete model with N possible hands seems to be converging nicely to the continuous model, and the pushing and calling ranges are similar even with only N=25 possible hands, which is nice. The lines are a bit raggedy because the EV functions aren't very sensitive to small changes in the ranges, as you can see from the smooth EV lines. My one worry is that the EVs from the discrete model don't agree with the EVs from the continuous model, even though you get basically the same ranges. I am troubled by this, but don't have time to sort it out now. I shall return!
• nibbana (Bronze, Joined: 04.12.2009, Posts: 1,186)
      Wow my brain just died.

      Pretty graphs jb, solid work
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
Originally posted by jbpatzer
The discrete model with N possible hands seems to be converging nicely to the continuous model, and the pushing and calling ranges are similar even with only N=25 possible hands, which is nice. The lines are a bit raggedy because the EV functions aren't very sensitive to small changes in the ranges, as you can see from the smooth EV lines.
      FMP. I had a couple of things the wrong way round. All seems consistent now. Back to the three man bubble soon.
• muebarek (Bronze, Joined: 31.07.2008, Posts: 532)
The discrete model seems to be a reasonable approach, since most of the Nash calculators (excluding the new beta of HoldemResources) use hand rankings as well (for example the Karlson/Sklansky ranking, which can be found as a text file in the "Equilator"). They don't account for non-transitive matchups like AK > 76s > 22 > AK either.

So the only difference from your model should be that they have N=169 (the number of possible starting hands in Hold'em if one doesn't distinguish between 7c6h and 7d6h) and their hand-strength functions, which (hopefully) contain the real Hold'em equities.
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
So I've written the code for the discrete three-handed bubble, and here are the results for N = 25 and N = 100. Again, these agree with the continuous model.

[graphs: pushing and calling ranges for N = 25 and N = 100]

      I can now go ahead and wrap the ICM function or potential ICM+ functions around this and simulate some bubbles.

      I managed to finish this off whilst in a very boring Skype meeting. If only they knew! :D
• muebarek (Bronze, Joined: 31.07.2008, Posts: 532)
      Originally posted by jbpatzer
      I can now go ahead and wrap the ICM function or potential ICM+ functions around this and simulate some bubbles.
Do you already have an idea of what your ICM+ functions will look like? I've never come up with a promising ansatz that satisfies the five conditions you posted earlier (condition five already assumes equally skilled players, which should be fine for the sake of simplicity) and really adds something that ICM lacks.

I used to toy around with an alternative (admittedly very naive) model, only to learn that it wasn't applicable in practice (due to runtime), or at least not falsifiable.
The idea was a Monte Carlo based "transmission" model.
It transferred an amount c of chips randomly from one stack to another (c was raised after a fixed number of "transmissions"). If one player "busts", his finishing position is stored and the simulation goes on until one player has all the chips. This is repeated a lot of times to extract the finishing-place probabilities for each stack.

The method had some pros and some bigger cons. I still think this might've given a better approximation than ICM, since it's more like what actually happens in a poker tournament (in contrast to the "lottery-ticket"-like approach of ICM). It could perhaps have allowed the structure to be taken into account (through the choice of c and the speed at which c rises), and one would even have had a chance to simulate edges by changing the probabilities of chip gain and loss. Unfortunately the runtime was way too bad, and I can't think of a way to calculate the resulting probabilities analytically - so testing the model against ICM the way you're going to do it now with ICM+ wasn't possible. I'm still a bit frustrated not to know whether the tournament equities I calculated that way (which were close to ICM, but with some differences concerning especially the short stacks) were actually an improvement. :f_cry: Oh well...
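
In code, the idea looks something like this (a minimal MATLAB sketch; the starting value of c, the raise schedule and the trial count are illustrative placeholders, not the exact values I used):

    % Stripped-down sketch of the "transmission" model described above.
    % Assumptions: c starts at 1 and is raised every 100 transmissions;
    % donor and receiver are picked uniformly among the surviving stacks.
    stacks0 = [15 10 5];                 % starting stacks in bb
    trials  = 1e4;                       % number of simulated tournaments
    nP      = numel(stacks0);
    placeCount = zeros(nP, nP);          % placeCount(i,k): player i finished k-th
    for t = 1:trials
        s = stacks0; alive = true(1, nP);
        c = 1; nTrans = 0;
        place = zeros(1, nP); nextBust = nP;     % worst finishing place still open
        while sum(alive) > 1
            idx  = find(alive);
            pair = idx(randperm(numel(idx), 2)); % pair(1) gives, pair(2) receives
            amt  = min(c, s(pair(1)));           % can't transfer more than the stack
            s(pair(1)) = s(pair(1)) - amt;
            s(pair(2)) = s(pair(2)) + amt;
            if s(pair(1)) == 0                   % a bust: record the finishing place
                alive(pair(1)) = false;
                place(pair(1)) = nextBust;
                nextBust = nextBust - 1;
            end
            nTrans = nTrans + 1;
            if mod(nTrans, 100) == 0, c = c + 1; end
        end
        place(alive) = 1;                        % last survivor wins
        for i = 1:nP
            placeCount(i, place(i)) = placeCount(i, place(i)) + 1;
        end
    end
    probs  = placeCount / trials;        % finishing-place probabilities
    equity = probs * [0.65; 0.35; 0];    % payout structure applied afterwards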
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
      My first idea for a new function is to calculate the prize pool equities using ICM, and then use some additional function to shift a bit of equity from the shorter stacks to the largest stack (remember, I'm only doing a three handed bubble). If I can parameterise the shift somehow, I can play off an ICM+ player against either ICM players or other ICM+ players (needs some experimentation) and optimize. It's all a bit vague so far, but I'm working on it. I firmly believe that the best approach is not to try to use my, or indeed anyone else's, poker knowledge to do this. A 'monkeys typing Shakespeare' approach is the way to go. It might even be worth trying to train a neural network, but I'd have to learn some more maths to be able to do this. After all, what you're looking for is a function that associates three chip stacks with three prize pool equities (two independent parameters). For fixed blind levels (this is the easiest place to start), that's all there is to it.
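
As a very rough sketch of what I mean (MATLAB; exact three-handed Malmuth-Harville ICM plus a placeholder shift, where the form of the shift and any value of k are just illustrations, not a worked-out proposal):

    function eq = icmPlus(s, pay, k)     % save as icmPlus.m
    % Sketch of the parameterisation idea: take the ICM equities, then move
    % a fraction k of each shorter stack's equity to the chip leader. The
    % functional form of the shift is a placeholder, not a proposal.
        eq = icm3(s, pay);
        [~, big] = max(s);               % index of the largest stack
        shift = k * eq; shift(big) = 0;  % k of each shorter stack's equity
        eq = eq - shift;
        eq(big) = eq(big) + sum(shift);  % the prize pool is conserved
    end

    function eq = icm3(s, pay)
    % Exact three-handed ICM (Malmuth-Harville).
    % s: 1x3 chip stacks; pay: 1x3 payouts for 1st, 2nd and 3rd.
        T = sum(s); eq = zeros(1, 3);
        for i = 1:3
            p1 = s(i) / T;               % P(i finishes 1st)
            p2 = 0;                      % P(i finishes 2nd)
            for j = setdiff(1:3, i)      % j takes 1st, then i beats the third
                p2 = p2 + (s(j)/T) * (s(i)/(T - s(j)));
            end
            eq(i) = pay(1)*p1 + pay(2)*p2 + pay(3)*(1 - p1 - p2);
        end
    end

For example, icmPlus([15 10 5], [0.65 0.35 0], 0.02) moves 2% of each shorter stack's ICM equity to the chip leader, while k = 0 recovers plain ICM.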
• muebarek (Bronze, Joined: 31.07.2008, Posts: 532)
      Originally posted by jbpatzer
      My first idea for a new function is to calculate the prize pool equities using ICM, and then use some additional function to shift a bit of equity from the shorter stacks to the largest stack (remember, I'm only doing a three handed bubble). If I can parameterise the shift somehow, I can play off an ICM+ player against either ICM players or other ICM+ players (needs some experimentation) and optimize.
I agree with the "monkeys typing Shakespeare" approach! Poker (and especially tournament poker) has indeed proven to be quite unintuitive. I'd rather rely on mathematical analysis than on experience.

But imo a parameterisation-type approach risks optimizing your model's Nash ranges against ICM-optimal opponents only. Since ICM isn't perfect, its Nash ranges will be somewhat exploitable, and therefore I guess you will be able to generate a positive ROI against two ICM players (three-handed) using an optimized ICM+ parameterisation. This does not necessarily mean that your optimized set of parameters really yields a better tournament equity approximation. It could just be an exploitative strategy against ICM.

edit: I could've saved the time spent thinking about how to write this in English, since pzhon said something similar earlier which I simply could've quoted :P
      Originally posted by pzhon
      Because it is 3-handed, it is conceivable that the other two players are essentially colluding against someone with a better equity model. Perhaps the player with a better equity model would still come out ahead, but not necessarily by an amount which represents how much better the equity model is. So, the ICM+ model which performs the best against ICM players might not be best in other senses.
edit2: well, an exploitative strategy against ICM would still be "nice to have" :f_cool:
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
      Hmmmm. How about this. What we want is a model that accurately represents the equity of a given set of stacks. If I run lots of simulations using ICM and find that in fact ICM does accurately predict the proportion of the prize pool each player wins in the long run, there's no need to try to improve it. If it doesn't, I should try to choose the parameters of an ICM+ model so that it does accurately represent the equity, given perfect Nash equilibrium play. That would maybe be a place to start? And yes, I could do the exploiting ICM play thing too. In fact, if ICM+ represents the equity perfectly, and ICM doesn't, presumably ICM+ could exploit ICM as it would be more accurate.
• muebarek (Bronze, Joined: 31.07.2008, Posts: 532)
      Originally posted by jbpatzer
      And yes, I could do the exploiting ICM play thing too. In fact, if ICM+ represents the equity perfectly, and ICM doesn't, presumably ICM+ could exploit ICM as it would be more accurate.
Agreed, since every exploitable strategy has to be inferior to the game-theoretical optimum.

      Originally posted by jbpatzer
      Hmmmm. How about this. What we want is a model that accurately represents the equity of a given set of stacks. If I run lots of simulations using ICM and find that in fact ICM does accurately predict the proportion of the prize pool each player wins in the long run, there's no need to try to improve it. If it doesn't, I should try to choose the parameters of an ICM+ model so that it does accurately represent the equity, given perfect Nash equilibrium play. That would maybe be a place to start?
Ok. Sorry for my confusion: What you're saying is that the tournament equities ICM predicts might not be equal to the tournament equities the players actually realise if they play according to ICM Nash ranges (same stack distribution assumed)?! Hmm, that makes sense - you're testing ICM for self-consistency, in a way. It's like ICM suggests Nash ranges which don't really lead to the tournament equities initially calculated by ICM. This couldn't happen if ICM yielded the exact tournament equities and therefore the real Nash ranges! Did I get your point? If yes, thanks a lot for the clarification! :)

The problem I still see with the parameterisation method is that there probably won't be one optimal set of parameters for every stack distribution; the optimum might vary a lot between different scenarios. But that does not mean it's not worth trying! :)
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
      Originally posted by muebarek

Ok. Sorry for my confusion: What you're saying is that the tournament equities ICM predicts might not be equal to the tournament equities the players actually realise if they play according to ICM Nash ranges (same stack distribution assumed)?! Hmm, that makes sense - you're testing ICM for self-consistency, in a way. It's like ICM suggests Nash ranges which don't really lead to the tournament equities initially calculated by ICM. This couldn't happen if ICM yielded the exact tournament equities and therefore the real Nash ranges! Did I get your point? If yes, thanks a lot for the clarification! :)
      You're not confused. I decided that what you and pzhon said made sense and this is a different suggestion.

      It's very odd doing this 'in public'. I often have to explain to my students that the smooth path from a difficult question to the right answer that you see in scientific papers is very misleading. In reality, you continually get things wrong, change your mind, and even change the question you're trying to answer. All this stuff just doesn't end up in the final paper. It's like being the ball in a pinball machine and trying to find your way out between the flippers. In the final presentation, the ball finds a smooth path out of the machine. In reality, you get your arse kicked repeatedly, and end up bouncing around randomly before you finally manage to escape.
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
I think I now have ICM working. Graphs below are pushing, calling and overcalling ranges with a 0.65:0.35:0 prize structure, equal stacks, and N=100, 50 and 25 respectively. I can now simulate a large number of tournaments with everyone playing the Nash ICM ranges and see whether the actual long-term winnings are equal to those predicted by ICM. I'll start with constant blind levels, and maybe add increasing blinds later.

[graphs: pushing, calling and overcalling ranges for N = 100, 50 and 25]
• muebarek (Bronze, Joined: 31.07.2008, Posts: 532)
      Originally posted by jbpatzer
      It's very odd doing this 'in public'. I often have to explain to my students that the smooth path from a difficult question to the right answer that you see in scientific papers is very misleading. In reality, you continually get things wrong, change your mind, and even change the question you're trying to answer. All this stuff just doesn't end up in the final paper. It's like being the ball in a pinball machine and trying to find your way out between the flippers. In the final presentation, the ball finds a smooth path out of the machine. In reality, you get your arse kicked repeatedly, and end up bouncing around randomly before you finally manage to escape.
      lol. so true!! and then poor students (like me) who have to work on those papers for their theses get to read stuff like "...if one applies these rather intuitive considerations, one sees immediately (after a trivial calculation)..." all the time and find themselves in the famous "WTF?! :f_confused: "-mode a lot :D

edit: which stack sizes did you assume? Equal stacks? Your results seem reasonable. The SB calls tighter than the BB with short stacks, but the ranges get almost equal for bigger stacks (as they should).
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
      Yes. Equal stacks.
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
      So I finally have something new and interesting to write about! :f_biggrin:

      Let's recap what I'm doing here. I'm looking at a Hold'em-like game, where relative hand strength is determined by a simple function that gives some qualitative similarities with Hold'em. In all the simulations below, the number of possible 'hands' is 25, ranked in order of decreasing strength from 1 to 25. Each 'hand' has an equal probability of being dealt to each player, and it is possible for players to be dealt the same 'hand'. In Hold'em, there are 169 possible starting hands, they are not all equally likely to be dealt, and their relative strengths do not decrease monotonically. Despite this, I strongly suspect that it's the dynamics of stack size and position that are most important on the bubble, not the intricacies of relative hand strength, and that this simple model captures this quite well.

I have written some code in MATLAB that lets me calculate the Nash equilibrium ranges for each player. I can also simulate HU play and a three-handed bubble. I've done some Monte Carlo simulations to see how well ICM predicts the EV of each player. For the bubble simulations there are 30 big blinds in total, and for all simulations the blind level does not increase (I could easily build in rising blinds, but let's keep things simple for now). My reasoning here is that in a super turbo on Full Tilt the bubble often arrives when the blinds are 30/60 and the average stack is 600 chips.
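
To give a flavour of the Nash calculation, here's a cut-down sketch of the heads-up jam/fold piece (pure best-response iteration; the hand-strength function Pwin is an assumed stand-in for the one I actually use, and equal stacks of S big blinds are assumed):

    % Cut-down sketch: Nash ranges for heads-up jam/fold with N discrete hands,
    % found by pure best-response iteration. Pwin is an assumed stand-in for
    % the real hand-strength function; equal stacks of S bb are assumed, and
    % hands are dealt independently and uniformly, so P(call) = (calling hands)/N.
    N  = 25;                  % hands ranked 1 (strongest) to N (weakest)
    S  = 10;                  % effective stack in big blinds
    sb = 0.5; bb = 1;         % blinds
    Pwin = @(i, j) 0.5 + 0.35*(j - i)/(N - 1);   % assumed P(hand i beats hand j)

    push = true(1, N); call = true(1, N);        % start from 'any hand' ranges
    for iter = 1:100
        % BB best response: call with j iff EV(call) > EV(fold) = -bb
        eqBB = zeros(1, N);
        for j = 1:N
            eqBB(j) = mean(arrayfun(@(i) Pwin(j, i), find(push)));
        end
        newCall = (2*eqBB - 1)*S > -bb;
        % SB best response: push i iff EV(push) > EV(fold) = -sb
        pc = sum(newCall) / N;                   % probability the BB calls
        newPush = false(1, N);
        for i = 1:N
            if pc > 0
                eqSB = mean(arrayfun(@(j) Pwin(i, j), find(newCall)));
            else
                eqSB = 0.5;                      % never called, equity irrelevant
            end
            newPush(i) = (1 - pc)*bb + pc*(2*eqSB - 1)*S > -sb;
        end
        if isequal(newPush, push) && isequal(newCall, call), break; end
        push = newPush; call = newCall;
    end
    fprintf('SB pushes top %d/%d hands, BB calls top %d/%d\n', ...
            sum(push), N, sum(call), N);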

In the HU simulations there's good agreement between the proportion of the chips each player has initially and their EV over 10000 tourneys. This is not really surprising. Each player is using the same strategy, and there's no asymmetry to the problem, since it's just the size of the smallest stack that matters. Because of this, when I do my bubble simulations, I stop once one player is eliminated and then divide up the prize pool according to stack size in order to speed the calculation up a little. The plot below is for HU with initial stacks of 12.5 and 7.5 BB.

[plot: HU EV vs number of simulated tourneys, initial stacks 12.5 and 7.5 BB]

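The settlement at the first bust then looks something like this (example stacks; this is one reading of "divide up the prize pool according to stack size", using the HU result that equity equals chip share):

    % Early-stopping settlement at the first elimination. Since HU equity
    % tracks chip share, the busted player gets 3rd-place money and the
    % other two split the rest in proportion to their stacks. Example stacks.
    pay  = [0.65 0.35 0];              % payouts for 1st, 2nd, 3rd
    s    = [18 12 0];                  % stacks when the first player busts
    live = s > 0; T = sum(s);
    eq   = zeros(1, 3);
    eq(~live) = pay(3);                                    % 3rd place
    eq(live)  = pay(2) + (pay(1) - pay(2)) * s(live) / T;  % HU equity = chip share
    % here eq = [0.53 0.47 0], which sums to the full prize pool
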
The graph below is for a typical three-handed bubble with a 0.65:0.35:0 payout structure and initial stacks of 15, 10 and 5 big blinds. Note that I take care to rotate the initial position of the button at the start of each tourney, and that I simulate 10000 tourneys as I expect the error to scale with 1/sqrt(N), so I should get about 2 decimal places of accuracy (1/sqrt(10000) = 0.01). This takes about 4 hours on my PC.

Before you ask, I'm not sure what sort of PC I have, but it is a couple of years old. I've never been really interested in hardware, and I just picture lots of little calculating elves running around a magic hamster wheel in the PC. I know that whenever I get a new grant or my computer officer is feeling generous, a new machine appears, all my code runs faster, and presumably the little elves have been given some sort of performance-enhancing drug. I also know my latest machine has two hamster wheels in it.

Anyway, as you can see there is a small but, I think, significant difference between each player's EV and what ICM predicts. In particular, as nibbana and I originally thought, the big stack does better than ICM predicts, and the other stacks worse.

[graph: cumulative EV per player vs number of simulated tourneys, with ICM predictions]

      No more graphs, but here are the numbers for some other situations. The first number is the initial stack size, and the second is the difference from the ICM prediction.

      0.65:0.35 (super turbo bubble)
      ---------------------------------------
      15 +0.017, 10 -0.009, 5 -0.008
      5 -0.010, 10 -0.004, 15 +0.014
      12 +0.006, 12 -0.002, 6 -0.004
      13.5 +0.008, 13.5 -0.003, 3 -0.005
      15 +0.016, 7.5 -0.014, 7.5 -0.002
      24 +0.016, 3 -0.012, 3 -0.004

      1:0 (freezeout)
      ---------------------------------------
      15 -0.007, 10 +0.001, 5 +0.006
      5 -0.001, 10 -0.002, 15 +0.003

      0.5:0.5 (satellite)
      ---------------------------------------
15 +0.028, 10 -0.008, 5 -0.020
      5 -0.032, 10 +0.014, 15 +0.018

Notice that not all equal stacks are equal! If there are two equal small stacks, the stack that has to act first is at a disadvantage, and vice versa for two big stacks. ICM gives equal stacks equal equity and can never capture this. Also, stacks of 5, 10 and 15 give a different answer from stacks of 15, 10 and 5, as their relative positions are different. I'm not entirely sure whether or not this is just noise in the simulation though. All these effects are amplified in a more extreme (satellite) payout structure, and (I suspect) disappear in a freezeout (winner-takes-all) structure.

      Just for a laugh (I really know how to have a good time! :f_biggrin: ) I also ran a simulation where the medium stack is a fish. As anyone who has played SnGs should expect, the fish sucks EV out of the big stack and gives it to the small stack along with some of his own EV. Sigh! :f_cry:

      Fish range (push 30%, call 15%, overcall 5%, any stack size, any position)

      0.65:0.35 (super turbo bubble)
      ---------------------------------------
      15 -0.002, 10(fish) -0.028, 5 +0.030


So how can ICM be improved upon? I now think that the correct sense in which an ICM+ would be 'better' than ICM is this: if we use ICM+ to calculate the Nash equilibrium, simulations like the ones above should give an actual EV that agrees with the EV predicted by ICM+. I would expect an ICM+ Nash player to do better than an ICM Nash player, not by exploiting him, but because his model would be a more accurate representation of reality. He would therefore be genuinely unexploitable, while the ICM player would do slightly worse because he miscalculates the position of the Nash equilibrium.

      Some more thought required now I think. What shall I try for ICM+? Some of the equity needs to be transferred from the small stacks to the big stack. Hmmmmm. I think I'll go back and read pzhon's and muebarek's posts more carefully in my quest for inspiration.
• muebarek (Bronze, Joined: 31.07.2008, Posts: 532)
wow. now that's a lot of interesting news! judging from your 3-handed graph, the resulting EV differences look stable after about 2000 simulations and seem to fluctuate by only +-0.001 or maybe 0.002, so as you said they're fairly significant and imo justify attempts at improving the ICM model.

also, these rather small EV fluctuations make it likely that the positional effects you noticed aren't just statistical noise. for studying the importance of position as a function of the payout structure, it would be helpful to have values for 5 - 10 - 15 in winner-takes-all and satellite structures as well.

since it's already quite late i'll give more detailed feedback tomorrow.

you did a great job! :s_thumbsup: i'm also positively surprised that the runtime was still bearable, since your nash calculation subroutine had to run so many times
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
      Originally posted by muebarek

also, these rather small EV fluctuations make it likely that the positional effects you noticed aren't just statistical noise. for studying the importance of position as a function of the payout structure, it would be helpful to have values for 5 - 10 - 15 in winner-takes-all and satellite structures as well.

you did a great job! :s_thumbsup: i'm also positively surprised that the runtime was still bearable, since your nash calculation subroutine had to run so many times
      I'll set my PC running on the 5-10-15 cases.

      I think the fact that there are only 25 possible hands is the main reason I can get this to work efficiently.
• muebarek (Bronze, Joined: 31.07.2008, Posts: 532)
      Ok. Time for some further commenting on this:

First of all, I have the impression that the EV differences you got out are really big (bigger than I thought, at least ^^)! I think this is the result of two effects:
- Of course, the one for which you and nibbana initially started this discussion.
- And imo (this is just a guess of mine), the big stack's play does not suffer as much from a slightly wrong idea of his EV as the short stacks' play does. The big stack is going to bully them anyway on the bubble (he isn't as risk-averse, since he can't bust out), but the short stacks probably tighten up their broke ranges too much because of their EV overestimation: having a higher $EV makes them more risk-averse against the big stack (due to the shape of the $EV(stack) curve [second derivative < 0]). This is exploited by the big stack.

So my guess would be that the real EVs are somewhere in between the ICM values and the EVs you got out of your simulation.

One could make the ansatz (more for thinking than for real calculation)

EVdiff = real EVdiff + exploitative EVdiff

where I'd assume that each shorter stack's exploitative EVdiff grows with his ICM risk aversion (here I still neglect the very important factor of position).

But of course it's hard to tell how much exploitative EVdiff is in there. Therefore I dug out my old Monte Carlo transmission model to compare its results to ICM and to the results of your simulation, which I've named ICMresults (1 and 2 for varying position). Maybe it doesn't help, but I guess it won't hurt either. By its nature, my model ignores position and Nash pushing effects, so it will probably have a self-inconsistency similar to ICM's, but at least it incorporates the short stacks' danger of busting (which the lottery-type ICM does not include), so it might at least give a lower limit for the real EVdiff. The values below come from 100,000 simulations.


Stacks:          15bb     10bb     5bb
ICM              0.448    0.357    0.196
ICMresults(1)    0.464    0.348    0.188
ICMresults(2)    0.461    0.353    0.186
Transmission     0.450    0.358    0.192

------------------------------------------------------------------------

Stacks:          12bb     12bb     6bb
ICM              0.3883   0.3883   0.2233
ICMresults(1)    0.394    0.386    0.219
ICMresults(2)    0.386    0.394    0.219
Transmission     0.389    0.390    0.221

------------------------------------------------------------------------

Stacks:          15bb     7.5bb    7.5bb
ICM              0.442    0.279    0.279
ICMresults(1)    0.458    0.265    0.277
ICMresults(2)    0.458    0.277    0.265
Transmission     0.444    0.278    0.278

------------------------------------------------------------------------

What can be seen here is that, in your (total chips = 30bb) setting, position seems to contribute at least as much to the EV differences as the stack distribution does. We can be quite sure that this isn't just caused by bad statistics! Furthermore, the corrections of the transmission model are quite negligible compared to the corrections your model provides. Both of these facts suggest that the future game is indeed what should be focused on, and that the approach you chose seems promising! It really seems that a possible ICM+ could not only exploit ICM players but really be a better approximation all around.

So, here's what we've already found out to consider when constructing ICM+:
- Shifting EV from the short stacks to the big stacks. Probably choosing a value a little less than what came out of the simulation seems reasonable from my point of view, but maybe you have a different opinion on that.
- Position is a crucial factor. Its importance seems to depend strongly on the payout structure, but that's still to be tested, since we don't have a result for 0.5:0.5 with 5-10-15 yet.
- Of course, prize money conservation has to apply in ICM+.
- Another guess: ICM+ should get close to ICM as the number of players left in the tourney increases (+ for smaller blinds), and should reduce to cEV in the heads-up limit.


I'd suggest finding out these things (which are imo important for constructing ICM+):
- How does the EV of one stack change when the other two stacks swap positions (depending on stack sizes and payouts)? For example, it's interesting to see in your comparison of 15-10-5 and 5-10-15 that the 10bb stack, whose position does not change, actually shows the biggest change in EV.
- Study some examples of extremely short stacks: take something like one cripple and two big stacks, or one huge big stack and a folding war between two ultra-short players.
- Imo it makes sense not to look at the tournament equity only, but at the changes in the finishing-place probabilities. This ansatz would allow you to completely separate the stack/position dynamics (which are represented in these probabilities) from the payout structure. This could be very helpful for realising ICM+, since the payout structure can easily be inserted into the EV formula and all ICM+ has to calculate are the finishing-place probabilities (see the sketch below).
EDIT: now that I think about this, it could cause problems, since the Nash ranges depend on the payouts and this is a major factor of your model.
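
As a tiny illustration of that separation (the probability matrix here is invented purely for the example):

    % Separating the dynamics from the payouts: if ICM+ outputs finishing-place
    % probabilities, any payout structure can be applied afterwards. The
    % probability matrix below is invented purely for the example.
    P = [0.50 0.30 0.20;               % row i: P(player i finishes 1st/2nd/3rd)
         0.33 0.37 0.30;
         0.17 0.33 0.50];
    payout = [0.65; 0.35; 0];          % super turbo bubble structure
    equity = P * payout;               % tournament equity of each player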


Oh, this was a long one! Sorry for that, but as you can see, this got me excited :)
• jbpatzer (Bronze, Joined: 22.11.2009, Posts: 6,955)
      Originally posted by muebarek

So my guess would be that the real EVs are somewhere in between the ICM values and the EVs you got out of your simulation.

      Yes. For sure.




Originally posted by muebarek
So, here's what we've already found out to consider when constructing ICM+:
- Shifting EV from the short stacks to the big stacks. Probably choosing a value a little less than what came out of the simulation seems reasonable from my point of view, but maybe you have a different opinion on that.
- Position is a crucial factor. Its importance seems to depend strongly on the payout structure, but that's still to be tested, since we don't have a result for 0.5:0.5 with 5-10-15 yet.
- Of course, prize money conservation has to apply in ICM+.
- Another guess: ICM+ should get close to ICM as the number of players left in the tourney increases (+ for smaller blinds), and should reduce to cEV in the heads-up limit.

      Agreed, but I want to focus on three handed for now.



Originally posted by muebarek
I'd suggest finding out these things (which are imo important for constructing ICM+):
- How does the EV of one stack change when the other two stacks swap positions (depending on stack sizes and payouts)? For example, it's interesting to see in your comparison of 15-10-5 and 5-10-15 that the 10bb stack, whose position does not change, actually shows the biggest change in EV.
- Study some examples of extremely short stacks: take something like one cripple and two big stacks, or one huge big stack and a folding war between two ultra-short players.

      I'm going to do these last two cases too. (4 runs = 16 hours = tomorrow morning).


Originally posted by muebarek
- Imo it makes sense not to look at the tournament equity only, but at the changes in the finishing-place probabilities. This ansatz would allow you to completely separate the stack/position dynamics (which are represented in these probabilities) from the payout structure. This could be very helpful for realising ICM+, since the payout structure can easily be inserted into the EV formula and all ICM+ has to calculate are the finishing-place probabilities.
EDIT: now that I think about this, it could cause problems, since the Nash ranges depend on the payouts and this is a major factor of your model.

      :)
      I agree with your edit. :s_cry:

      I now have a vague idea for how to shift equity from the small to big stack, but want to try it out first.

      Add me on Skype if you like. (jbpatzer)
• muebarek (Bronze, Joined: 31.07.2008, Posts: 532)
      After yesterday’s chat I had some thoughts on the ICM+ idea you presented.

I definitely see its merits. The correction is simple (which I like a lot) and only >0 for stacks between 0 and s_mean. I'm not sure yet whether I like having the maximum at s_mean/2 or not, since the term will get quite small for ultra-short stacks - so running the cases which I coincidentally demanded yesterday is going to give us some insight on that. It might actually be OK, since the absolute tournament equity is small for the extreme short stacks, so the corrections don't have to be too big to still have a decent relative effect.

I have some questions on how you're planning to implement this:
- Do you want to shift tournament equity (I'm gonna say TEQ) directly, or do you intend to correct the stacks and plug the corrected effective stack sizes (which aren't the real ones any more) into ICM to get the new TEQ? You suggested k = 0.25 yesterday, which makes me think you're gonna use the second method. For a direct TEQ shift I plugged in some numbers, and a k around 0.004 seemed to give reasonable results (the maximum resulting EVdiff would then be 0.01, which fits your data quite OK).
- How do you handle a 13-12-5 distribution, for example? In what ratio will you split up the equity you're taking from the 5bb stack among the big stacks?


I think it's worth checking how this idea works, but we still have the problem that this ansatz isn't sensitive to position and payout structure.

But fortunately I came up with a (very naïve) idea for how to expand your idea regarding position:
Judging from your results so far, it seems bad to have a shorter stack to the left and a bigger one to the right. So I'd correct every stack's tournament equity by

a*(sleft - s) + b*(s - sright)

where a and b could depend on the payout structure (raise them for flatter payouts). My guess would be that choosing a and b around 0.2*k (if you make a direct TEQ shift, that is) might give reasonable corrections. Of course it seems too simple to make it linear, but at least that guarantees conservation of total TEQ = total prize pool, as the quick check below shows.
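
A quick check of the conservation claim (seat orientation and magnitudes are just assumptions):

    % Check that the positional corrections sum to zero over the table, so
    % total TEQ is conserved for any a and b. The seat orientation (who counts
    % as 'left') and the magnitudes are assumptions.
    s = [15 10 5];                 % stacks in seat order around the table
    a = 0.001; b = 0.001;          % illustrative magnitudes only
    sleft  = circshift(s, -1);     % the stack to each player's left
    sright = circshift(s,  1);     % the stack to each player's right
    corr = a*(sleft - s) + b*(s - sright);
    disp([corr, sum(corr)])        % the sum is exactly 0 by construction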

      What do you think about this?