September 26, 2018 - by David Hess
In 2017, our NFL Survivor Pool Picks customers reported winning more than four times as much prize money as one would expect, based on their pool size and their own number of entries.
This post reviews in detail how we measure the success of our subscribers in NFL survivor pools, and breaks down how our picks did compared to expectations across a number of different performance angles.
First, a quick history. We started posting survivor pool pick advice on TeamRankings.com with Week 1 of the 2010 NFL season. At first, we would just make a single “official pick” in our survivor blog post each week, assuming a modestly sized, standard-rules survivor pool.
(Although that kind of one-size-fits-all pick strategy is far from optimal for many types of pools, as skill and luck would have it, our official survivor picks finished the 2011 season 17-0. Still, we didn’t have a great idea of how many people were using them, how much money they won as a result, etc.)
After three seasons of blogging about survivor picks, we made the resource investment to build a full-fledged product offering a range of survivor pool data and tools. We released the first version of our premium survivor picks product for the 2013 NFL season.
However, before 2017, we didn’t have a precise way of knowing what types of pools our subscribers were playing in, how they were applying our advice to real-world survivor pools, or how they ended up doing compared to expectations for an “average” player. Early iterations of our survivor picks product simply focused on providing a ranked list of pick options by pool, along with some fairly generic written advice on how to split up multiple survivor entries. At first, we weren’t even saving each user’s past weekly picks in our product, let alone taking those past picks into account when making pick recommendations for the current week.
Most recently, our NFL Survivor Picks product underwent a huge redesign and upgrade before the 2017 season. The level of data we collect and pick customization we apply is now leagues beyond what we offered in previous years. As a result, 2017 is the first year for which we can provide highly detailed breakdowns of our survivor pick performance.
In case you aren’t familiar with our NFL Survivor Pool Picks product, here’s a quick overview:
As far as we know, we are the only site that has a system to optimize survivor pool picks for such a broad range of survivor pool rules, and secondarily, for multiple-entry survivor pick portfolios.
Because we take so many strategy factors into account, the weekly survivor picks we suggest can differ by pool, by entry, and by customer.
First, the rules of your survivor pool can have a massive effect on what the best pick is each week. For example, in a pool that requires multiple picks per week late in the season, saving teams with cushy late season matchups is more important. The size of your pool also matters, with far-in-the-future matchups being less important in smaller pools, which are more likely to end earlier in the season.
Second, every survivor entry is different, because teams can only be picked once. Even if the Patriots are the best pick in a given week for a given pool, some of a customer’s entries may have already used them earlier in the season, so the best available pick for each entry could be different.
Finally, even for entries with the same teams available, every survivor pick portfolio is different. A player with only a single entry is generally going to want to pick the best available team for that entry. But if a player has a 10-entry portfolio, and the Rams look like the best available pick for all 10 entries, it generally makes sense to pick a team other than the Rams with some of those entries, in order to avoid putting all of their eggs in one basket.
The combined effect of having all of these strategy factors accounted for in pick advice leads to a very high level of complexity in terms of calculations. It can also lead to a fairly wide variety of recommended picks depending on a specific subscriber’s situation. That’s especially true late in the season, when the available teams on each entry dwindle. In Week 17 of the 2017 season, for example, we suggested 17 different teams as a pick to at least one customer. (Of course, most of those were extreme corner cases. Only 8 different teams were suggested to more than 1% of users. And only 3 different teams were common pick suggestions in standard pools.)
The high level of customization that we now apply to survivor picks means that there is no “official TeamRankings survivor pick” in a given week. Instead, we have a distribution of picks that represents the sum of all the various picks we recommended across our entire subscriber base, based on their individual pool rules, past weekly picks, etc.
Consequently, there is no single set of picks we can track to tell us whether our suggestions did well. In addition, all we really care about is whether our pick recommendations give our subscribers an edge in their survivor pools. It’s impossible to determine that if all we know is that the top pick we advised to a given person survived 5 weeks, or 7, or 15.
So there’s really only one good way to measure the effectiveness of our NFL Survivor Pool Picks advice: we ask subscribers directly how our pick recommendations did for them, via a survey we email out at the end of the season.
In order to get custom pick advice from our NFL Survivor Picks product, customers have to set up their pool(s) on the site. That involves telling us their pool rules, as well as the overall pool size, and how many entries they personally are entering in the pool.
The end of season survey asks customers how they did in each specific pool they set up in our system. This allows us to not only get an idea of the overall performance of our pick suggestions, but also to look at how they fared based on various splits of the data (by pool rules, by pool size, etc).
Knowing how many customers won their pool is nice. But to get a real sense of whether our picks are providing an edge, we need to know what the baseline expectation should be. Is winning a pool 5% of the time good? 10%? 20%?
To define our baseline expectations, we assume every player in a given survivor pool is equally skilled. Then we calculate what percent of the prize pool our subscriber would expect to win, based on the number of entries they submitted and the overall pool size. That math is simply the number of customer entries divided by the total number of pool entries.
This gives us the expected prize share for every customer in every pool. It tells us how much our customers would expect to win if our pick advice was not providing any edge in the pool.
To calculate the actual prize share, we ask customers (1) if they won their survivor pool(s), and (2) how many other entries they had to split the pot with.
If they won, then their actual prize share is simply 100% divided by the total number of entries splitting the pot. If they won the whole pot, their prize share is 100%. If they split the pot with 1 other entry, their prize share is 50%. If they split the pot with 2 other entries, their prize share is 33.3%. And so on.
Dividing the actual prize share by the expected prize share gives us a “Winnings Multiplier.” A multiplier of 2 or 3, for example, tells us that our customers won 2x or 3x as much prize money as you’d expect an average player in the pool to win.
If our Multiplier is greater than 1, that means our pick advice has been delivering an edge, on average.
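The methodology above reduces to a few lines of arithmetic. Here is a minimal sketch in Python; the function names are illustrative, not from the TeamRankings codebase:

```python
# Sketch of the Winnings Multiplier methodology described above.

def expected_prize_share(user_entries: int, pool_entries: int) -> float:
    """Share of the pot an average player would win: their fraction of all entries."""
    return user_entries / pool_entries

def actual_prize_share(won_pool: bool, entries_splitting_pot: int) -> float:
    """If the pool was won, the pot is split evenly among the winning entries."""
    return 1.0 / entries_splitting_pot if won_pool else 0.0

def winnings_multiplier(actual: float, expected: float) -> float:
    """Greater than 1 means the picks delivered an edge over an average player."""
    return actual / expected

# Example: 2 entries in a 100-entry pool, winner split the pot with 1 other entry.
expected = expected_prize_share(2, 100)  # 0.02
actual = actual_prize_share(True, 2)     # 0.50
print(winnings_multiplier(actual, expected))  # prints 25.0
```

In other words, a customer who controls 2% of the entries but takes home 50% of the pot earned 25 times their baseline expectation in that pool.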
Now that we’ve explained our methodology for measuring success, let’s examine how our NFL Survivor Picks customers did in 2017, compared to an “average” pool player:
| Year | % Won Pool | Avg % of Pot Won | Actual | Expected | Multiplier |
|------|-----------|------------------|--------|----------|------------|
| 2017 | 24%       | 49%              | 12%    | 2.8%     | 4.3        |
Our customers won a prize in 24% of their pools in 2017. Their average “% of Pot Won” was 49%, which indicates that on average the winning customers split the pot with one other person.
That gave our customers an average prize share of 12%. Based on their number of entries and the overall size of their pools, we’d expect them to earn only a 2.8% prize share if our advice was providing no edge over the rest of the pool. What we actually saw, though, was that our customers won over four times as much as expected (12% / 2.8% is roughly a 4.3x multiplier).
The numbers above show overall performance. However, we provide picks for a wide variety of pool rules and sizes. It’s worth looking at performance by pool type or by other factors, to see if only certain types of pools perform well, or if the edge holds across various types and sizes.
First, here is customer performance by type of pool. This table is sorted from the most common pool type to the least. Also, note that we support combinations of these types, but if we break it down any further, the sample size gets too small to be meaningful:
| Pool Features | % Won Pool | Avg % of Pot Won | Actual | Expected | Multiplier |
|------------------------|-------|-------|-------|------|-----|
| Season Wins Tiebreaker | 21.3% | 51.5% | 11.0% | 1.9% | 5.9 |
| Continues Into Playoffs | 31.6% | 59.7% | 18.9% | 6.7% | 2.8 |
As you can see, in 2017 our picks delivered an edge in all types of supported pools, except for pools featuring Byes.
Now, here is performance by pool size:
| Pool Size | % Won Pool | Avg % of Pot Won | Actual | Expected | Multiplier |
|-----------|-----------|------------------|--------|----------|------------|
This is a pattern we’ve seen before in our office pool product performance: as pool sizes go up, the absolute win rate goes down, but the edge delivered by our picks goes up. This makes some sense. If you start a pool with, say, 20% win odds, there’s a realistic upper bound on how much we can improve those odds. We also suspect there is more “dead weight” in huge pools: players who make poor picks because they either don’t know any better or don’t put in the required effort.
One note on the 10,000+ pool size bin, which shows a 0% win rate. The sample size in that bin is small enough (fewer than 100 pools) that even if we delivered a 10x multiplier, we still wouldn’t expect to see any wins. A 10x multiplier would move your win odds from 1 in 10,000 to 1 in 1,000, and fewer than 100 pools at 1-in-1,000 odds works out to less than 0.1 expected wins. This sample is simply too small to tell us anything meaningful about our edge in giant pools.
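That small-sample math checks out with a quick back-of-the-envelope calculation. This sketch assumes one entry per pool and uses the bin's 100-pool upper bound:

```python
# Back-of-the-envelope check of the 10,000+ pool size bin.
# Assumes one entry per pool; 100 pools is the stated upper bound on the bin.

baseline_odds = 1 / 10_000          # 1 in 10,000 for an average entry
boosted_odds = 10 * baseline_odds   # a 10x multiplier moves odds to 1 in 1,000
pools_in_bin = 100                  # fewer than 100 pools in the sample

expected_wins = pools_in_bin * boosted_odds          # about 0.1 expected wins
prob_zero_wins = (1 - boosted_odds) ** pools_in_bin  # chance of seeing no wins

print(round(expected_wins, 3), round(prob_zero_wins, 3))  # prints 0.1 0.905
```

So even with a genuine 10x edge, a 0% observed win rate in this bin is the most likely outcome (roughly a 90% chance).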
Finally, here is performance by number of user entries in a pool:
| Number of User Entries | % Won Pool | Avg % of Pot Won | Actual | Expected | Multiplier |
|------------------------|-----------|------------------|--------|----------|------------|
We delivered an edge for our customers no matter how many entries they had in a pool. The sweet spot seems to be around 6 to 10 entries.
It makes some logical sense that the edge shrinks with even more entries: if there is one ideal entry, then every successive entry you place in a pool has a lower expected return on investment than the previous one. The sample sizes (not shown) in some of these bins are fairly low, though, so we’re not totally sure how much of this trend is real and how much is random.
Our first year of highly customized, automatically-updating survivor portfolio picks covering a huge variety of pool types is in the books.
Based on these subscriber survey results, moving from generic weekly write-ups (which by their nature can’t cover every little rules wrinkle, and can’t update as input data changes) to a customized, automated system was almost certainly a strongly profitable refinement for our customers. That was, of course, the motivation for making massive improvements to our NFL Survivor Picks product during the summer of 2017, so it was great to see an immediate impact.
Even in great years for our picks overall, not every customer is going to win their pool (not even close). But our customer base winning over four times as much as expected, on average, is a clear demonstration of the edge our product delivers. If that edge holds for long-term customers, the investment in TeamRankings survivor picks should pay off extremely well.