## Saturday, March 24, 2018

### Optimizing Lineups In the Most Painful Way Imaginable

Aaron Judge batting leadoff? Brandon Drury batting cleanup? What is this guy smoking?

If you haven't read Marc Carig's excellent piece on the Yankees' attempts to optimize their lineups, you can read it with an Athletic subscription here. If you don't have an Athletic subscription (first of all, wyd?), here are the most optimized and least optimized lineups against lefties that I generated for Carig.

How did I come to these lineups? How good are they against lefties? In this article, I'll detail the exact methodology for generating these lineups along with some figures.

The first step is to identify the run environment that our lineup will be playing in. Run environments generally are fairly consistent from season to season, but given the recent and dramatic trend upwards in run scoring, I used the 2017 AL run environment for this work.

I pulled MLB play-by-play data from 2017 AL teams at home (with the DH in play) and generated an RE24 matrix for that season for AL teams.

| Runners | 0 Out | 1 Out | 2 Out |
|---|---|---|---|
| ___ | 0.52 | 0.29 | 0.11 |
| 1__ | 0.89 | 0.54 | 0.23 |
| _2_ | 1.12 | 0.69 | 0.32 |
| 12_ | 1.36 | 0.94 | 0.38 |
| __3 | 1.49 | 0.94 | 0.44 |
| 1_3 | 1.73 | 1.19 | 0.50 |
| _23 | 1.95 | 1.35 | 0.59 |
| 123 | 2.32 | 1.60 | 0.71 |
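
The RE24 bookkeeping can be sketched in a few lines: the run value of any plate event is the change in run expectancy, plus any runs that scored on the play. A minimal sketch in Python using the 2017 AL values from the table above:

```python
# Run value of a single plate event from an RE24 matrix:
# RE(after) - RE(before) + runs scored on the play.
# Keys are (runners, outs); values are the 2017 AL figures above.
RE24 = {
    ("___", 0): 0.52, ("___", 1): 0.29, ("___", 2): 0.11,
    ("1__", 0): 0.89, ("1__", 1): 0.54, ("1__", 2): 0.23,
    ("_2_", 0): 1.12, ("_2_", 1): 0.69, ("_2_", 2): 0.32,
    ("12_", 0): 1.36, ("12_", 1): 0.94, ("12_", 2): 0.38,
    ("__3", 0): 1.49, ("__3", 1): 0.94, ("__3", 2): 0.44,
    ("1_3", 0): 1.73, ("1_3", 1): 1.19, ("1_3", 2): 0.50,
    ("_23", 0): 1.95, ("_23", 1): 1.35, ("_23", 2): 0.59,
    ("123", 0): 2.32, ("123", 1): 1.60, ("123", 2): 0.71,
}

def run_value(before, after, runs_scored):
    """RE24 credit for one event. Pass after=None when the play
    ends the inning (run expectancy drops to zero)."""
    re_after = RE24[after] if after is not None else 0.0
    return re_after - RE24[before] + runs_scored

# A leadoff walk: bases empty, 0 out -> runner on first, 0 out.
print(round(run_value(("___", 0), ("1__", 0), 0), 2))  # 0.37

# A two-out solo homer: bases stay empty, one run scores.
print(round(run_value(("___", 2), ("___", 2), 1), 2))  # 1.0
```

Averaging these deltas over every event of a given type, split by lineup slot, is what produces the per-slot run values in the next table.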

I then used a similar methodology to the one used by Tango, Lichtman, and Dolphin in Chapter 5 of The Book to break down the run values of different plate outcomes (generic out, strikeout, walk, single, double, etc.) depending on where in the lineup each occurred.

| Batting Slot | Generic Out | Strikeout | Walk | Hit By Pitch | Single | Double | Triple | Home Run |
|---|---|---|---|---|---|---|---|---|
| 1 | -0.27 | -0.26 | 0.31 | 0.33 | 0.42 | 0.71 | 1.15 | 1.30 |
| 2 | -0.27 | -0.27 | 0.32 | 0.31 | 0.45 | 0.70 | 0.96 | 1.36 |
| 3 | -0.27 | -0.26 | 0.29 | 0.33 | 0.44 | 0.74 | 1.10 | 1.40 |
| 4 | -0.29 | -0.28 | 0.30 | 0.32 | 0.46 | 0.77 | 1.10 | 1.42 |
| 5 | -0.29 | -0.29 | 0.30 | 0.34 | 0.46 | 0.78 | 1.04 | 1.41 |
| 6 | -0.28 | -0.27 | 0.31 | 0.33 | 0.43 | 0.72 | 1.11 | 1.40 |
| 7 | -0.28 | -0.27 | 0.30 | 0.33 | 0.45 | 0.71 | 0.99 | 1.43 |
| 8 | -0.28 | -0.28 | 0.31 | 0.31 | 0.45 | 0.73 | 0.97 | 1.43 |
| 9 | -0.27 | -0.28 | 0.31 | 0.36 | 0.44 | 0.74 | 1.04 | 1.32 |

We can check our results intuitively. For example, since lead-off hitters come to the plate most frequently without runners on, an out (strikeout or generic) is least harmful for lead-off hitters compared to other lineup spots, because it's not stranding runners. Meanwhile, since cleanup hitters come to the plate most frequently with runners on, home runs are more valuable for them than they are to any other hitters.

A final step is to look at how frequently each lineup slot comes to the plate over the course of a season. On average, a player receives the following number of plate appearances per season, depending on his batting slot.

| Batting Order | PA |
|---|---|
| 1 | 757 |
| 2 | 738 |
| 3 | 720 |
| 4 | 705 |
| 5 | 687 |
| 6 | 670 |
| 7 | 652 |
| 8 | 633 |
| 9 | 613 |

Now, we grab our projections. For the Yankees article, I got the Yankees' projected values against LHP from Steamer (thanks, Eno and Jared!) and broke their projections down into per-PA rates for each plate outcome (BB/PA, HR/PA, etc.). Then, I calculated the Runs Above Average per PA (RAA/PA) for each player in each lineup slot (Gary Sanchez is worth .0349 RAA/PA batting second, but only .0293 RAA/PA batting leadoff).
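
To make the RAA/PA step concrete, here's a minimal sketch: multiply each projected per-PA outcome rate by that outcome's run value for a given slot, and sum. The slot-2 run values come from the table above, but the hitter's rates below are illustrative placeholders, not actual Steamer projections.

```python
# RAA/PA for one hitter in one lineup slot: sum over outcomes of
# (per-PA rate) * (run value of that outcome in that slot).

# Run values for lineup slot 2, from the table above.
slot2_run_values = {"out": -0.27, "k": -0.27, "bb": 0.32, "hbp": 0.31,
                    "1b": 0.45, "2b": 0.70, "3b": 0.96, "hr": 1.36}

# Hypothetical per-PA outcome rates for one hitter (they sum to 1).
hypothetical_rates = {"out": 0.40, "k": 0.25, "bb": 0.12, "hbp": 0.01,
                      "1b": 0.13, "2b": 0.045, "3b": 0.005, "hr": 0.04}

raa_per_pa = sum(rate * slot2_run_values[outcome]
                 for outcome, rate in hypothetical_rates.items())
print(round(raa_per_pa, 4))  # 0.0152
```

Repeating this for all nine slots gives each player a nine-element RAA/PA profile, which is what the lineup search below consumes.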

Finally, I generated potential defensive configurations for the Yankees and then generated all possible permutations of the lineups (all 362,880 of them). Then, using the expected PA over the course of a season, coupled with each player's projected RAA/PA for their spot, I calculated the RAA value for each lineup over the course of a season.
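
The brute-force search itself is simple, since 9! = 362,880 orders is a small space. A sketch, assuming a hypothetical `raa_per_pa` lookup keyed by (player, slot index):

```python
# Exhaustive lineup search: score every batting order as the sum over
# slots of (expected PA in that slot) * (player's RAA/PA in that slot).
from itertools import permutations

# Expected plate appearances per lineup slot over a season (table above).
PA_BY_SLOT = [757, 738, 720, 705, 687, 670, 652, 633, 613]

def lineup_raa(order, raa_per_pa):
    """Season RAA for one batting order.
    raa_per_pa maps (player, slot index) -> projected RAA per PA."""
    return sum(PA_BY_SLOT[slot] * raa_per_pa[(player, slot)]
               for slot, player in enumerate(order))

def best_lineup(players, raa_per_pa):
    # 9! = 362,880 permutations: small enough to brute-force.
    return max(permutations(players), key=lambda o: lineup_raa(o, raa_per_pa))
```

Running `best_lineup` once per defensive configuration, then taking the max and min over all results, yields the best and worst orders shown below.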

The result? Optimized lineups based on these Steamer projections. Here are the best and worst lineup configurations, with their run values alongside them.

| Lineup Order | Best Order | RAA | Worst Order | RAA |
|---|---|---|---|---|
| 1 | Aaron Judge | 30.39 | Didi Gregorius | -29.43 |
| 3 | Gary Sanchez | 26.9 | Aaron Hicks | -5.69 |
| 4 | Brandon Drury | -4.95 | Brett Gardner | -22.89 |
| 5 | Aaron Hicks | -10.75 | Gregory Bird | -5.97 |
| 6 | Gregory Bird | -5.92 | Brandon Drury | -7.44 |
| 7 | Brett Gardner | -21.39 | Gary Sanchez | 20.79 |
| 8 | Didi Gregorius | -25.47 | Aaron Judge | 26.23 |
| Net RAA Value | | 26.38 | | -8.51 |

So, Judge is best used at the top of the order because his strikeouts hurt less and his walks help more. Stanton bats 2nd because, as the projected best hitter in the lineup against LHP, he gains the most in terms of additional PA and baserunners ahead of him.

The Net RAA Value represents how many runs the Yankees would score with a given lineup relative to average over the course of a full season. So if the Yankees were to face an LHP for 162 games and only use the best lineup, they would score 26.38 more runs than an average 2017 AL lineup, or 789 runs. If they used the worst lineup, they would score 8.51 fewer runs, or 754.

Considering that the Yankees don't face LHP for the entire season, the advantage is even smaller. Teams face LHP about 20-25% of the time, so rolling out the optimized lineup instead of the least optimized one is worth only about 7-8 RAA over the course of the season. And since no team would ever actually run out a lineup like the worst one shown, the true advantage is more like 3-4 RAA. Almost insignificant. But - as Carig says, "the Yankees refuse to settle" - they're not willing to let advantages like this pass them by.

## Monday, March 12, 2018

### Simulating the NCAA Tournament

This is March, as Jon Rothstein likes to remind us all, and with March comes the NCAA Men's Basketball Tournament! Bracket-mania is undoubtedly sweeping your social groups/school/workplace/inner cabal, as it is mine, and with it, questions of "How do I fill out my bracket?" "What upset should I pick?" "Why did my wife leave me?" "Who should I pick as my Final Four?" I too have been confronted with such questions, and since I have an extreme aversion to decision making, to avoid my phobia I created a March Madness Simulator to aid in my bracket making (and yours too)!

The simulator works using data from Kenpom.com, probably the single best college basketball analytics site (no, I'm not biased in giving someone at The Athletic a plug - I challenge you to name a better college basketball site anywhere). I scraped Adjusted Offense and Adjusted Defense scores for each tournament team, then min-max scaled each onto a 0-to-0.5 range. Because Adjusted Defense is best when it's lowest, I subtracted the scaled Adjusted Defense score from 0.5, so that for both metrics the best scores are the highest. Then, I added the two scores together to determine a team's overall ability. We'll call this their "Power Score". The top 10 teams in the country by Power Score are displayed below.
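
A sketch of the Power Score construction, with made-up adjusted efficiency numbers standing in for the real Kenpom values:

```python
# Power Score: min-max scale AdjO and AdjD each onto [0, 0.5],
# invert the defense score (lower AdjD is better), then sum.
def scale_to_half(values):
    lo, hi = min(values), max(values)
    return [0.5 * (v - lo) / (hi - lo) for v in values]

# Hypothetical adjusted efficiencies for three teams (not real Kenpom data).
adj_o = [127.0, 115.0, 95.0]  # adjusted offense: higher is better
adj_d = [88.0, 95.0, 110.0]   # adjusted defense: lower is better

o_scaled = scale_to_half(adj_o)
d_scaled = [0.5 - d for d in scale_to_half(adj_d)]  # flip so higher = better
power = [o + d for o, d in zip(o_scaled, d_scaled)]
print([round(p, 3) for p in power])  # [1.0, 0.653, 0.0]
```

With this construction, a team that leads the field in both offense and defense scores exactly 1.0, and the worst-in-both team scores 0.0.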

| School | Power Score |
|---|---|
| Virginia | 0.866 |
| Villanova | 0.858 |
| Duke | 0.829 |
| Cincinnati | 0.803 |
| Purdue | 0.799 |
| Michigan St. | 0.797 |
| North Carolina | 0.781 |
| Gonzaga | 0.776 |
| Kansas | 0.758 |
| Michigan | 0.757 |

With a Power Score calculated for each team, I then set about simulating tournament brackets. To simulate a matchup, I first calculated a team's expected odds of winning using Pythagorean Expectation. Ken Pomeroy has discussed how an exponent of 10.25 is generally most accurate when dealing with adjusted scores such as his own, so I used that exponent to calculate win probabilities. For example, let's say Virginia (Power Score of .866) played against Michigan (.757). To determine Virginia's win probability, the calculation would be:

$\frac{.866^{10.25}}{.866^{10.25} + .757^{10.25}} = 79.9\%$

So Virginia would win that game about 80% of the time that they played against Michigan.
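
As a function (the 10.25 exponent is the one Pomeroy suggests for adjusted efficiency scores):

```python
# Pythagorean expectation applied to two Power Scores.
def win_prob(power_a, power_b, exponent=10.25):
    a, b = power_a ** exponent, power_b ** exponent
    return a / (a + b)

print(round(win_prob(0.866, 0.757), 3))  # 0.799: Virginia over Michigan
```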

For "last four in" teams, I approximated their power level by finding the average Power Score of teams ranked 65-75 in AdjEM by Kenpom, and then plugged that into the 64-team bracket.

To simulate a game between two teams, I used a random number generator to spit out a random percentage and compared it to team A's expected win percentage. In our example, if the generator spits out anything between 0% and 79.9%, the simulation credits UVA with the win, but if it spits out a number over 79.9%, then Michigan gets credit for the win.
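
In code, the whole game simulation is one comparison. A sketch using the UVA-Michigan example:

```python
# One simulated game: draw a uniform random number in [0, 1) and
# compare it to team A's win probability.
import random

def simulate_game(team_a, team_b, p_a_wins, rng=random.random):
    """Return the winner of one simulated game."""
    return team_a if rng() < p_a_wins else team_b

# Over many trials, UVA should win roughly 79.9% of the time.
random.seed(0)
uva_wins = sum(simulate_game("UVA", "Michigan", 0.799) == "UVA"
               for _ in range(100_000))
print(uva_wins / 100_000)
```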

I scraped this year's bracket and then ran a simulation for each first-round game. Then, I took the winners from each of those games and pitted them against their appropriate opponents, and ran another simulation, and repeated until I had simulated the entire tournament.

Then I did that 99,999 more times.
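
The full Monte Carlo loop just repeats the bracket simulation and tallies how often each team survives each round. A sketch, assuming a power-of-two field listed in bracket order and any `win_prob` function mapping a pair of teams to a probability:

```python
# Simulate a single-elimination bracket many times and count, per round,
# how often each team is still alive.
import random
from collections import Counter

def simulate_bracket(teams, win_prob, rng):
    """Play one bracket; return the list of survivors after each round."""
    rounds, field = [], list(teams)
    while len(field) > 1:
        field = [a if rng.random() < win_prob(a, b) else b
                 for a, b in zip(field[::2], field[1::2])]
        rounds.append(field)
    return rounds

def round_odds(teams, win_prob, n_sims=100_000, seed=42):
    rng = random.Random(seed)
    n_rounds = len(teams).bit_length() - 1  # 64 teams -> 6 rounds
    reached = [Counter() for _ in range(n_rounds)]
    for _ in range(n_sims):
        for i, survivors in enumerate(simulate_bracket(teams, win_prob, rng)):
            reached[i].update(survivors)
    return [{team: n / n_sims for team, n in r.items()} for r in reached]
```

Calling `round_odds` on the 64-team field with the Pythagorean win probability produces exactly the round-by-round tables that follow.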

The result? Masses of dead digital basketball players, killed from the exhaustion of being forced to play several million basketball games in the span of an hour, and round by round probabilities for each team! The results are as follows:

#### Odds to reach the round of 32:

| School | Round of 32 |
|---|---|
| Virginia (1) | 99.79% |
| North Carolina (2) | 99.31% |
| Duke (2) | 99.29% |
| Purdue (2) | 99.27% |
| Tennessee (3) | 98.00% |
| Kansas (1) | 97.95% |
| Michigan St. (3) | 97.49% |
| Cincinnati (2) | 97.43% |
| Villanova (1) | 97.06% |
| Auburn (4) | 96.10% |
| Texas Tech (3) | 96.05% |
| Gonzaga (4) | 94.77% |
| Wichita St. (4) | 94.68% |
| Michigan (3) | 90.75% |
| Xavier (1) | 88.16% |
| Ohio St. (5) | 88.10% |
| Arizona (4) | 84.77% |
| West Virginia (5) | 84.21% |
| TCU (6) | 82.12% |
| Florida (6) | 81.35% |
| Clemson (5) | 77.55% |
| Houston (6) | 76.28% |
| Kentucky (5) | 73.05% |
| Texas A&M (7) | 72.49% |
| Virginia Tech (8) | 62.54% |
| Creighton (8) | 61.38% |
| Butler (10) | 60.33% |
| Seton Hall (8) | 60.07% |
| Miami FL (6) | 52.57% |
| Florida St. (9) | 51.42% |
| Last In (11/16) | 51.30% |
| Rhode Island (7) | 50.06% |
| Oklahoma (10) | 49.94% |
| Missouri (8) | 48.58% |
| Loyola Chicago (11) | 47.43% |
| North Carolina St. (9) | 39.93% |
| Arkansas (7) | 39.67% |
| Kansas St. (8) | 38.62% |
| Texas (10) | 37.74% |
| Alabama (9) | 37.46% |
| Providence (10) | 27.51% |
| Davidson (12) | 26.95% |
| San Diego St. (11) | 23.72% |
| New Mexico St. (12) | 22.45% |
| Murray St. (12) | 15.79% |
| Buffalo (13) | 15.23% |
| South Dakota St. (12) | 11.90% |
| Montana (14) | 9.25% |
| Marshall (13) | 5.32% |
| UNC Greensboro (13) | 5.23% |
| Stephen F. Austin (14) | 3.95% |
| College of Charleston (13) | 3.90% |
| Georgia St. (15) | 2.57% |
| Bucknell (14) | 2.51% |
| Penn (16) | 2.05% |
| Wright St. (14) | 2.00% |
| Cal St. Fullerton (15) | 0.73% |
| Iona (15) | 0.71% |
| Lipscomb (15) | 0.69% |
| UMBC (16) | 0.21% |

This is fairly straightforward - it's essentially the odds of each team winning its first-round matchup. Take Providence vs. Texas A&M, for example. Texas A&M's Pythagorean expectation says that they should beat Providence 72.48% of the time, and the simulated results of that game played over and over again have the Aggies winning 72.49% of the time - virtually identical to the prediction.

Where could we see some first-round upsets? Xavier seems to be fairly weak for a #1 seed, and for what it's worth, they rank only 15th in the country in Power Score, but they're also up against either Texas Southern (#247) or North Carolina Central (#309 in the nation). Our approximated Last-In value severely overrates these teams - while Xavier is undoubtedly the weakest #1 seed, their struggles probably won't come against schools in the bottom half of the country.

There is some bona-fide upset material here though! Miami (6) won only 52.57% of their games against Loyola (11), and Butler (10) is actually favored heavily over Arkansas (7). Other than that, it's business as usual - good teams beat bad teams most of the time. It's just the way it happens.

Onto the Sweet Sixteen, where things start getting dicey.

#### Odds to reach the Sweet Sixteen

| School | Sweet Sixteen |
|---|---|
| Virginia (1) | 92.70% |
| Duke (2) | 91.78% |
| Villanova (1) | 90.33% |
| North Carolina (2) | 83.07% |
| Purdue (2) | 83.05% |
| Cincinnati (2) | 81.51% |
| Michigan St. (3) | 78.68% |
| Tennessee (3) | 75.09% |
| Kansas (1) | 74.91% |
| Xavier (1) | 65.75% |
| Texas Tech (3) | 65.40% |
| Gonzaga (4) | 64.02% |
| Michigan (3) | 60.21% |
| Auburn (4) | 55.19% |
| West Virginia (5) | 50.16% |
| Wichita St. (4) | 45.73% |
| Arizona (4) | 44.15% |
| Kentucky (5) | 43.43% |
| Clemson (5) | 39.16% |
| Ohio St. (5) | 34.17% |
| Houston (6) | 33.15% |
| Florida (6) | 31.65% |
| TCU (6) | 19.71% |
| Seton Hall (8) | 16.83% |
| Florida St. (9) | 16.11% |
| Missouri (8) | 14.73% |
| Texas A&M (7) | 14.35% |
| Miami FL (6) | 13.52% |
| Butler (10) | 11.53% |
| Loyola Chicago (11) | 11.28% |
| Davidson (12) | 9.67% |
| North Carolina St. (9) | 8.13% |
| Last In (11/16) | 8.11% |
| Virginia Tech (8) | 6.28% |
| New Mexico St. (12) | 5.43% |
| Arkansas (7) | 5.39% |
| Texas (10) | 5.29% |
| Creighton (8) | 5.24% |
| San Diego St. (11) | 4.95% |
| Oklahoma (10) | 4.12% |
| Rhode Island (7) | 4.04% |
| Murray St. (12) | 3.79% |
| Buffalo (13) | 2.75% |
| Providence (10) | 2.56% |
| Alabama (9) | 2.53% |
| Kansas St. (8) | 2.07% |
| Montana (14) | 1.68% |
| South Dakota St. (12) | 1.15% |
| UNC Greensboro (13) | 0.66% |
| Stephen F. Austin (14) | 0.41% |
| Marshall (13) | 0.32% |
| Georgia St. (15) | 0.30% |
| Bucknell (14) | 0.30% |
| College of Charleston (13) | 0.22% |
| Penn (16) | 0.13% |
| Wright St. (14) | 0.11% |
| Iona (15) | 0.05% |
| Cal St. Fullerton (15) | 0.03% |
| Lipscomb (15) | 0.03% |
| UMBC (16) | 0.00% |

As one might expect, the top 16 teams are heavily favored to make the Sweet Sixteen, especially compared to the rest of the field. The worst 4th seed, Arizona, still made the Sweet Sixteen in 44% of the simulations. This isn't to say that the field will be solely the top 16 teams - only that they showed up the most often.

Note the drop-off in percentage appearance, however, for some of the favored upset teams like Butler and Loyola - both teams make it to the Sweet Sixteen just over 11% of the time! Why? If Butler wins, they have the pleasure of running into Purdue (2) on the way to the Sweet Sixteen 99% of the time, and if Loyola wins, they usually run into Tennessee. Ouch.

Did someone say "Elite Eight"? It's Elite Eight time.

#### Odds to reach Elite Eight

| School | Elite Eight |
|---|---|
| Virginia (1) | 81.78% |
| Villanova (1) | 76.24% |
| Purdue (2) | 60.35% |
| Duke (2) | 60.28% |
| Cincinnati (2) | 60.22% |
| North Carolina (2) | 52.72% |
| Kansas (1) | 46.82% |
| Gonzaga (4) | 44.47% |
| Michigan St. (3) | 34.31% |
| Xavier (1) | 28.81% |
| Michigan (3) | 28.76% |
| Tennessee (3) | 28.52% |
| Auburn (4) | 25.71% |
| Texas Tech (3) | 24.89% |
| Ohio St. (5) | 19.40% |
| Clemson (5) | 17.60% |
| Houston (6) | 12.67% |
| West Virginia (5) | 11.87% |
| Wichita St. (4) | 8.86% |
| Florida (6) | 8.39% |
| Kentucky (5) | 7.57% |
| Arizona (4) | 6.82% |
| Seton Hall (8) | 6.32% |
| Butler (10) | 4.56% |
| Texas A&M (7) | 4.48% |
| TCU (6) | 4.02% |
| Florida St. (9) | 3.59% |
| Missouri (8) | 3.11% |
| Miami FL (6) | 2.44% |
| North Carolina St. (9) | 2.43% |
| Creighton (8) | 2.18% |
| Virginia Tech (8) | 2.05% |
| Loyola Chicago (11) | 1.89% |
| Texas (10) | 1.62% |
| Arkansas (7) | 1.57% |
| New Mexico St. (12) | 1.12% |
| Davidson (12) | 0.87% |
| San Diego St. (11) | 0.82% |
| Last In (11/16) | 0.77% |
| Kansas St. (8) | 0.68% |
| Rhode Island (7) | 0.65% |
| Oklahoma (10) | 0.64% |
| Alabama (9) | 0.59% |
| Providence (10) | 0.39% |
| Murray St. (12) | 0.26% |
| South Dakota St. (12) | 0.19% |
| Montana (14) | 0.17% |
| Buffalo (13) | 0.11% |
| UNC Greensboro (13) | 0.09% |
| Stephen F. Austin (14) | 0.02% |
| Georgia St. (15) | 0.02% |
| Bucknell (14) | 0.01% |
| College of Charleston (13) | 0.01% |
| Marshall (13) | 0.01% |
| Penn (16) | 0.01% |
| Wright St. (14) | 0.00% |
| Iona (15) | 0.00% |
| Cal St. Fullerton (15) | 0.00% |

Failed to reach Elite Eight in simulations: Lipscomb (15), UMBC (16)

Our first casualties! In all 100,000 simulations, neither Lipscomb nor UMBC reached the Elite Eight at any point. Some, like Cal State Fullerton, Iona, and Wright State, made it only once. And then there's Virginia and Villanova, each reaching the Elite Eight in more than 75% of simulations (if you don't have either team in your bracket, you need to take a long, hard look at yourself in the mirror).

I find it surprising (and at the same time, unsurprising) that Kansas has relatively poor odds of reaching the Elite Eight compared to their fellow #1 seeds - it's the fate of sharing a bracket with Duke.

It's the Final (Four) Countdown! Ba-da-da-da, ba-da-da-da-da, ba-da-da-da...

#### Odds to reach Final Four

| School | Final Four |
|---|---|
| Virginia (1) | 61.51% |
| Villanova (1) | 56.56% |
| Duke (2) | 46.30% |
| North Carolina (2) | 31.78% |
| Purdue (2) | 25.65% |
| Gonzaga (4) | 24.39% |
| Cincinnati (2) | 24.09% |
| Michigan St. (3) | 23.67% |
| Kansas (1) | 15.88% |
| Michigan (3) | 15.27% |
| Xavier (1) | 12.34% |
| Ohio St. (5) | 7.91% |
| Tennessee (3) | 7.38% |
| Texas Tech (3) | 6.66% |
| Auburn (4) | 6.54% |
| Houston (6) | 5.43% |
| West Virginia (5) | 4.84% |
| Clemson (5) | 4.28% |
| Wichita St. (4) | 3.12% |
| Kentucky (5) | 2.60% |
| Arizona (4) | 2.13% |
| TCU (6) | 1.56% |
| Florida (6) | 1.55% |
| Texas A&M (7) | 1.26% |
| Seton Hall (8) | 1.04% |
| Butler (10) | 0.81% |
| Florida St. (9) | 0.73% |
| Missouri (8) | 0.62% |
| Creighton (8) | 0.51% |
| Virginia Tech (8) | 0.48% |
| North Carolina St. (9) | 0.28% |
| Miami FL (6) | 0.28% |
| Loyola Chicago (11) | 0.20% |
| Texas (10) | 0.18% |
| Arkansas (7) | 0.18% |
| Rhode Island (7) | 0.17% |
| Oklahoma (10) | 0.16% |
| San Diego St. (11) | 0.15% |
| Davidson (12) | 0.14% |
| Kansas St. (8) | 0.10% |
| New Mexico St. (12) | 0.10% |
| Alabama (9) | 0.09% |
| Last In (11/16) | 0.08% |
| Providence (10) | 0.05% |
| Murray St. (12) | 0.03% |
| South Dakota St. (12) | 0.03% |
| Montana (14) | 0.02% |
| Buffalo (13) | 0.01% |
| UNC Greensboro (13) | 0.01% |

Failed to reach Final Four in simulations: Lipscomb (15), UMBC (16), Cal State Fullerton (15), Wright State (14), Penn (16), Marshall (13), College of Charleston (13), Bucknell (14), Georgia State (15), Stephen F Austin (14), Iona (15)

Almost all of our 14+ seeds have fallen! And in true #GoACC fashion, three of our top four teams are ACC teams. There's a considerable gap between the top three teams (Virginia, Villanova, Duke) and the fourth best team (UNC). In bracket building, it looks like the Final Four will most likely consist of our top three, plus a surprise mystery team (BAW GAWD, THAT'S RHODE ISLAND'S MUSIC!).

We're almost there! Which teams reach the championship game?

#### Odds to reach Championship Game

| School | Championship Game |
|---|---|
| Virginia (1) | 48.43% |
| Villanova (1) | 38.35% |
| Duke (2) | 24.71% |
| Cincinnati (2) | 15.15% |
| Purdue (2) | 13.04% |
| North Carolina (2) | 11.35% |
| Michigan St. (3) | 10.12% |
| Gonzaga (4) | 8.38% |
| Kansas (1) | 5.22% |
| Michigan (3) | 4.57% |
| Tennessee (3) | 3.27% |
| Xavier (1) | 3.13% |
| Texas Tech (3) | 2.27% |
| Ohio St. (5) | 1.90% |
| Auburn (4) | 1.63% |
| West Virginia (5) | 1.60% |
| Houston (6) | 1.22% |
| Clemson (5) | 1.01% |
| Kentucky (5) | 0.97% |
| Wichita St. (4) | 0.84% |
| Arizona (4) | 0.70% |
| Florida (6) | 0.37% |
| TCU (6) | 0.32% |
| Butler (10) | 0.17% |
| Texas A&M (7) | 0.17% |
| Seton Hall (8) | 0.15% |
| Creighton (8) | 0.13% |
| Virginia Tech (8) | 0.09% |
| Missouri (8) | 0.09% |
| Florida St. (9) | 0.08% |
| Miami FL (6) | 0.06% |
| Texas (10) | 0.04% |
| Loyola Chicago (11) | 0.03% |
| North Carolina St. (9) | 0.03% |
| Arkansas (7) | 0.03% |
| Davidson (12) | 0.03% |
| Rhode Island (7) | 0.02% |
| Oklahoma (10) | 0.02% |
| Kansas St. (8) | 0.02% |
| San Diego St. (11) | 0.01% |
| New Mexico St. (12) | 0.01% |
| Alabama (9) | 0.01% |
| Last In (11/16) | 0.01% |
| Providence (10) | 0.00% |
| Murray St. (12) | 0.00% |

Failed to reach Championship Game in simulations: Lipscomb (15), UMBC (16), Cal State Fullerton (15), Wright State (14), Penn (16), Marshall (13), College of Charleston (13), Bucknell (14), Georgia State (15), Stephen F Austin (14), Iona (15), UNC Greensboro (13), Buffalo (13), Montana (14), South Dakota State (12)

I hope you have UVA or Villanova in the title game - according to our simulations, there's a 68% chance that at least one of them makes it.

Alright, here's what you've been waiting for: how frequently each team won the championship in our simulations!

#### Odds to win Championship Game

| School | Champions |
|---|---|
| Virginia (1) | 30.38% |
| Villanova (1) | 23.48% |
| Duke (2) | 13.23% |
| Cincinnati (2) | 6.79% |
| Purdue (2) | 5.85% |
| North Carolina (2) | 4.36% |
| Michigan St. (3) | 4.58% |
| Gonzaga (4) | 3.13% |
| Kansas (1) | 1.68% |
| Michigan (3) | 1.41% |
| Tennessee (3) | 0.90% |
| Xavier (1) | 0.83% |
| Texas Tech (3) | 0.64% |
| Ohio St. (5) | 0.48% |
| Auburn (4) | 0.41% |
| West Virginia (5) | 0.44% |
| Houston (6) | 0.29% |
| Clemson (5) | 0.25% |
| Kentucky (5) | 0.20% |
| Wichita St. (4) | 0.17% |
| Arizona (4) | 0.16% |
| Florida (6) | 0.08% |
| TCU (6) | 0.07% |
| Butler (10) | 0.02% |
| Texas A&M (7) | 0.03% |
| Seton Hall (8) | 0.02% |
| Creighton (8) | 0.01% |
| Virginia Tech (8) | 0.01% |
| Missouri (8) | 0.01% |
| Florida St. (9) | 0.01% |
| Miami FL (6) | 0.01% |
| Texas (10) | 0.01% |
| Loyola Chicago (11) | 0.00% |
| North Carolina St. (9) | 0.00% |
| Arkansas (7) | 0.00% |
| Davidson (12) | 0.00% |
| Rhode Island (7) | 0.01% |
| Oklahoma (10) | 0.00% |
| Kansas St. (8) | 0.00% |
| San Diego St. (11) | 0.00% |
| New Mexico St. (12) | 0.00% |
| Alabama (9) | 0.00% |
| Last In (11/16) | 0.00% |

Never won a championship in simulations: Lipscomb (15), UMBC (16), Cal State Fullerton (15), Wright State (14), Penn (16), Marshall (13), College of Charleston (13), Bucknell (14), Georgia State (15), Stephen F Austin (14), Iona (15), UNC Greensboro (13), Buffalo (13), Montana (14), South Dakota State (12), Providence (10)

Virginia, Villanova, and Duke have double-digit championship percentages, and everyone else is playing for hope. All I can do is pray for someone to knock off Duke and save us from misery.

If you're interested in the code and raw figures I used to calculate this, it's up on github here! Have fun with it, and as always, to HELL with georgia!

## Friday, March 2, 2018

### Tespa Hearthstone Collegiate Championship Meta Report: Week 1

As an analytically minded person, I'm very aware of how using analytics to make decisions translates to real-world success. I've written mostly about the role analytics can play in baseball, but analytics has its place in Hearthstone as well - an esport that is near and dear to my heart. As a participant in the 2017 Tespa Hearthstone Collegiate Championship tournament, I realized that I could use data to my advantage and to others' advantage, and so I've prepared this meta report on the Tespa meta to explore how the tournament is shaping up.

### Tournament Format

For those of you unfamiliar with the format of the Tespa tournament, I will briefly explain - teams from schools across the country compete in a Swiss-style format for the regular season of the tournament. In order to qualify for the next stage of the tournament, teams must go at least 5-2 in regular-season play. Once a team reaches 3+ losses, it is not assigned any further matches and is considered eliminated from the tournament.

Matches are played online against opponents via the Hearthstone friend system. Teams bring four Standard decks, each from a different class; both teams view each other's decklists simultaneously, and each bans one of the opponent's decks. Teams then queue up with one of their remaining decks and play a best-of-5 match. Once a team wins with a deck, that deck is banned for the rest of the match - hence, a team must win at least one game with each unbanned deck in order to win the match.

### Deck Data

From scraping the deck data, it was quite obvious which classes were most popular - by a considerable margin. Of 773 teams for whom deck data was available, 366 brought exactly Priest, Mage, Warlock, and Paladin - 47% of all teams. Chances are if you didn't bring that exact lineup in week 1, your opponents did. This is very much in line with the current ranked meta in Standard - per Metastats.net, 6 of the top 7 archetypes on ladder belong to those four classes, and according to HSReplay.net, Priest, Mage, Warlock, and Paladin are the top four most played ladder decks (or they were before the ranked ladder went down for maintenance).

#### Classes of Decks Brought, Week 1

Almost every team brought some form of a Warlock deck - 93% of all teams! The second most-brought class was Paladin at 81%. As could be inferred from above, not many teams brought Warrior or Shaman decks, though teams certainly did attempt to spice things up in their fourth deck slot - Rogue, Hunter, and Druid made a good number of appearances in week one decklists.

#### % of Teams Bringing a Class, Week 1

There was also very little diversity with regard to archetypes. The dominant archetypes of the dominant classes were out in full force, along with complementary variants. The most popular archetypes were Secret Mage, Murloc Paladin, Cube Warlock, Spiteful Priest, Control Warlock, Dragon Priest, and Aggro Paladin. These archetypes again comprise 6 of the top 7 archetypes according to Metastats.net, so it's unsurprising to see them represented in such force.

In terms of bring rates, Secret Mage was brought by 54.2% of all teams, Murloc Paladin was brought by 53.6% of all teams, Cube Warlock was brought by 52.9% of all teams, Spiteful Priest was brought by 41% of all teams, and then no other archetype was brought by more than 29% of all teams. Of note at the bottom of the list, to say that some teams brought unique and unusual decks would be an understatement - I recorded at least one Recruit Hunter, one Quest Druid, and one Buff-Paladin. One enterprising team brought a lineup consisting solely of C'Thun decks!

### Game Queuing Data

This year's Tespa Collegiate tournament differs from previous years in that teams can now view opponents' decklists before banning a deck, whereas previously, teams could only view an opponent's classes before banning. We can contextualize the impact of this change by looking at class ban rates versus archetype ban rates.

Note that our ban-rate data only includes winning classes. Tespa does not make specific ban-rate data publicly available, but we can infer the banned class of a winning team based on what three decks were used to win. However, given the nature of the conquest format, it is possible that the losing team might queue up with only two of their unbanned decks, or even one - hence, that data is absent. Thus, the ban-rate data may be skewed in that the teams represented in the ban rate data won their respective matches, and having a favorable class ban may have contributed to that. Still, the ban data is revealing regardless.
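
The inference itself is straightforward: a winning team's banned deck is whichever of its four registered classes never won a game. A sketch:

```python
# Infer a winning team's banned class: the one registered class that
# never appears among the decks the team won games with.
def inferred_ban(brought, won_with):
    """brought: the 4 classes a team registered;
    won_with: the classes of the decks the winner won a game with."""
    missing = set(brought) - set(won_with)
    # Only unambiguous when the winner won with all three unbanned decks.
    return missing.pop() if len(missing) == 1 else None

print(inferred_ban(["Warlock", "Priest", "Mage", "Paladin"],
                   ["Priest", "Mage", "Paladin"]))  # Warlock
```

As the paragraph above notes, this only works for match winners, which is exactly why the ban-rate data carries a winners' bias.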

Below is the ban data for 318 Tespa matches, or 318 individual teams. Note that I recorded 410 matches that took place last week; 92 of them either resulted in forfeits, involved teams that failed to submit their decklists, or involved corrupted decklists.

Warlock was the overwhelming favorite to be banned. Despite being only somewhat more popular than classes like Priest, Mage, and Paladin, Warlock was banned nearly three times as frequently as the next-most-frequently banned class, Paladin. By itself, Warlock accounted for 49% of all bans. While this is partly a result of teams bringing Warlock most frequently, the high ban rate for Warlock is indicative of teams' fear of Cube Warlock.

#### Banned Classes, Week 1

But looking at the archetype ban data, it becomes apparent that teams either were not aware that they could look at decklists prior to matches, or were not aware of the matchups surrounding them. The chart below shows how frequently each Warlock archetype was banned when it was brought by a team. You'll notice that the ban rate for Cube Warlock is very similar to those of the other Warlock archetypes.

#### Warlock Archetypes Ban%, Week 1

However, ladder win rates suggest that Control Warlock matches up rather poorly against two of the three most popular decks from other classes, Spiteful Priest and Secret Mage:

| Archetype | Spiteful Priest | Murloc Paladin | Secret Mage |
|---|---|---|---|
| Cube Warlock | 47% | 51% | 41% |
| Control Warlock | 35% | 59% | 39% |
| Zoo Warlock | 52% | 49% | 45% |

While Control Warlock actually fares well in the Murloc Paladin matchup, its ladder win rates suggest that it can be more easily farmed for wins by Spiteful Priest and Secret Mage, thanks to the nature of the Conquest format. Hence, it is more beneficial for teams running those archetypes (of which there are many in this tournament) to avoid banning Control Warlock, since it is a weaker target than Cube Warlock. This indicates that many players were either ignorant of their ability to look up their opponent's decklist prior to the match, or ignorant of the matchups between Control Warlock and the meta decks of the tournament.

Note, however, that we are using ladder win rates to compare matchups, which might not be representative of the tournament meta. We will return to this subject later in this article.

As discussed before, it looks beneficial for teams running the most popular decks to avoid banning Control Warlock. But note that Murloc Paladin, the most popular deck in the tournament, suffers heavily against Control. Therefore, teams will want to avoid accidentally queuing into Control Warlock with Murloc Paladin. While teams' deck choices in later games of a match vary, we can look at which class teams most frequently queued up with first.

For the most popular archetypes, I looked at which archetypes I knew were available to teams (i.e. all decks that I knew were unbanned - hence we have a case of winners bias again), and found how frequently teams who brought that deck queued up first with it. Teams with Warlock available brought that class out first most frequently, 75% of the time for Cube Warlock, 63% of the time for Control Warlock, and 59% of the time for Zoo Warlock. Teams with Jade Druid led with that deck most frequently after that, queuing up first with it 41% of the time.

#### % Of Teams with a given unbanned archetype going 1st with it, Week 1

Thus, teams with decks that are susceptible to losing to a given Warlock archetype and have not banned their opponent's Warlock should consider avoiding queuing with that deck first to avoid the matchup. For example, Zoo Warlock has a slight advantage over Spiteful Priest, so teams running Spiteful Priest against Zoo Warlock should avoid queuing with Spiteful Priest first.

### Class and Archetype Win Rates

Which are the most powerful decks in the current Tespa meta? To examine this question, I looked at all games of the tournament and compared each class and deck archetype's win rate to each other. First, the classes.

Paladin, not Warlock, surprisingly emerges on top. This makes sense, however - Paladin was not banned as frequently as Warlock, and Murloc Paladin is arguably a better archetype on ladder than any current Warlock archetype. Warlock has a sub-50% win rate, however - on the surface, this makes little sense for a class that was brought by 90%+ of all teams and banned in almost 50% of matches. There may be multiple factors at work here. For starters, the Conquest format allows teams to simply avoid Warlock decks that they do not want to face, so teams that are unprepared for Warlock can often dodge it. And the decks that do face off against Warlock may be running techs designed to target it. In that sense, it's impressive that Warlock has a 49.7% WR in the face of all that.

#### Class Win Rates, Week 1

Perhaps Warlock's sub 50% win rate also comes from the saturation of archetypes weaker than the much-feared Cube Warlock. Cube and Zoo Warlock posted win rates of 51.3% and 51.2% respectively, but Control Warlock had a rougher go at it, posting a win rate of 48.2%.

Several surprising things pop out regarding the archetype win rates. Aggro Paladin posted a win rate ~3% higher than Murloc Paladin, likely due to teams running Hungry Crabs in their decks. Spell Hunter, despite being brought by only 130 teams, posted a stellar 58% win rate. And Dragon Priest was one of the worst archetypes among the most popular decks, posting a 46% win rate.

#### Archetype Win Rates, Week 1

In terms of matchups, it's not hard to see why Spell Hunter was so successful - its only weak matchups were against Jade Druid and Control Warlock. Consider, however, that the small sample of games involving Spell Hunter leaves plenty of room for random variance to affect these results - per Metastats.net's ladder results, Spell Hunter has a sub-50% win rate against most of the major archetypes except Secret Mage.

| For/Against | Spell Hunter | Dragon Priest | Aggro Paladin | Jade Druid | Zoo Warlock | Murloc Paladin | Cube Warlock | Control Warlock | Spiteful Priest | Secret Mage |
|---|---|---|---|---|---|---|---|---|---|---|
| Spell Hunter | 50% | 50% | 50% | 33% | 50% | 56% | 83% | 33% | 56% | 70% |
| Dragon Priest | 50% | 50% | 25% | 20% | 33% | 44% | 47% | 50% | 15% | 59% |
| Zoo Warlock | 50% | 67% | 0% | 43% | | | 100% | | 50% | 44% |
| Cube Warlock | 17% | 53% | 40% | 80% | 0% | 53% | 50% | 40% | 56% | 38% |
| Control Warlock | 67% | 50% | 83% | 0% | 38% | | 60% | 50% | 40% | 29% |
| Spiteful Priest | 44% | 85% | 0% | 67% | 50% | 31% | 44% | 60% | 50% | 59% |
| Secret Mage | 30% | 41% | 50% | 35% | 56% | 44% | 62% | 71% | 41% | 50% |

### Tech Card Success Rates

Given the prevalence of Warlock, Paladin, and Druid, it would seem to be in teams' benefit to run tech cards to counter each: most Warlock archetypes are countered by Spellbreaker, most Paladin archetypes by Hungry Crab, and most Druid archetypes by Skulking Geist. To see how tangible an impact running these cards had on these matchups, I looked at the win rates of decks running these cards against these classes.

The impact of playing Hungry Crab and Geist is immediate against Paladin and Druid, but Spellbreaker does not appear to be as useful a tech card against Warlock archetypes in the Tespa meta, with its win rate just about equal to that of decks facing Warlock otherwise. Spellbreaker still has its uses in the tournament, however, in countering other decks, especially given the current silence meta on ladder - decks running Spellbreaker in the tournament had a .628 win rate overall.

| Card | Against Class | Win Rate |
|---|---|---|
| Spellbreaker | Warlock | 0.495 |
| Skulking Geist | Druid | 0.583 |

### Methodology

I'll conclude my report with a brief explanation of how I scraped the data, and some considerations regarding the data.

My data essentially comes in two sections: the decklists from week one (of particular note now that teams can see opponents' decklists prior to the match!) and the Tespa HS Championships match results (scraped from the tournament site). I scraped both sources and classified each deck based on its class and archetype (I looked at whether or not the core cards of an archetype were present in the deck, based on the 24 most popular archetypes from Metastats.net - so if a deck contained "Carnivorous Cube", "Voidlord", and "Doomguard", I classified the deck as "Cube Warlock"). My scraper, along with my week 1 data (decklists, archetype classifications, etc.), is available on my github here.
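
The archetype classification can be sketched as a core-card subset check. The Cube Warlock cards come from the example above; the Zoo Warlock core cards are my assumption for illustration:

```python
# Core-card archetype classifier: a deck gets the first archetype whose
# core cards all appear in it, otherwise "Other".
CORE_CARDS = {
    "Cube Warlock": {"Carnivorous Cube", "Voidlord", "Doomguard"},
    "Zoo Warlock": {"Flame Imp", "Kobold Librarian"},  # assumed core cards
}

def classify(deck, core_cards=CORE_CARDS):
    for archetype, core in core_cards.items():
        if core <= set(deck):  # every core card is in the deck
            return archetype
    return "Other"

print(classify(["Carnivorous Cube", "Voidlord", "Doomguard", "Dark Pact"]))
# Cube Warlock
```

With overlapping archetypes, the order of `CORE_CARDS` matters: list the more specific archetypes first so they win the subset check.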

Unfortunately, there's a lot of incomplete data. Many teams failed to submit decklists for week 1 and thus forfeited. I also encountered one or two teams with corrupt decklists, or with only three decklists submitted. In scraping match results, I counted games involving these teams as forfeits. There were also many forfeits reported on the match site itself by way of no-shows. Analysis of decks brought to the tournament may include teams who were involved in a forfeit, but match data analysis does not take forfeits into account. In all, about 22.4% of matches scraped were forfeits.

There are sadly some limits to the data. I do not have access to play-by-play data of the kind available from HSReplay.net, so I can't perform more advanced calculations such as win rate when drawing a particular card - though a tournament game of Hearthstone should play out roughly the same as a high-rank ladder game. If you're interested in the particulars of that data, HSReplay has it readily available and I highly recommend their site (and especially their python package!). Still, the data I could gather is extremely useful and fairly unique to the tournament. Hopefully, you learn something that gives you an edge (though not against my team!).

### One Brief Note

If you've made it this far, I just wanted to say thanks for reading - this report took a lot of time and effort. If you're impressed with it, feel free to check out the rest of my site for baseball analytics. I'm a freelance/partially employed baseball writer/analyst with interest in doing analysis for esports and baseball. If you want me to write for your site, please contact me on twitter at https://twitter.com/John_Edwards_! I'd love to write for you.