
Hawks, Doves, & Owls: How the NCAA Men's Basketball Tournament is Like an Election - Chp. 15


by Jess Behrens

© 2005-2018 Jess Behrens, All Rights Reserved​

Typically, when we think of competition, we believe that 'the best team wins'. Or 'the most dominant player wins'. Or 'the best idea wins'. I've spent the past 14 blog posts showing that, at least as far as the NCAA Men's Basketball Tournament is concerned, this belief isn't entirely true. It's partially true, of course, because the takeaway of all this work is that the tournament winner is predictable, and consistency in prediction equates to the best team winning.

Having said that, the winner can vary; there can be more than one 'best team'. Much like a democratic election (one that wasn't stolen by a foreign power's interference in a few states in the upper Midwest, made possible by an antiquated method for choosing the winner in which the popular vote winner was not elected), the tournament champion is actually one of a number of candidates. In this case, the votes aren't cast at a ballot box; they are the result of each team's season of wins & losses, as well as how that season interacts with the seasons of all other tournament teams. It is this 'tournament as a whole before self' concept that determines each team's grouping as a Hawk, Owl, Dove, or Dove-Owl. And it is the balance of these four species types that determines which team will win it all. Side note: this 'democratic' method is an unintentional and ironic one for any basketball team or player. It is one that the teams would gladly dispose of if it meant that they could truly 'settle it on the field', independent of their experience over the course of the season relative to the experience of all other teams.

Over this series of posts (14 and counting), I've done quite a few things. But, in general, they revolve around these key themes:

  1. The linear nature of Hawk/Owl competition & how it affects which species is favored to win the tournament as well as the number of major first round upsets - (Chapter 6, Chapter 8, & Chapter 11)

  2. Evolutionary game theoretic simulated fitness separation & how it affects which species is favored to win the tournament - (Chapter 7 & Chapter 8)

  3. The number of small conference champions who cluster & its effect on pushing an 'Underdog' to the later tournament rounds - (Chapter 9)

  4. How the 'Key Player' concept from traditional game theory can be used to make predictions across sports - (Chapter 10)

  5. How Lotka-Volterra predator/prey dynamics can be used to provide insight into why & how evolutionary game theoretic fitness affects tournament results (a sketch of the equations follows this list) - (Chapter 13)
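
If you haven't run into the Lotka-Volterra equations before, here is a minimal, self-contained Python sketch of the classic two-species form. The parameter values are illustrative textbook numbers, not anything fitted to tournament data:

```python
# Classic Lotka-Volterra predator/prey model; parameters are illustrative
# textbook values, not those used in the Chapter 13 analysis.
import numpy as np
from scipy.integrate import odeint

def lotka_volterra(state, t, alpha, beta, delta, gamma):
    x, y = state                       # x = prey density, y = predator density
    dxdt = alpha * x - beta * x * y    # prey grow, then get eaten
    dydt = delta * x * y - gamma * y   # predators grow by eating, then die off
    return [dxdt, dydt]

t = np.linspace(0, 50, 2000)
trajectory = odeint(lotka_volterra, [10.0, 5.0], t, args=(1.1, 0.4, 0.1, 0.4))
prey, predators = trajectory.T         # the familiar out-of-phase cycles
```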

There is also a 6th point, one that has yet to be defined, which relates to the method for determining the tournament champion: how all of this processing of the evolutionary game theoretic simulations relates back to the original indexes used to construct the network. Can these results be used to find the specific spot in the data vector (the table of indexes) where the champion will sit, year after year? The answer is a definite, but qualified, yes.

In order to understand how & when this forced democracy works, I will summarize the overall methodology again to refresh your memory:

  1. Collect the season statistics for all teams invited to the tournament, standardize them, & create the 47 indexes - (Chapter 3)

  2. Using the queries, convert the indexes into two weighted, unipartite networks: one based on the tendency of a team to win at least one game, the other based on the tendency of a team to lose in the first round - (Chapter 3)

  3. Calculate the network centralities, including the Key Player Index, for both networks in Gephi & Python. Using these network & game theory measures, split all teams into one of four species types: Hawks, Owls, Doves, & Dove-Owls (a sketch of the Python side follows this list) - (Chapter 3 & Chapter 2)

  4. Using the breakdown of each species by percent, run 1000 iterations of an evolutionary game theoretic simulation based on a modified version of the Hawk/Dove game (a second sketch below shows the shape of that loop) - (Chapter 4)

  5. Analyze these simulation results using niche theory, ecological game theory, & Lotka-Volterra predator/prey theories - (Chapter 6, Chapter 7, Chapter 8, Chapter 9, Chapter 10, Chapter 11, Chapter 12, & Chapter 13)
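
To make step 3 a bit more concrete, here's a minimal networkx sketch of what the Python side of the centrality work can look like. The edges below are placeholders; the real edges come from the index/rank queries in Chapter 3, and the species cutoffs aren't reproduced here:

```python
# Hypothetical sketch of step 3: centralities on one of the two weighted,
# unipartite networks. The three edges are placeholders; real edges come
# from the index/rank queries described in Chapter 3.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Team A", "Team B", 3.0),   # weight = strength of the query overlap
    ("Team A", "Team C", 1.0),
    ("Team B", "Team C", 2.0),
])

eigen = nx.eigenvector_centrality(G, weight="weight")  # global influence
strength = dict(G.degree(weight="weight"))             # weighted degree
# Cutoffs on these measures, plus the Key Player Index, split the teams
# into Hawks, Owls, Doves, & Dove-Owls (Chapters 2 & 3).
```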
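
And for step 4, here's the shape of a replicator-style loop using the textbook two-species Hawk/Dove payoffs. Treat it only as a sketch; the actual simulation is a modified four-species version, as described in Chapter 4:

```python
# Two-species replicator sketch of the textbook Hawk/Dove game. The real
# simulation is a modified four-species version (Chapter 4); this shows only
# the shape of the 1000-iteration loop.
import numpy as np

V, C = 2.0, 3.0                          # resource value < fight cost
payoff = np.array([[(V - C) / 2, V],     # Hawk vs (Hawk, Dove)
                   [0.0, V / 2]])        # Dove vs (Hawk, Dove)

pop = np.array([0.5, 0.5])               # starting Hawk/Dove proportions
for _ in range(1000):
    fitness = payoff @ pop                 # expected payoff of each species
    pop = pop * fitness / (pop @ fitness)  # replicator update, renormalized

print(pop)                               # converges toward V/C Hawks (~0.67)
```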

As the figures in Chapter 3 show, many of the teams who either win or lose in the first round cluster together around specific Index & Rank combinations, a fact that is the principle behind the queries used to construct the two networks. This includes the tournament winners, who group together hundreds of times. For finding tournament winners, one of the indexes is far and away the most powerful: Index 44.

What makes Index 44 significant, and thus important to isolating which team will win it all, is that it was written as a density function, albeit one modified by another important factor. To calculate it, three of the statistics are added together to form the 'mass' of a team's season, & another three form the edges of a three-dimensional box (x, y, z). Its value, then, before it is put in order and 'ranked', is nothing more than mass / volume, subsequently modified by another statistic.
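
In code, the structure looks something like the sketch below. The specific statistics & modifier are not something I'm publishing, so stat_a through stat_f and modifier are placeholders:

```python
# Structure of the Index 44 'density' calculation. stat_a..stat_f and
# modifier are placeholders; the actual six statistics & the modifier are
# standardized season measures from Chapter 3 and aren't reproduced here.
import pandas as pd

def density_index(df: pd.DataFrame) -> pd.Series:
    mass = df["stat_a"] + df["stat_b"] + df["stat_c"]    # 'mass' of a season
    volume = df["stat_d"] * df["stat_e"] * df["stat_f"]  # x * y * z box edges
    return (mass / volume) * df["modifier"]              # density, then modified

# Rank within each tournament year; assuming denser seasons get lower ranks,
# consistent with Rank 3-4 champions being the 'densest':
# df["index_44_rank"] = density_index(df).groupby(df["year"]).rank(ascending=False)
```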

The reason I used a density function relates to my theory of how the tournament works, as well as to the method I use to standardize the statistics from the beginning, both of which relate to this idea of a democratic vote for who the tournament winner will be. I believe the tournament, and all competitive sports for that matter, to be much like a block of granite. The physical essence of that block, in this metaphor, is the set of statistics I use to construct the indexes. They constitute what is inside the block, waiting to be discovered. The math I use, then, is essentially a sledgehammer that shatters the block into tiny pieces, revealing shards whose dimensions are in the units of the base statistics used to construct the indexes. Like a block of granite hit with a sledgehammer, the breaks occur along fault lines that are already within the stone.

Thus, finding the champion is nothing more than comparing each of these shards for similarity to previous years' champions. And Index 44 is the quickest (most parsimonious) tool for finding those champions. What I've yet to show, however, is that the game theory simulation results can be used to identify the precise ranks in this index where the champion will sit for any given tournament year.

Figure 1 is another cluster map, similar to what I've been using from Chapter 11 on. It shows the slope of the best-fit regression line for all possible two-year combinations of all possible species combinations (i.e. 2005-2006 Hawk & Owl, 2005-2006 Hawk & Dove-Owl, 2005-2006 Hawk & Dove, 2006-2007 Hawk & Owl, 2006-2007 Hawk & Dove-Owl, etc.). Because the measure used to create Figure 1 is a regression slope, tournament year is used for both the Index (vertical) & measure (horizontal) edges in Figure 1.


Figure 1. Cluster Map, All Tournament Years & All Species, Regression Slope
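
For anyone who wants to reproduce the style of Figure 1, here is a schematic seaborn version. The slope matrix is filled with random placeholders so the sketch runs; the real cells hold the species-pair regression slopes:

```python
# Schematic of a Figure 1-style cluster map. Each real cell holds the
# best-fit regression slope for one species pair across a pair of tournament
# years; here random placeholders fill the matrix so the sketch runs.
import numpy as np
import pandas as pd
import seaborn as sns

years = list(range(2005, 2019))
rng = np.random.default_rng(42)
slopes = pd.DataFrame(rng.normal(size=(len(years), len(years))),
                      index=years, columns=years)
# In practice: slopes.loc[y1, y2] = scipy.stats.linregress(series_y1, series_y2).slope

grid = sns.clustermap(slopes, cmap="vlag", center=0)
# The dendrogram drawn along the margins supplies the 'hierarchical lines'
# used below to read off the year clusters.
```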

If you look along the bottom at the far left & far right, you'll see two clusters, as identified by the hierarchical lines that matplotlib/seaborn place along the top of Figure 1. On the left, the first cluster includes 2012, 2007, & 2013, while on the right the second includes 2010, 2018, 2015, 2009, & 2016. Each of the champions in these two clusters of tournaments, except for Florida in 2007, falls in a single query involving Index 44, Ranks 3-4. The other teams that fall on these two ranks are from other years, and they do not win the tournament. Thus, the regression slopes, which describe the competition among species across all possible pairs of tournament years, 'pick' the champion for these two clusters of tournaments.

If you reconsider the fact that Index 44 is based on the idea that each team's season can be expressed in terms of its 'density', what these results are saying is that the competition over the regular season has created a tournament population of Hawks, Doves, Owls, & Dove-Owls whose competition (measured by the slope of the regression) is capable of supporting a given 'density' of champion. It's as if the other shards from the granite block have all piled up and form enough of a support that the champion can rest on top of that pile.

I use those words intentionally, because within the remaining cluster (2011, 2006, 2008, 2005, & 2017), the champion is a couple of steps 'lighter' in density than in the other clusters. Actually, there are multiple considerations in these years, including how many small conference champions have clustered (Chapter 9). So, selecting the champion is not as precise for these years as it is in the two clusters associated with Rank 3-4. Having said that, the teams still group together.

In 2005, 2008, & 2006, the champion falls on either Index 44 Rank 7 or Index 44 Rank 9, the same rank as the 2007 Florida team that is the exception to the Index 44 Rank 3-4 rule described above. Finally, in 2011 & 2014, the champion sits, literally, at the median of the distribution in another index (Index 35, Rank 35), the same location as the 2006 UCLA team that was runner-up to Florida.

Enough words; a picture is worth a thousand of them. Table 1 shows the 3 primary clusters and should help illustrate how they are all related. Obviously, not all of the runners-up are listed in these three clusters: 2006 Florida is missing, along with 2011 Butler, 2013 Michigan, 2014 Kentucky, 2015 Wisconsin, 2016 North Carolina, & 2018 Michigan. There are other teams that should be included to present a full picture of how these clusters work together. For example, both Gonzaga & Arizona from 2015, who lost to Duke & Wisconsin, respectively, fall within the Index 44 Rank 7-9 block. Likewise, Duke from 2013, who lost to the champion Louisville in the Elite 8 that year, falls in this block as well. This suggests that the relationship isn't perfect; that seeding plays a role.


Table 1. Index 44 & Championship Clusters, by All Tournament Years & Species, Regression Slope

But the general relationship holds. Cluster 1, the most dominant and most precise, covers years when the densest champion (in terms of the density function) falls on Index 44 Rank 3 or 4. Cluster 2 covers years where the overall population distribution doesn't support the weight of a density as large as Rank 3-4; its champions fall on Index 44 Ranks 7-9. Note that the years a champion falls between Ranks 7 & 9 are all years where the small conference champions cluster at 4 or fewer.

The last, and least dense, years are in Cluster 3. They fall outside of these two ranges and always involve a team that sits at the median of the tournament (Index 35, Rank 35). Each of the years in Cluster 3 has a small conference champion cluster of 5 teams. Florida from 2006 falls in this group as well, even though it isn't listed in Table 1; 2006 Florida falls on Index 44 Rank 10, which is by definition less dense because of how the index works. A hypothetical summary of all three cluster rules is sketched below.
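
Pulling the three clusters together, a helper encoding these rules might look like this (the function and its inputs are my own naming, for illustration only):

```python
# Hypothetical encoding of the three cluster rules. cluster comes from the
# Figure 1 slope groupings; n_small_conf is the small-conference-champion
# cluster size from Chapter 9.
def champion_candidates(cluster: int, n_small_conf: int) -> dict:
    if cluster == 1:                        # densest champions
        return {"index": 44, "ranks": [3, 4]}
    if cluster == 2 and n_small_conf <= 4:  # lighter champions
        return {"index": 44, "ranks": [7, 8, 9]}
    if cluster == 3 and n_small_conf == 5:  # champions at the tournament median
        return {"index": 35, "ranks": [35]}
    return {}                               # no clear signal for this year
```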


Table 2. Cluster 3, Final Four/Championship Hot-Spot

Table 2 illustrates how specific index ranges can vary from year to year but seem to 'flag' when certain tournament structural factors are present. It shows Index 7, Ranks 22-30 & Index 45, Ranks 11-16. As you can see, it includes teams who have won the entire thing & teams who have lost in the first round.


Table 3. Cluster 3, Final Four/Championship Hot-Spot, Cluster 5 Years

When you limit the teams in Table 2 to only those years where 5 small conference champions cluster, its strength increases, much like the Index 44, Ranks 3-4 query shown in Table 1. Table 3 shows the subset of Table 2 for years where 5 small conference champions have clustered (2006, 2011, 2014, & 2018). As you can see, all of the teams, aside from Florida in 2011 & Duke in 2018, made the Final Four. If you recall, Florida lost to Butler in the Elite Eight. Likewise, Duke lost to Kansas, who is also in Table 3, in the Elite Eight. What's interesting about these two games is that they conform to the pattern where Hawks always win in the Elite Eight when they play an Owl. In both situations, Duke last year & Florida in 2011, the losing teams were Owls playing a Hawk (Butler in 2011 & Kansas in 2018). So, even within the exceptions, there is a pattern.
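
For concreteness, the Table 2 to Table 3 restriction is just a filter. Here's a sketch with placeholder column names:

```python
# Table 3 as a filter on Table 2; column names (year, index, rank) are
# placeholders for however the team/rank table is actually laid out.
import pandas as pd

def table3_subset(teams: pd.DataFrame) -> pd.DataFrame:
    """Restrict the Table 2 hot-spot to the cluster-of-5 years."""
    hot_spot = teams[((teams["index"] == 7) & teams["rank"].between(22, 30)) |
                     ((teams["index"] == 45) & teams["rank"].between(11, 16))]
    return hot_spot[hot_spot["year"].isin([2006, 2011, 2014, 2018])]
```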

Returning to the topic of competition between the median and the 'densest' teams, the interplay between the median and the two density-based clusters found on Index 44 is a running theme in all tournaments. If you take the range around the median on Index 35, from Rank 32 to 36, and run a T-Test on the number of wins using the two groups identified in Figure 1 above (Group 1 - 2007, 2009, 2010, 2012, 2013, 2015, 2016, & 2018; Group 2 - 2005, 2006, 2008, 2011, 2014, & 2017), you get a significant result (p<0.05). The difference is even more pronounced if you compare the number of Elite 8 teams in each group vs. what would be expected at random across any 5 ranks (32-36 is five ranks) in the database (Group 1 - p<0.31; Group 2 - p<0.001). Thus, in years where the competition favors a very 'dense' team as champion (Cluster 1 in Table 1 above & Group 1 from Figure 1), the median teams are just average. But in years where that competition (Group 2 in Figure 1) hasn't developed, teams in this part of the vector are significantly strong as measured by the number of Elite 8 teams they produce (p<0.05). Table 4 shows some of the teams that fall at the median in Group 2 years.
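
Here's a sketch of that comparison using scipy; the win counts are placeholders standing in for the Rank 32-36 teams in each group of years:

```python
# Sketch of the Group 1 vs. Group 2 comparison at the median (Index 35,
# Ranks 32-36). The win counts below are placeholders, not the real data.
from scipy.stats import ttest_ind

# tournament wins per team falling on Ranks 32-36, pooled by group of years
wins_group1 = [0, 1, 0, 2, 1, 0, 1, 0]   # placeholder values
wins_group2 = [3, 2, 0, 4, 1, 5]         # placeholder values

t_stat, p_value = ttest_ind(wins_group2, wins_group1, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```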


Table 4. Median Teams by Figure 1 Grouping

As Table 4 shows, the median teams in Group 2 are much stronger than those in Group 1. There are 5 champions or runners-up in these years (p<0.00001) and a number of very strong teams who lost to the eventual champion (Wisconsin vs. North Carolina in 2005; Xavier vs. Kansas in 2008; West Virginia vs. Gonzaga in 2017; & Villanova vs. North Carolina in 2005). Conversely, the number of strong teams that fall within this range is much smaller in Group 1 & not significant (p<0.5). Of note: it's not that these teams aren't good in Group 1 years; it's that they aren't able to carry that strength past the first weekend at statistically significant levels. In fact, the only statistically significant relationship in Group 1 is the number of highly seeded teams that lose in the first round (p<0.001).

I believe what is happening here is simply this: In years where the competition doesn't support a very dense team as champion, the median (or average) team is stronger. In other words, competition 'reverts to the mean'.

So the system comes full circle - stats create indexes, which become two networks, which identify populations pulled from a different theory (Evolutionary Game Theory), whose simulations then point back to the original stats & indexes. That was the goal of this project - to verify the patterns I was seeing in the indexes & ranks using an accepted, published theory.

<-- Chapter 13 | Chapter 16 -->

#KeyPlayer #Seaborn #NCAATournament #NCAA #MensCollegeBasketball #NetworkAnalysis #Matplotlib #ClusterMap #basketball #EvolutionaryGameTheory #MonteCarloSimulations
