by Jess Behrens
© 2005-2019 Jess Behrens, All Rights Reserved
Many of my earlier posts included one or more Seaborn clustermaps as illustrations, which I then used to make points about how the different tournament years in my study clustered together. These groupings of tournaments were also tied to conclusions about which index/rank combinations were 'favored' in a given year, with the implicit assumption that the affected teams' realized strength was a direct result of the species breakdown in that given year.
I'm going to revisit one of those, with the same goals, but also the caveat that the previous clustermaps were based upon flawed Evolutionary Game Theory species designations. Those designations were built on a network weighting methodology that over-weighted specificity to the detriment of sensitivity. As I pointed out in recent posts, I've removed many of the problematic 'queries' I use to build the network & restructured how the network is weighted based on these queries.
The clustermap I am presenting here was created in the same manner as previously: using the python library Seaborn. It is based on the results of the same 20,000 Evolutionary Game Theory simulations used to construct the Lotka-Volterra figures presented in Lesson 3. The clustermap shown in Figure 1 includes tournament year as both a column and a row. This is not a mistake; it reflects the fact that regressions of each species against every other species are run in both directions. So, for example, Doves are used as both the dependent & independent variable for Hawks (Owls, Dove-Owls, etc.) in separate regressions for each of the 15 tournaments. Consider 2005 & 2006. For any given combination of EGT species' energy totals, the relationship between 2005 & 2006 is included twice, once for scenario 1 (2005 -> 2006) & once for scenario 2 (2006 -> 2005). In both cases, the first year is the 'row' year and the second the 'column' year. This clustermap, and any additional clustermaps I present in the future, will use either the regression slope or r-value as the basis for clustering.
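To make the mechanics concrete, here is a minimal sketch of how such a year-by-year matrix and clustermap can be assembled. The energy arrays below are randomly generated stand-ins, not my actual simulation output; only the general recipe (directional pairwise regressions, then `sns.clustermap`) is the point. Note that the r-value is symmetric between the two directions, while the slope is not, which is why a slope-based clustermap can differ above and below the diagonal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = list(range(2005, 2020))  # the 15 tournaments, 2005-2019

# Stand-in for per-iteration species energy totals from the EGT simulations
energy = {y: rng.normal(loc=100.0, scale=10.0, size=500) for y in years}

# Directional regressions: each 'row' year regressed against each 'column' year
slope = np.zeros((len(years), len(years)))
rval = np.zeros_like(slope)
for i, yi in enumerate(years):
    for j, yj in enumerate(years):
        fit = stats.linregress(energy[yj], energy[yi])  # row ~ column
        slope[i, j] = fit.slope
        rval[i, j] = fit.rvalue

# Clustering on the r-value matrix (requires seaborn & pandas):
# import pandas as pd, seaborn as sns
# sns.clustermap(pd.DataFrame(rval, index=years, columns=years))
```

The seaborn call is left commented so the sketch runs with only numpy/scipy installed.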
In a departure from the previous clustermaps, four additional energy totals, which are derived from the energy totals developed during the EGT simulations, were also included as 'species' for regression.
Table 1. Additional Totals Included as 'Species' in Regression Analysis
Inclusion of these 4 totals was done because the EGT simulations are not a true tournament. Three of the four EGT species share energy as part of their 'strategy'. In each randomly generated 6 round iteration of the simulation, much of the energy would still be available for 'sharing' if it were allowed to continue. If some of the unique features of the tournament do derive from the interaction of the EGT species as they are defined here, then perhaps using this 'leftover' energy as another 'species' in the regression analysis could help identify significant tournament phenomena? As you can see in Table 1, the Owl Total energy is the beginning point for each of these systemic, derived values. This is because Owls are the dominant species in the tournament, as was shown in the Lotka-Volterra plots.
These clustermaps will be presented a bit differently from previous posts. I will back up my assertions with statistical evidence that shows:
1. The seeds of teams included in each group being compared are NOT statistically different (T-Test).
2. The number of wins earned by the teams in each group ARE statistically different (T-Test).
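The two checks above can be sketched with `scipy.stats.ttest_ind`. The seed and win arrays below are made-up placeholders chosen to illustrate the pattern I'm testing for, not my actual data:

```python
from scipy import stats

# Hypothetical seeds of the teams in each group being compared (placeholder values)
seeds_group1 = [1, 4, 7, 8, 11, 13]
seeds_group2 = [2, 5, 6, 9, 10, 12]

# Hypothetical tournament wins earned by the same teams
wins_group1 = [0, 1, 1, 0, 1, 0]
wins_group2 = [2, 3, 2, 4, 3, 2]

# 1. Seeds should NOT differ (we want a large p-value)
t_seed, p_seed = stats.ttest_ind(seeds_group1, seeds_group2)

# 2. Wins SHOULD differ (we want a small p-value)
t_wins, p_wins = stats.ttest_ind(wins_group1, wins_group2)

print(f"seeds: p = {p_seed:.2f}; wins: p = {p_wins:.4f}")
```

When both conditions hold, the difference in wins can't be written off as one group simply being seeded better.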
My hope is that doing this will add 'weight' (statistics pun intended - get it? get it?) to the conclusions.
As I've demonstrated throughout all of these posts, Hawks & Owls are, by design, the strongest of the EGT strategies. And, as you might expect, a clustermap involving these two species can be used to make some significant predictions about specific Index/Rank combinations.
Figure 1. Seaborn Clustermap, Owl vs. Hawk, Regression r-value
The first, and probably the most important, involves the location of the tournament champion & runner-up. As you can see, the hierarchical tree at the top divides the columns into 3 primary 'clusters'. From left to right, the first runs from 2015 - 2019, the second from 2005 - 2013, & the third from 2009 - 2018. Of the 15 tournaments for which I have data, 9 of the champions or runners-up fall on a single index/rank combination - Index 44, Rank 3. If you look at Figure 1, from right to left, those are the years running from 2018 to 2010. The lone exception is 2017, when Kansas, who was on that rank, lost to Oregon in the Elite 8. Note: Because in this case tournament champion & runner-up are community (system) wide phenomena, I'm not going to present a T-Test of the seeding because it involves the entire tournament. The significance of having 9 out of 30 tournament champions/runners-up land on precisely the same rank is p < 0.000001, however.
Within the 9 tournament years just mentioned, three of the teams falling on this rank were runners-up (2005, 2007, & 2008). And, of course, as tournament historians will know, in 2008, overtime was needed to determine a winner. The team who occupied Index 44, Rank 3, Memphis, still lost, but the fact that they pushed Kansas to overtime argues in favor of the pattern. Furthermore, of the 10 years I mentioned, subtracting out 2017 (which, as noted earlier, definitely doesn't follow this pattern), six of the remaining 9 years include teams with the EXACT SAME RANKS playing in the final - Index 44, Rank 3 & Index 44, Rank 7. Those years are 2005, 2007, 2008, 2009, 2010, & 2012. Of the remaining 3, 2018 & 2013 include teams that are off by 1 (Index 44, Rank 3 & Index 44, Rank 11 or Rank 12). My assertion is that the competition within the tournament during these years is primarily a function of the percentage of Owls & Hawks. Further, it is the specific breakdown of these two species that produces this pattern of Tournament finalist location irrespective of seeding. In other words, a specific breakdown of Hawks & Owls that falls within the group leads to a degree of competition within the tournament that 'selects' the team who falls on Index 44, Rank 3.
The second Tournament feature that is borne out by Figure 1 involves the presence of a tournament dark horse. Dark horse teams typically fall in small, specific Index/Rank ranges within my data vector. But a special run doesn't occur every year. I say all of this to mean that there are candidates who COULD be that team each year, but some years a run fails to materialize. As I will show you, two of those 'special' ranges correlate with the clusters shown in Figure 1. Among the teams you may recall who fall within these ranges are 2011 VCU, 2014 Dayton, 2017 South Carolina, 2016 Notre Dame, 2018 Kansas State & Florida State.
If we divide Figure 1 into two clusters of columns, left & right, the left would run from 2015 - 2019 & the right from 2005 - 2018. The Index/Rank range in which the teams I'm describing here lie is Index 16, Ranks 18-28. If you use a 2 Tailed T-Test to analyze the seeding of the teams that fall in these ranges, divided into two arrays based on the two clusters in Figure 1, the significance is p < 0.41. There is no statistical difference. The teams that fall in this range are, essentially, from the same population of seeds regardless of tournament year. They run from a 1 seed to two 13 seeds.
The range Index 16, Ranks 18-28 further divides into 2 groups:
1. Index 16, Ranks 18-22; Seed T-Test, 2 tailed - p < 0.91
2. Index 16, Ranks 23-28; Seed T-Test, 2 tailed - p < 0.56
Both of these, again, are not statistically different in terms of the distribution of seeds, as the above list points out. However, when you then move to compare the tournament results of the teams falling in these two ranges, the difference does become significant. Comparing the two ranges to one another within each cluster:
1. Cluster 1, 2 Tailed, T-Test of Wins - Index 16, Ranks 18-22 vs. Index 16, Ranks 23-28: p < 0.008
a. Ranks 23-28 do better than Ranks 18-22 on average.
2. Cluster 2, 1 Tailed, T-Test of Wins - Index 16, Ranks 18-22 vs. Index 16, Ranks 23-28: p > 0.10
a. Ranks 18-22 do better than Ranks 23-28 on average.
Of the two, Cluster 1 is far more significant, but significance is still (slightly) present in Cluster 2, with the success of each range switched based on Cluster. When you examine the Hawk/Owl breakdown in terms of total percent of the tournament, Cluster 1 averages more Hawks than Cluster 2. Most of the teams that fall in Index 16, Ranks 23-28 & who make these surprise long runs are Hawks.
1. Cluster 1, Index 16, Ranks 18-22: Total 10 Hawks; Expected 7.21 Hawks; Poisson < 0.89
2. Cluster 2, Index 16, Ranks 18-22: Total 2 Hawks; Expected 3.17 Hawks; Poisson < 0.39
3. Cluster 1, Index 16, Ranks 23-28: Total 9 Hawks; Expected 8.65 Hawks; Poisson < 0.64
4. Cluster 2, Index 16, Ranks 23-28: Total 3 Hawks; Expected 3.81 Hawks; Poisson < 0.48
As you can see, the Poisson value for Hawks within these ranges conforms to expectations, given their total percentage across the years included in each of the clusters. This, of course, reflects the fact that Hawks are a greater percentage of the community of teams in Cluster 1 than in Cluster 2, which means that many of the 'extra' Hawks fall in this range. Thus, given that the seeding distribution is not significantly different, the difference in tournament narrative often hinges on the breakdown of Hawks & Owls, with teams in one range favored in years with more Owls & the other favored in years with more Hawks. Many of the 'huge' tournament stories seem to be tied to Evolutionary Game Theory species breakdown.
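For readers wondering where the 'Poisson <' values come from: the listed numbers are consistent with the cumulative Poisson probability of observing at most the counted number of Hawks given the expected count. That interpretation is my own inference from the figures, so treat this as a sketch under that assumption:

```python
from scipy.stats import poisson

# (observed Hawks, expected Hawks) for each cluster/range pair
cases = {
    "Cluster 1, Ranks 18-22": (10, 7.21),
    "Cluster 2, Ranks 18-22": (2, 3.17),
    "Cluster 1, Ranks 23-28": (9, 8.65),
    "Cluster 2, Ranks 23-28": (3, 3.81),
}

for label, (observed, expected) in cases.items():
    # P(X <= observed) when X ~ Poisson(expected)
    p = poisson.cdf(observed, expected)
    print(f"{label}: Poisson < {p:.2f}")
```

Values near 0 or 1 would signal far fewer or far more Hawks than the cluster-wide percentage predicts; the mid-range values here are what "conforms to expectations" means.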
Another range that conforms to the 2 Clusters seen in Figure 1 falls on Index 23, Ranks 62-63. Like the two ranges identified in Index 16, the teams who fall on these ranks are associated with long, dark horse tournament runs. Some of the teams that landed on these ranks are 2016 Syracuse, 2015 Michigan State, & 2017 Xavier.
1. Index 23, Ranks 62-63; Seed T-Test, 2 tailed - p < 0.22
2. Index 23, Ranks 62-63; Wins T-Test, 2 tailed - p < 0.05
3. Cluster 1; Index 23, Ranks 62-63; Total 4 Hawks; Expected 2.88; Poisson < 0.84
4. Cluster 2; Index 23, Ranks 62-63; Total 2 Hawks; Expected 1.27; Poisson < 0.87
However, unlike the Index 16 example, there is only one range here, split between the two clusters. Cluster 1 teams do very well & Cluster 2 teams do only average. As with the two Index 16 ranges, the number of Hawks conforms to expectations.
The point of all of this, again, is to show how Evolutionary Game Theory is a powerful tool for understanding important Tournament phenomena. It also points to the fact that the output from regression analysis of the EGT simulation results can be used to find 'Clusters' of tournaments where the balance of certain species (in this case, Hawks & Owls) within the tournament community as a whole 'selects' some teams over the rest of the community. I plan to present more of these clustermaps in a similar form if doing so identifies additional tournament phenomena.