Everything posted by WheeloRatings
-
The AFL's API is available again so here is Richard Little's analysis of this week's game/round.
-
I'm not sure I'd go as far as saying there's a real problem. Geelong has had more scoring shots than Carlton and conceded fewer scoring shots over their 10 games, and Carlton has only won 50% of their quarters. If Carlton's games against Hawthorn and Port Adelaide had gone much longer, they could quite possibly have lost both, which would put them 8th. They've done very well, but their ladder position probably flatters them a little at this stage. I provided some additional explanation in my other post:
-
A lot of the models take into account margin of victory (as well as opposition and venue), not simply win/loss. Carlton's percentage is the worst of the top 8 teams. They beat Hawthorn by 1 point, Port Adelaide by 3 points (with fewer scoring shots) and the Western Bulldogs by 12, also with fewer scoring shots. Those wins are all worth 4 points on the ladder, but most of the models probably won't see a 1-point win as being much better than a 1-point loss. Luck is a factor in close games, and if they had lost to Port and Hawthorn they'd currently be 8th on the ladder.

The primary purpose of the ratings in these models is to predict future matches, not simply rank how teams have performed this year, and putting too much weight on a 1-point win or on recent matches doesn't help with predicting future matches. As such, the models are somewhat conservative, but if Carlton is genuinely a top 4 team this year, that will most likely be reflected in the ratings later in the year.

Another factor is that Carlton started the season from a lower base, given the ratings are generally dependent on the previous season. This may still be dragging down their rating in some models, although I haven't quantified how much of an effect it is still having in mine. Their rating in my model has gone from -4.3 after the loss to Fremantle in round 6 to +7.0 currently, so at that rate they could be up to 4th in my model within a week or two (they're currently 7th).
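To make the "margin, not just win/loss" point concrete, here is a minimal sketch of a margin-based rating update in the spirit of the models described above. The parameter values (the update fraction `k` and the home-ground advantage) are illustrative assumptions, not any specific Squiggle model's settings.

```python
# Minimal margin-based rating update (Elo-style, on a points scale).
# k and home_adv are illustrative assumptions, not a real model's values.

def expected_margin(rating_a: float, rating_b: float, home_adv: float = 8.0) -> float:
    """Predicted margin (in points) for team A at home against team B."""
    return rating_a - rating_b + home_adv

def update_rating(rating_a: float, rating_b: float, actual_margin: float,
                  k: float = 0.1, home_adv: float = 8.0) -> float:
    """Move team A's rating toward the result by a fraction k of the error.

    Because the update uses the full margin rather than just win/loss,
    a 1-point win barely differs from a 1-point loss - which is exactly
    why these models are conservative about narrow victories.
    """
    error = actual_margin - expected_margin(rating_a, rating_b, home_adv)
    return rating_a + k * error
```

For example, a 1-point home win against an equal opponent actually nudges the rating down slightly (about -0.7 here), because the model expected a win by roughly the home advantage; a 31-point win nudges it up by about +2.3.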
-
Yeah, the "Aggregate" Power Rankings is basically the average of the ratings across all the individual Squiggle models. Given the models use different methods (and rating scales), the ratings are standardised and then averaged. Here is a graphic I created from the Squiggle API which shows the variability in some models' ratings of certain teams.
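The "standardise then average" step can be sketched as follows: convert each model's ratings to z-scores so the different scales are comparable, then average per team. The team names and numbers here are made up for illustration; this is just one plausible way to do it, not necessarily Squiggle's exact method.

```python
# Standardise each model's ratings to z-scores, then average per team.
# A simple way to aggregate models that use different rating scales.
from statistics import mean, pstdev

def standardise(ratings: dict[str, float]) -> dict[str, float]:
    """Convert one model's ratings to z-scores (mean 0, sd 1)."""
    mu, sigma = mean(ratings.values()), pstdev(ratings.values())
    return {team: (r - mu) / sigma for team, r in ratings.items()}

def aggregate(models: list[dict[str, float]]) -> dict[str, float]:
    """Average the standardised ratings across models, per team."""
    zs = [standardise(m) for m in models]
    return {team: mean(z[team] for z in zs) for team in zs[0]}
```

Because each model is reduced to the same scale first, a model that happens to use big rating numbers doesn't dominate the average.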
-
Sparrow's accuracy looks a lot worse when you include the six shots at goal which didn't register a score. Here are all of Melbourne's players to have taken at least five shots this year, noting of course that raw accuracy doesn't take into account where players take their shots from, nor the pressure they're under when they take the kick. This is where Champion Data's expected score can be useful, but they don't make the data available for individual players. Here are the least accurate players in the AFL this year among those who have taken at least ten shots. Notes: for the purpose of comparison to xScore, Score excludes rushed behinds. xScore sourced from the Herald Sun / Champion Data via https://twitter.com/OliverGigacz/status/1528338101042806785?s=20&t=LvFmdS4tCVKdBsutoZc2pg
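The denominator point is worth making explicit: raw accuracy (goals as a share of shots) changes a lot depending on whether scoreless shots (out on the full, touched, etc.) are counted. A tiny sketch, with invented numbers rather than Sparrow's actual figures:

```python
# Raw shot accuracy, with and without scoreless shots in the denominator.
# Numbers used in the example are illustrative only.

def accuracy(goals: int, behinds: int, no_score: int = 0) -> float:
    """Share of shots at goal that were goals."""
    return goals / (goals + behinds + no_score)
```

For instance, 10 goals from 10.10 looks like 50% accuracy, but adding six scoreless shots drops it to 10/26, about 38% - the same effect described above.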
-
I agree with your points, although that West Coast v Carlton game was only played at the MCG due to the agreement that a match had to be played at the MCG each week of finals. West Coast was the designated home team in that match.
-
Yeah, Melbourne certainly had their fate in their own hands in 2004 and should never have finished 5th. The paper goes into the home ground advantage, as you would've seen, but I guess the main thing it illustrates is the drop-off from 4th to 5th. This is evident when looking at Preliminary Final appearances, with 3rd and 4th making the PF 18 times compared to three and four times respectively for 5th and 6th. This is of course what you want - the best four teams playing in the Prelims.

As you have noted though, there has been a clear difference in finals results between 3rd and 4th. I have had a look at how far teams finishing in each ladder position have made it in the finals since 2000, and (in addition to not winning a Premiership) 4th has only made the Grand Final three times, compared to 15, 12 and 11 times for 1st, 2nd and 3rd respectively. Interestingly, 5th has a 12-10 record against 8th in Elimination Finals, but 6th has a 14-8 record against 7th.

The results for the top 4 based on the outcome of the Qualifying Finals are interesting. Teams in 3rd and 4th have each lost the QF 16 times, and both ladder positions have made it to the Prelim 12 times. However, teams in 4th after losing the QF have a 1-11 record in the PF, whereas teams in 3rd after losing the QF have a 5-7 record in the PF and a 4-1 record in Grand Finals. Your hypothesis of 4th having a tougher finals draw does make sense and is certainly backed by the numbers.
-
The main thing I liked was that the probability of winning the premiership was more in line with ladder position. Currently, the only advantage to finishing 1st over 4th or 5th over 8th is the home advantage and there's a big drop off from 4th to 5th. I think my feelings probably stem from 2004 when Melbourne lost to Essendon in the elimination final! 😂 I am not advocating going back to that system though. Source: https://researchbank.swinburne.edu.au/file/ca205430-48c0-4721-8dd7-b05b36f4df8b/1/PDF (Published version).pdf
-
Sorry, I did think that might be the case 😊 Median and mode would probably work for individual models, but it's harder with Squiggle given each model uses a different number of simulations. I'm always partial to mean 😉 Yeah I'm not sure I'm totally over Adelaide winning that premiership either, but last year certainly helped! I must admit, whilst the McIntyre final 8 system wasn't perfect, I did prefer certain aspects of it.
-
No it went to a top 6 in 1991 and top 8 in 1994. The AFL moved to the current final eight system in 2000 (from the McIntyre final eight system used from 1994-1999).
-
Well yes, I have them most likely to finish with 19 wins, followed by 18, then 20 - 18.4 wins on average.
-
-
Yes and yes.

Melbourne's biggest wins v West Coast:
- 74 - 2022, Optus Stadium
- 70 - 2000, Subiaco
- 61 - 1987, MCG
- 61 - 2000, Colonial Stadium (Marvel Stadium)

West Coast's lowest scores in Western Australia:
- 5.8.38 v Melbourne - 2022, Optus Stadium
- 6.5.41 v Geelong - 2013, Subiaco
- 6.6.42 v Richmond - 2014, Subiaco
- 5.13.43 v Adelaide - 2013, Subiaco

West Coast's lowest scores v Melbourne:
- 5.8.38 - 2022, Optus Stadium
- 5.15.45 - 2008, MCG
- 9.7.61 - 1990, MCG
- 9.9.63 - 2021, Optus Stadium
-
Here are Melbourne's expected margin and win percentage for each game based on my model. Melbourne are favourites in every game except the last one against Brisbane at this stage. The 3.6 losses figure is based on the win probability for each match - a 50-50 game counts as 0.5 wins and 0.5 losses. By the way, Melbourne's probability of winning all 22 games is 3.8%! 😁
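The arithmetic behind both figures is simple: expected wins is the sum of the per-game win probabilities, and (assuming the games are independent) the chance of winning every game is their product. The probabilities below are placeholders, not my model's actual numbers.

```python
# Expected wins = sum of per-game win probabilities;
# P(win every game) = product of those probabilities (independence assumed).
from math import prod

def expected_wins(win_probs: list[float]) -> float:
    return sum(win_probs)

def prob_all_wins(win_probs: list[float]) -> float:
    return prod(win_probs)
```

So a team rated an 85% chance in each of its 22 games would project to about 18.7 wins, yet still only has roughly a 2.8% chance of going undefeated - the product shrinks quickly even when each individual probability is high.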
-
Yeah I have them at 12.4 and 11.0. https://www.wheeloratings.com/afl_simulations.html There's definitely a gap opening up, but having said that, this time last year I had Richmond at 12.8 wins and a 68% chance of top 8 and GWS at 10.9 wins and a 38% chance of top 8.
-
Based on my simulations, 12 wins (& 0 draws) gives you about a 54% chance of making the top 8 and would likely then come down to percentage. 48 points and a percentage of 106 is roughly 50-50 to make the top 8.
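A stripped-down sketch of how a figure like "54% chance of top 8 on 12 wins" can be produced by simulation: play out each team's remaining games from per-game win probabilities, rank by total wins, and count how often the team of interest lands in the top 8. This simplified version ignores percentage (using a random tiebreak instead), treats games as independent rather than pairing teams off against each other, and all inputs are invented - a real simulation is considerably more careful.

```python
# Simplified Monte Carlo ladder simulation: ignores percentage and treats
# each team's remaining games as independent coin flips. Illustrative only.
import random

def simulate_top8(current_wins: dict[str, int],
                  remaining: dict[str, list[float]],
                  team: str, n_sims: int = 10_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    made_top8 = 0
    for _ in range(n_sims):
        # Simulate each team's remaining games and add to current wins.
        totals = {
            t: current_wins[t] + sum(rng.random() < p for p in remaining[t])
            for t in current_wins
        }
        # Rank by wins; a random tiebreak stands in for percentage.
        order = sorted(totals, key=lambda t: (totals[t], rng.random()),
                       reverse=True)
        if team in order[:8]:
            made_top8 += 1
    return made_top8 / n_sims
```

Running thousands of seasons like this and reading off the fraction in which a team finishes top 8 is the essence of the approach, even if the real model simulates actual fixtures and tracks percentage properly.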
-
In simple terms, the attack metric is based on whether teams score above or below average, and the defence metric is based on whether teams concede above or below average. Instead of using a team's actual score, their score is calculated as a weighted average of their actual score and the score they would have kicked had they kicked at an expected accuracy. So a team that scores 9.22.76 (like Melbourne did against Richmond) will be credited with a much higher score than 76. I don't have Melbourne's adjusted score at hand, but the adjusted margin was 60 points instead of 22, which better reflects their dominance.

The attack and defence metrics are updated following each match by comparing these "adjusted" scores to the expected scores. If a team's adjusted score is higher than expected, their attack rating increases (and vice versa). If their opponent's adjusted score is higher than expected, their defence rating decreases (and vice versa). I have an unfinished page on my site which provides some more detail: https://www.wheeloratings.com/afl_methodology.html

In addition, teams carry over ~65% of their rating from the previous year, so Melbourne's overall rating dropped from 32.1 at the end of 2021 to 20.5 at the beginning of 2022. FYI, this is Melbourne's rating progression since round 22 last year, which shows their rating is almost back to the pre-Grand Final level. Let me know if you need any additional information!
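The adjusted-score idea can be sketched in a few lines: blend the actual score with the score the team would have kicked had every shot been converted at an expected accuracy. The expected accuracy and blend weight below are illustrative assumptions, not my model's actual parameters.

```python
# "Adjusted score": weighted average of the actual score and the score at
# an expected shot-conversion rate. expected_accuracy and weight are
# illustrative assumptions, not the real model's parameters.

def adjusted_score(goals: int, behinds: int,
                   expected_accuracy: float = 0.53,
                   weight: float = 0.75) -> float:
    scoring_shots = goals + behinds
    actual = 6 * goals + behinds
    # Score if every shot were converted at the expected accuracy:
    # goals are worth 6 points, misses 1.
    at_expected = scoring_shots * (6 * expected_accuracy + (1 - expected_accuracy))
    return weight * at_expected + (1 - weight) * actual
```

With these placeholder parameters, a wasteful 9.22.76 is credited with an adjusted score well above 76 (its 31 scoring shots do most of the talking), while an unusually accurate 10.2.62 is adjusted downward - which is the behaviour described above.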
-
Unfair fixtures for Top 4 contenders
WheeloRatings replied to ElDiablo14's topic in Melbourne Demons
I haven't omitted the NT games - they're included in the neutral state games. Maybe the AFL has taken it into account; on the face of it, it would appear that Melbourne hasn't really been negatively impacted by selling home games. I wasn't proposing a solution to the fixture. I was merely stating that ladder positions are not necessarily a true reflection of a team's quality over a season, given the inequalities in the fixture, and there isn't a linear relationship either. I was just acknowledging @Sydney_Demon's comment that you can't simply look at ladder positions to determine the overall difficulty of a team's fixture.
-
Haha yes, I definitely don't want to tell the AFL! Their game in 2015 was cancelled after Phil Walsh's death: https://www.abc.net.au/news/2015-07-03/afl-phil-walsh-gillon-mclachlan-adelaide-crows-geelong/6593282
-
I completely agree with the points you made here and in your other responses. The AFL cannot base the fixture on expected performance.

In relation to the average ladder positions of opponents as a measure of fixture difficulty, I agree that it would be improved by comparing against the average a team would face if they played everyone. I also agree that playing 6 & 10 is not equivalent to playing 3 & 13 - I didn't mean to suggest that this was in any way an accurate measure of fixture difficulty. There are sophisticated "strength of schedule" models which take into account the actual quality of the opponent and, just as importantly, where the match is played.

Given the uneven fixture, ladder positions are definitely not an accurate measure of team strength. There also isn't a linear relationship between team quality and ladder position - this year, the difference between 16th and 17th might be equivalent to the difference between 10th and 16th.

In relation to Melbourne benefitting in the past, we have played fewer away state games in our opponent's home state than any other team since 2008.

Matches from 2008 to 2022 by location, excluding 2020

Away state matches in opponent's home state from 2008 to 2022 by state, excluding 2020
-
It potentially increases the chances of a team "tanking" in round 17: lose this game and play the last five games against the teams below you, or win it and play the last five against the best five teams. I'd almost want to see a conference system in place for the last five rounds if that approach were adopted - e.g. the last five rounds are used to rank teams within each group of six, but the top 6 are set after round 17 and 7-12 play off for the last two spots.

Alternatively, each team's last five games could be played against teams around them on the ladder, but without specifically grouping teams into blocks. E.g. 1st plays teams 2-6, 2nd plays five of teams 1-7, 3rd plays five of teams 1-8, etc. It could be random but within certain constraints, like teams 4-15 needing to play at least two teams above them and two below them.

Regardless, I agree that the fixture could definitely be improved, and Geelong should not be playing four games against the likely 17th & 18th!
-
I guess the issue with that approach (i.e. the top 6 playing each other, and likewise 7-12 and 13-18) is that 6th gets a really tough final five rounds - much tougher than the teams in the next group. If 6th is a game ahead of 9th after 17 rounds, they could easily miss finals due to the much tougher run home.
-
For what it's worth, here are the average ladder positions of each team's opponents this year, based on (a) last year's ladder positions and (b) this year's projected ladder positions from https://squiggle.com.au/ladder/ Of course, ladder position isn't an authoritative ranking of team strength - teams may finish higher because they have an easier fixture, or lower because of a harder one.
-
Here are the number of matches each team plays against each other team, based on last year's final ladder positions. Geelong, Adelaide and Gold Coast are the three teams to play both West Coast and North Melbourne twice this year, but I don't think the AFL can really base the fixture on how they think certain teams will perform.
-
Just an update on Richard Little's analysis. A lot of that analysis relied on access to the API behind the AFL app's "AR Tracker", which gave you the location of every possession and disposal, along with many other stats. It may be a coincidence (but probably not), but The West Australian ran an article last Friday morning with some of the data from the API, and by Friday night the API was returning a 403 Forbidden response! It may no longer be possible to undertake that level of analysis, which is such a shame.