

Recommended Posts

Posted (edited)
18 hours ago, jnrmac said:

2017 CD rankings FWIW and where they finished on the ladder at H&A

  1. GWS (7)
  2. Sydney (6)
  3. West Coast (2)
  4. Hawthorn (4)
  5. Western Bulldogs (13)
  6. Adelaide (12)
  7. Port Adelaide (10)
  8. Collingwood (3)
  9. Geelong (8)
  10. St Kilda (16)
  11. Melbourne (5)
  12. Richmond (1)
  13. North Melbourne (9)
  14. Fremantle (14)
  15. Essendon (11)
  16. Gold Coast (17)
  17. Carlton (18)
  18. Brisbane (15)

The interesting ones in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th, yet CD suggest they are the second best team in 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018, yet they finished 2018 on top of the ladder after the H&A rounds.

Regardless of what measurements CD uses, there are still a huge number of variables, including the draw, which I believe they don't use in this particular ranking.

Edited by jnrmac

Posted
2 hours ago, La Dee-vina Comedia said:

How unhygienic is it for Champion Data to be drinking our bathwater?

It seems to me that Champion Data's statements are based on raw stats processed by an algorithm. There must be assumptions incorporated into the algorithm and those assumptions are created by people. Hence, it is not entirely incorrect to refer to CD "analysing" the stats and providing its "opinion". Seems to me, though, that the moment organisations state that they've used an "algorithm" to do something, the belief given to whatever follows seems to increase by about 33%. (Note: that last figure is not created by an algorithm so please treat it with caution.)

CD just uses data and makes no assumptions.  They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network).  Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information.  They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result.  The limitation is that the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc.  They do measure a lot, though.  The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list.  As Prodee says, though, the premier can come out of any of the top 10 teams on that list.
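To make "train the network on the stats and results" concrete, here is a minimal sketch in Python (scikit-learn), with a small feed-forward network standing in for whatever architecture CD actually uses. The stat categories and data are invented for illustration; this is not CD's pipeline.

    # Minimal sketch only: invented stats and data, not Champion Data's model.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Invented training data: per-game stat differentials (team A minus team B),
    # e.g. kicks, handballs, marks, tackles, inside-50s, clearances.
    X = rng.normal(size=(2000, 6))
    # Invented target: the final margin, driven by a hidden weighting of the stats.
    y = X @ np.array([4.8, 1.8, 6.6, 3.0, 9.6, 5.4]) + rng.normal(scale=4.0, size=2000)

    # "Train the network on the data": the fit step is where the weighting of
    # each stat is learned from game results, rather than being set by a person.
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X, y)

    # Given a new game's stat differentials, predict the margin.
    print(model.predict(rng.normal(size=(1, 6))))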

 


Posted
1 hour ago, jnrmac said:

I am shocked to read this. According to 'Land we have the best defence over the last 2 years by a country mile.

Not sure who says we have had the best defence by a country mile. But I have definitely never agreed with your view that the problem has been that our key defenders are hopeless one-on-one defenders.

Posted
1 hour ago, jnrmac said:

The interesting ones in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th, yet CD suggest they are the second best team in 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018, yet they finished 2018 on top of the ladder after the H&A rounds.

Regardless of what measurements CD uses, there are still a huge number of variables, including the draw, which I believe they don't use in this particular ranking.

CD are not evaluating "teams"; they're ranking "lists" of players who have played at least 5 games over the past two years.

It doesn't take into account game-plans, etc.

A team is quite different to a list.

Posted (edited)
33 minutes ago, Watson11 said:

CD just uses data and makes no assumptions.  They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network).  Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information.  They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result.  The limitation is that the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc.  They do measure a lot, though.  The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list.  As Prodee says, though, the premier can come out of any of the top 10 teams on that list.

 

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it were based on pure data you might see something resembling the ladder. Hey, that's a good idea: rate teams based on results.

Edited by ManDee
last sentence

Posted
21 minutes ago, ManDee said:

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it were based on pure data you might see something resembling the ladder. Hey, that's a good idea: rate teams based on results.

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats, not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

Posted

I think that injuries dictate a lot of this. Richmond (ranked 12th in 2017) have had a very good run with injuries and this has been reflected on the ladder over the past two seasons. GWS (ranked first in 2017) have had an awful run with injuries and haven't advanced as far as they could have with a full list to pick from.

I think that if we have a good run with injuries, we do have very close to the best list in the competition. Things change drastically as soon as good players start to go down, though.

Posted
13 minutes ago, Watson11 said:

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats, not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place and how much to weight a handpass, kick or tackle? And who decides what qualifies an action to be a "1 percenter"? This is what I meant by referring to assumptions having to be made by people who build the model in the first place.


Posted
58 minutes ago, Watson11 said:

Not sure who says we have had the best defence by a country mile. But I have definitely never agreed with your view that the problem has been that our key defenders are hopeless one-on-one defenders.

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

Posted (edited)
27 minutes ago, Watson11 said:

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats, not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

So for each algorithm they (people) select the hyperparameter values with the best cross-validated score. And if that is not ideal, they (people) fine-tune for the next test. If data doesn't lie, then surely interpretation can.
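(For anyone unfamiliar with the jargon, the selection being described looks roughly like the sketch below. It is a generic scikit-learn grid search with an invented model, parameter grid and data, not CD's actual code.)

    # Generic sketch of cross-validated hyperparameter selection; the model,
    # grid and data are invented for illustration, not Champion Data's.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))               # invented per-game stats
    y = X @ np.ones(6) + rng.normal(size=200)   # invented results

    # People choose this grid, the number of folds and the scoring metric,
    # which is exactly the human judgement being pointed out above.
    param_grid = {
        "hidden_layer_sizes": [(16,), (32, 16), (64, 32)],
        "alpha": [1e-4, 1e-3, 1e-2],            # regularisation strength
    }
    search = GridSearchCV(MLPRegressor(max_iter=1000), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_)                  # best cross-validated combination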

 

Edit: Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not? Who decides?

Edited by ManDee

Posted
Just now, La Dee-vina Comedia said:

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place and how much to weight a handpass, kick or tackle? And who decides what qualifies an action to be a "1 percenter"? This is what I meant by referring to assumptions having to be made by people who build the model in the first place.

 

2 minutes ago, ManDee said:

So for each algorithm they (people) select the hyperparameter values with the best cross-validated score. And if that is not ideal, they (people) fine-tune for the next test. If data doesn't lie, then surely interpretation can.

 

Edit: Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not? Who decides?

You are correct in that humans, at the moment, record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.
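As a toy illustration of "repeatedly adjusts parameters (weights) until it gets the correct answer", here is a hand-rolled gradient-descent loop on a linear stand-in model. The stats, margins and learning rate are invented, and CD's real system would be far more elaborate.

    # Toy illustration of the weight-adjustment loop: plain gradient descent
    # on a linear stand-in model, not Champion Data's actual method.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 6))                     # invented stat differentials
    y = X @ np.array([4.8, 1.8, 6.6, 3.0, 9.6, 5.4])  # invented "known" margins

    w = np.zeros(6)                          # start with no opinion about any stat
    for _ in range(5000):                    # sweep the game database, repeatedly
        error = X @ w - y                    # how wrong are the current weights?
        w -= 0.01 * (X.T @ error) / len(y)   # nudge each weight to reduce the error
    print(w)                                 # the learned weighting of each stat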

Posted
Just now, Watson11 said:

 

You are correct in that humans, at the moment, record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

OK so why are they so bad at predictions?

Posted
17 minutes ago, Watson11 said:

 

You are correct in that humans, at the moment, record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

Thanks, that explanation is both interesting and helpful.

Posted
20 minutes ago, ManDee said:

OK so why are they so bad at predictions?

They can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence.  So no one knows for sure what will happen in round 1 next year, let alone the entire season.  Where they are good is after the fact: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy.  We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

Posted
43 minutes ago, jnrmac said:

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

One-on-one loss stats for 2018 are below, showing the top 20 ranked key position defenders (percentage of one-on-one contests lost).

Name               2018    Career
Will Schofield     15.1%   19.9%
Jake Lever         15.4%   30.6%
Harry Taylor       16.7%   15.4%
Lachie Henderson   16.7%   22.8%
Sam Frost          17.6%   26.7%
James Frawley      18.4%   25.0%
Alex Keath         18.5%   32.8%
Daniel Talia       18.9%   20.5%
Heath Grundy       20.3%   22.6%
Steven May         21.4%   23.2%
Robbie Tarrant     21.7%   27.8%
Scott Thompson     23.5%   26.7%
Tom Jonas          23.6%   24.3%
Alex Rance         25.0%   21.3%
Phil Davis         25.3%   31.1%
Lynden Dunn        26.1%   22.4%
Oscar McDonald     26.7%   25.1%
David Astbury      27.5%   24.7%
Jeremy McGovern    27.6%   19.4%
Jake Carlisle      27.6%   24.6%
Michael Hurley     27.6%   29.1%

Posted
4 hours ago, ProDee said:

That's a sub editor's headline.  It's a journo's interpretation.

It's not a comment from CD.

Yeah, I know. This thread is discussing the article posted in the OP.

Posted
14 minutes ago, DubDee said:

Yeah, I know. This thread is discussing the article posted in the OP.

Why did you bring up the wording "team to beat" then? What was your query?

Your inference was that it was CD's term. If you knew it was a journo's interpretation, I don't know why you'd make the post you did. It doesn't make sense.


Posted

To Watson 11.

Your description of the process, i.e. working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game: a post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting the outcomes of future games, or is that not the intention of the algorithm?

Posted

To Watson 11 re one on one stats.

Using Rance as a model for a highly rated defender, it seems that the lower the percentage the better over a career. But is Taylor better than Rance (questionable based on AA selection), or is it really only a measure of game plan and game style for each player in a team?

Is it better to never be outmarked or to prevail in ground contests?

It would be interesting to see Neville's stats as he is rarely beaten one on one, and my perception is that Lynden Dunn was also very solid one on one, but only at ground level.

Posted
3 minutes ago, tiers said:

To Watson 11.

Your description of the process, i.e. working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game: a post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting the outcomes of future games, or is that not the intention of the algorithm?

Yes, after every game the model is updated with those stats and the result.  Because the model is based on many years of data, each game only changes it a small amount.  The player ratings change far more after each game, as they are based on only two years of data and weighted to the most recent games.
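On "weighted to the most recent": a common way to implement that is an exponential decay on game age. The sketch below shows the general idea only; the half-life is invented, not CD's actual formula.

    # Sketch of a recency-weighted player rating. The half-life is invented;
    # this illustrates "weighted to the most recent", not Champion Data's formula.
    import numpy as np

    def recency_weighted_rating(game_ratings, half_life_games=20):
        """Average per-game ratings, halving a game's influence every
        half_life_games games into the past."""
        ratings = np.asarray(game_ratings, dtype=float)
        age = np.arange(len(ratings))[::-1]        # 0 = most recent game
        weights = 0.5 ** (age / half_life_games)
        return float(np.sum(weights * ratings) / np.sum(weights))

    # A player trending up across two seasons rates above their plain average:
    print(recency_weighted_rating([8, 9, 10, 12, 14, 15]))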

The intent of all of this data is simply to make CD lots of money, because the professional clubs pay big $ for it.  The professional clubs pay for it because it gives them unbiased insights into which stats really give teams an edge.  Watch Moneyball if you have not seen it; it's where all this data stuff really started.  Clarkson was the first in the AFL to use data and built his 4x premiership list using it.

Posted
3 hours ago, Watson11 said:

They can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence.  So no one knows for sure what will happen in round 1 next year, let alone the entire season.  Where they are good is after the fact: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy.  We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

I'm sorry Watson, but if you gave me two stats for any game I could tell you the result with 100% accuracy:

The score for each team.

And surely, if the stats are good, they should be able to predict injuries, modified game plans and improvements in players. Extrapolating from what you are saying, it is only a matter of enough data.

Lies, damn lies and statistics!

 

Posted
17 hours ago, ManDee said:

I'm sorry Watson, but if you gave me two stats for any game I could tell you the result with 100% accuracy:

The score for each team.

And surely, if the stats are good, they should be able to predict injuries, modified game plans and improvements in players. Extrapolating from what you are saying, it is only a matter of enough data.

Lies, damn lies and statistics!

 

Haha. Maybe you and other Luddites can package that up and sell it to the footy department. 

Who knows, maybe they are predicting improvements in players based on age and games played.  I wouldn't know.  Big data and machine learning are being applied everywhere, whether you think they work or not.  Champion Data can never predict injuries, but big European and US teams are measuring every training session and game and have been applying big data and machine learning to non-contact injury prevention for several years.  They don't publish much for obvious reasons, but FC Barcelona recently published 2014 data showing they could predict 60% of non-contact injuries and thus prevent them.  I'm sure that has improved in the last 4 years.  They have huge budgets and are way ahead of the AFL.  Maybe this is also happening in the AFL.

The point that started all of this is that, despite your opinion and comments on the CD list rating, there is no user bias in the analysis of the data at all.  It is just data and unbiased processing of it, with all of its limitations, i.e. garbage in, garbage out.  I personally think it is pretty good in, pretty good out.  It's not perfect.

Time to move on.

Posted

It would be an interesting exercise to go back and analyse North Melbourne's stats from the 1990s. I bet they would hardly have been in CD's top 4 for most of that decade, yet they were in the top 4 on the ladder for most of it. I think one of the most important stats is how many possessions per goal.
