
Recommended Posts

Posted (edited)
18 hours ago, jnrmac said:

2017 CD rankings FWIW and where they finished on the ladder at H&A

  1. GWS (7)
  2. Sydney (6)
  3. West Coast (2)
  4. Hawthorn (4)
  5. Western Bulldogs (13)
  6. Adelaide (12)
  7. Port Adelaide (10)
  8. Collingwood (3)
  9. Geelong (8)
  10. St Kilda (16)
  11. Melbourne (5)
  12. Richmond (1)
  13. North Melbourne (9)
  14. Fremantle (14)
  15. Essendon (11)
  16. Gold Coast (17)
  17. Carlton (18)
  18. Brisbane (15)

Interesting in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th. CD suggest they are the second-best team for 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018, yet they finished 2018 on top of the ladder after the H&A.

Regardless of what measurements CD uses, there are still a huge number of variables, including the draw, which I believe they don't use in this particular ranking.

Edited by jnrmac

Posted
2 hours ago, La Dee-vina Comedia said:

How unhygienic is it for Champion Data to be drinking our bathwater?

It seems to me that Champion Data's statements are based on raw stats processed by an algorithm. There must be assumptions incorporated into the algorithm, and those assumptions are created by people. Hence it is not entirely incorrect to refer to CD "analysing" the stats and providing its "opinion". It seems to me, though, that the moment an organisation states it has used an "algorithm" to do something, the credence given to whatever follows increases by about 33%. (Note: that last figure was not created by an algorithm, so please treat it with caution.)

CD just uses data and they make no assumptions.  They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network).  Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information.  They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result.  The limitation is the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc.  They do measure a lot, though.  The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list.  As ProDee says, though, the premier can come out of any of the top 10 teams on that list.
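Whatever the real architecture, the "put in the stats and the result, let training set the weights" idea can be sketched in a few lines. This is a toy illustration only: the stats, the data and the simple linear model are invented for the sketch, not CD's.

```python
import random

# Toy illustration: learn one weight per stat so that the weighted sum of a
# team's stat differentials predicts the final margin. No human picks the
# weights; the training loop adjusts them to fit past results.
random.seed(0)

# Hypothetical per-game stat differentials (team A minus team B) for three
# made-up stats, plus the margin each game actually produced.
true_w = [0.5, 1.0, 0.25]            # hidden from the learner
games = []
for _ in range(200):
    x = [random.uniform(-50, 50) for _ in range(3)]
    margin = sum(t * xi for t, xi in zip(true_w, x))
    games.append((x, margin))

w = [0.0, 0.0, 0.0]                  # start with no assumptions at all
lr = 1e-4
for _ in range(500):                 # repeatedly adjust weights to fit results
    for x, margin in games:
        err = sum(wi * xi for wi, xi in zip(w, x)) - margin
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

print([round(wi, 2) for wi in w])    # converges close to true_w
```

The learned weights come purely from the data; the human choices are which stats get recorded and how the model is set up, which is roughly where the two sides of this argument are talking past each other.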

 

  • Like 2
Posted
1 hour ago, jnrmac said:

I am shocked to read this. According to 'Land we have the best defence over the last 2 years by a country mile.

Not sure who says we have had the best defence by a country mile.  But I have definitely never agreed with your view that the problem has been that our key defenders are hopeless one-on-one defenders.

Posted
1 hour ago, jnrmac said:

Interesting in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th. CD suggest they are the second-best team for 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018, yet they finished 2018 on top of the ladder after the H&A.

Regardless of what measurements CD uses, there are still a huge number of variables, including the draw, which I believe they don't use in this particular ranking.

CD are not evaluating "teams"; they're ranking "lists" of players who have played at least 5 games over the past two years.

It doesn't take into account game-plans, etc.

A team is quite different to a list.

Posted (edited)
33 minutes ago, Watson11 said:

CD just uses data and they make no assumptions.  They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network).  Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information.  They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result.  The limitation is the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc.  They do measure a lot, though.  The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list.  As ProDee says, though, the premier can come out of any of the top 10 teams on that list.

 

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it was based on pure data you might see something resembling the ladder. Hey, that's a good idea: rate teams based on results.

Edited by ManDee
last sentence
  • Like 1
Posted
21 minutes ago, ManDee said:

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it was based on pure data you might see something resembling the ladder. Hey, that's a good idea: rate teams based on results.

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats, not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

Posted

I think that injuries dictate a lot of this. Richmond (ranked 12th in 2017) has had a very good run with injuries and this has been reflected on the ladder over the past two seasons. GWS (ranked first in 2017) have had an awful run with injuries and haven't advanced as far as they would be capable of had they had a full list to pick from.

I think that if we have a good run with injuries, we do have very close to the best list in the competition. Things change drastically as soon as good players start to go down, though.

Posted
13 minutes ago, Watson11 said:

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats, not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place and how much to weight a handpass, kick or tackle? And who decides what qualifies an action to be a "1 percenter"? This is what I meant by referring to assumptions having to be made by people who build the model in the first place.

Posted
58 minutes ago, Watson11 said:

Not sure who says we have had the best defence by a country mile.  But I have definitely never agreed with your view that the problem has been that our key defenders are hopeless one-on-one defenders.

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

  • Like 1
Posted (edited)
27 minutes ago, Watson11 said:

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats, not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

So for each algorithm they (people) select the hyperparameter values with the best cross-validated score. And if that is not ideal, they (people) fine-tune for the next test. If data doesn't lie, then surely interpretation can.

 

Edit:- Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not, who decides?
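For anyone curious what "best cross-validated score" means in practice, here is a hand-rolled version of that selection loop. The data, the one-parameter ridge model and the grid of candidate values are all invented for the sketch; the point is that a person chose the grid and the procedure.

```python
import random

random.seed(1)
# Synthetic data: y = 2x plus noise.
data = [(x, 2 * x + random.gauss(0, 1)) for x in [i / 10 for i in range(100)]]

def fit_ridge(train, lam):
    """Closed-form 1-D ridge slope: minimises sum (y - a*x)^2 + lam*a^2."""
    sxy = sum(x * y for x, y in train)
    sxx = sum(x * x for x, _ in train)
    return sxy / (sxx + lam)

def cv_error(lam, k=5):
    """Average held-out mean squared error over k folds."""
    folds = [data[i::k] for i in range(k)]
    err = 0.0
    for i in range(k):
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        a = fit_ridge(train, lam)
        err += sum((y - a * x) ** 2 for x, y in folds[i]) / len(folds[i])
    return err / k

grid = [0.0, 0.1, 1.0, 10.0, 100.0]   # a human picked this grid
best_lam = min(grid, key=cv_error)    # "best cross-validated score"
print(best_lam)
```

So the weights may come from the data, but the search space the model lives in is still a human decision, which is the point being made above.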

Edited by ManDee
Posted
Just now, La Dee-vina Comedia said:

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place and how much to weight a handpass, kick or tackle? And who decides what qualifies an action to be a "1 percenter"? This is what I meant by referring to assumptions having to be made by people who build the model in the first place.

 

2 minutes ago, ManDee said:

So for each algorithm they (people) select the hyperparameter values with the best cross-validated score. And if that is not ideal, they (people) fine-tune for the next test. If data doesn't lie, then surely interpretation can.

 

Edit:- Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not, who decides?

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database.  What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.
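Once such a model exists, ranking "lists" is mechanical: score each club's stat profile with the learned weights and sort. A toy sketch, with invented weights and numbers (not CD's):

```python
# Weights as a trained model might produce them: learned, not hand-picked.
w = {"kicks": 0.5, "marks": 1.0, "tackles": 0.25}

# Made-up per-club stat profiles, purely for illustration.
club_stats = {
    "Melbourne": {"kicks": 12.0, "marks": 9.0, "tackles": 6.0},
    "Richmond":  {"kicks": 10.0, "marks": 11.0, "tackles": 4.0},
    "Carlton":   {"kicks": 7.0,  "marks": 6.0,  "tackles": 5.0},
}

def score(stats):
    """Weighted sum of a club's stats: the model's predicted strength."""
    return sum(w[k] * v for k, v in stats.items())

ranking = sorted(club_stats, key=lambda c: score(club_stats[c]), reverse=True)
print(ranking)  # ['Richmond', 'Melbourne', 'Carlton']
```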

  • Like 2
  • Thanks 3
Posted
Just now, Watson11 said:

 

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database.  What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

OK so why are they so bad at predictions?

Posted
17 minutes ago, Watson11 said:

 

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database.  What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

Thanks, that explanation is both interesting and helpful.

Posted
20 minutes ago, ManDee said:

OK so why are they so bad at predictions?

Can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence.  So no one knows for sure what will happen in round 1 next year, let alone the entire season.  Where they are good: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy.  We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

  • Like 1
Posted
43 minutes ago, jnrmac said:

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

One on one loss stats for 2018 are below, showing the top-ranked key position defenders (2018 loss rate, then career loss rate):

Name               2018    Career
Will Schofield     15.1%   19.9%
Jake Lever         15.4%   30.6%
Harry Taylor       16.7%   15.4%
Lachie Henderson   16.7%   22.8%
Sam Frost          17.6%   26.7%
James Frawley      18.4%   25.0%
Alex Keath         18.5%   32.8%
Daniel Talia       18.9%   20.5%
Heath Grundy       20.3%   22.6%
Steven May         21.4%   23.2%
Robbie Tarrant     21.7%   27.8%
Scott Thompson     23.5%   26.7%
Tom Jonas          23.6%   24.3%
Alex Rance         25.0%   21.3%
Phil Davis         25.3%   31.1%
Lynden Dunn        26.1%   22.4%
Oscar McDonald     26.7%   25.1%
David Astbury      27.5%   24.7%
Jeremy McGovern    27.6%   19.4%
Jake Carlisle      27.6%   24.6%
Michael Hurley     27.6%   29.1%

Posted
4 hours ago, ProDee said:

That's a sub editor's headline.  It's a journo's interpretation.

It's not a comment from CD.

Yeah, I know. This thread is discussing the article posted in the OP.

Posted
14 minutes ago, DubDee said:

Yeah, I know. This thread is discussing the article posted in the OP.

Why did you bring up the wording "team to beat" then?  What was your query?

Your inference was that it was CD's term.  If you knew it was a journo's interpretation, I don't know why you'd make the post you did.  It doesn't make sense.

Posted

To Watson 11.

Your description of the process, ie working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game: a post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting outcomes of future games or is that not the intention of the algorithm?

Posted

To Watson 11 re one on one stats.

Using Rance as a model for a highly rated defender, it seems that the lower the %, the better, over a career. But is Taylor better than Rance (questionable based on AA selection), or is it really only a measure of game plan and game style for each player in a team?

Is it better to never be outmarked or to prevail in ground contests?

It would be interesting to see Neville's stats, as he is rarely beaten one on one. My perception is that Lynden Dunn was also very solid one on one, but only at ground level.

Posted
3 minutes ago, tiers said:

To Watson 11.

Your description of the process, ie working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game: a post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting outcomes of future games or is that not the intention of the algorithm?

Yes after every game the model is updated with those stats and the result.  Because the model is based on many years of data, each game only changes it a small amount.  The player ratings change far more after each game, as they are based on only 2 years and weighted to the most recent. 

The intent of all of this data is simply to make CD lots of money, because the professional clubs pay big $ for it.  The professional clubs pay for it because it gives them unbiased insights into what the really important stats are that give teams an edge.  Watch Moneyball if you have not seen it.  It's where all this data stuff really started.  Clarkson was the first in the AFL to use data and built his 4x premiership list using it.
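A minimal sketch of the "weighted to the most recent" idea mentioned above: an exponentially weighted average of a player's per-game ratings, so recent games count more. The decay value and the numbers are invented, not CD's.

```python
def recency_weighted(ratings, decay=0.9):
    """Exponentially weighted average; `ratings` is oldest-game first.

    The most recent game gets weight 1, the one before it `decay`,
    the one before that `decay**2`, and so on.
    """
    n = len(ratings)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

form = [60, 70, 80, 90, 100]   # an improving player, oldest game first
print(round(recency_weighted(form), 1))  # ≈ 82.1, above the plain mean of 80
```

This is why an improving player's rating moves faster than the overall model: the rating leans on the latest games, while the model is averaged over many seasons.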

  • Like 2
Posted
3 hours ago, Watson11 said:

Can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence.  So no one knows for sure what will happen in round 1 next year, let alone the entire season.  Where they are good: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy.  We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

I'm sorry Watson but if you gave me 2 stats for any game I could tell you the result with 100% accuracy.

The score for each team.

And surely if the stats are good they should be able to predict injuries, modified game plans and improvements in players. Extrapolating from what you are saying, it is only a matter of enough data.

Lies, damn lies and statistics!

 

Posted
17 hours ago, ManDee said:

I'm sorry Watson but if you gave me 2 stats for any game I could tell you the result with 100% accuracy.

The score for each team.

And surely if the stats are good they should be able to predict injuries, modified game plans and improvements in players. Extrapolating from what you are saying, it is only a matter of enough data.

Lies, damn lies and statistics!

 

Haha. Maybe you and other Luddites can package that up and sell it to the footy department. 

Who knows, maybe they are predicting improvements in players based on age and games played.  I wouldn't know.  Big data and machine learning are being applied everywhere, whether you think they work or not.  Champion Data can never predict injuries, but big European and US teams are measuring every training session and game and have been applying big data and machine learning to non-contact injury prevention for several years.  They don't publish much for obvious reasons, but FC Barcelona recently published 2014 data showing they could predict 60% of non-contact injuries and thus prevent them.  I'm sure that has improved in the last 4 years.  They have huge budgets and are way ahead of the AFL.  Maybe this is also happening in the AFL.

The point that started all of this is that, despite your opinion and comments on the CD list rating, they have no user bias in the analysis of the data at all.  It is just data and unbiased processing of it, with all of its limitations, ie garbage in, garbage out.  I personally think it is pretty good in, pretty good out.  It's not perfect.

Time to move on.

  • Like 4
Posted

It would be an interesting exercise to go back and analyse North Melbourne's stats from the 1990s. I bet they would hardly have been in CD's top 4 for most of that decade, yet they were in the ladder's top 4 for most of it. I think one of the most important stats is how many possessions per goal.
