

Recommended Posts

Posted (edited)
18 hours ago, jnrmac said:

2017 CD rankings FWIW and where they finished on the ladder at H&A

  1. GWS (7)
  2. Sydney (6)
  3. West Coast (2)
  4. Hawthorn (4)
  5. Western Bulldogs (13)
  6. Adelaide (12)
  7. Port Adelaide (10)
  8. Collingwood (3)
  9. Geelong (8)
  10. St Kilda (16)
  11. Melbourne (5)
  12. Richmond (1)
  13. North Melbourne (9)
  14. Fremantle (14)
  15. Essendon (11)
  16. Gold Coast (17)
  17. Carlton (18)
  18. Brisbane (15)

Interesting in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th. CD suggest they are the second-best team in 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018, yet they finished 2018 on top of the ladder after the H&A.

Regardless of what measurements CD uses, there are still a huge number of variables, including the draw - which I believe they don't use in this particular ranking.

Edited by jnrmac

Posted
2 hours ago, La Dee-vina Comedia said:

How unhygienic is it for Champion Data to be drinking our bathwater?

It seems to me that Champion Data's statements are based on raw stats processed by an algorithm. There must be assumptions incorporated into the algorithm and those assumptions are created by people. Hence, it is not entirely incorrect to refer to CD "analysing" the stats and providing its "opinion". Seems to me, though, that the moment organisations state that they've used an "algorithm" to do something, the belief given to whatever follows seems to increase by about 33%. (Note: that last figure is not created by an algorithm so please treat it with caution.)

CD just uses data and they make no assumptions.  They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network).  Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information.  They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result.  The limitation is that the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc.  They do measure a lot though.  The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list.  As ProDee says though, the premier can come out of any of the top 10 teams on that list.
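The training idea being described - stats in, result out, weights learned from the data rather than set by a person - can be sketched roughly like this. Everything here is invented for illustration (the stat count, the "true" weights, the learning rate); CD's actual model and stat set are not public.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented illustration: 3 per-game stats (say kicks, tackles, inside-50s)
# and a final margin that secretly depends on them.
n_games = 400
X = rng.normal(size=(n_games, 3))
true_weights = np.array([4.0, 1.0, 2.5])          # unknown to the trainer
margin = X @ true_weights + rng.normal(scale=0.3, size=n_games)

# Start from arbitrary weights and let the data adjust them.
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    pred = X @ w
    grad = X.T @ (pred - margin) / n_games        # squared-error gradient
    w -= lr * grad                                # training weights the stats

print(np.round(w, 1))   # roughly [4.0, 1.0, 2.5] -- no human chose them
```

A real system would use far more stats and a non-linear model, but the principle is the same: the relative importance of each stat falls out of the fit to game results.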

 

  • Like 2

Posted
1 hour ago, jnrmac said:

I am shocked to read this. According to 'Land we have the best defence over the last 2 years by a country mile.

Not sure who says we have had the best defence by a country mile.  But I have definitely never agreed with your view that the problem has been that our key defenders are hopeless one-on-one defenders.

Posted
1 hour ago, jnrmac said:

Interesting in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th. CD suggest they are the second-best team in 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018, yet they finished 2018 on top of the ladder after the H&A.

Regardless of what measurements CD uses, there are still a huge number of variables, including the draw - which I believe they don't use in this particular ranking.

CD are not evaluating "teams"; they're ranking "lists" of players who have played at least 5 games over the past two years.

It doesn't take into account game-plans, etc.

A team is quite different to a list.

Posted (edited)
33 minutes ago, Watson11 said:

CD just uses data and they make no assumptions.  They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network).  Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information.  They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result.  The limitation is that the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc.  They do measure a lot though.  The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list.  As ProDee says though, the premier can come out of any of the top 10 teams on that list.

 

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it was based on pure data you might see something resembling the ladder - hey, that's a good idea. Rate teams based on results.

Edited by ManDee
last sentence
  • Like 1

Posted
21 minutes ago, ManDee said:

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it was based on pure data you might see something resembling the ladder - hey, that's a good idea. Rate teams based on results.

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats.  Not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear it can't account for everything and is imperfect, but there are no assumptions put in.

Posted

I think that injuries dictate a lot of this. Richmond (ranked 12th in 2017) has had a very good run with injuries and this has been reflected on the ladder over the past two seasons. GWS (ranked first in 2017) have had an awful run with injuries and haven't advanced as far as they would be capable of had they had a full list to pick from.

I think that if we have a good run with injuries, we do have very close to the best list in the competition. Things change drastically as soon as good players start to go down, though.

Posted
13 minutes ago, Watson11 said:

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats.  Not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear it can't account for everything and is imperfect, but there are no assumptions put in.

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place and how much to weight a handpass, kick or tackle? And who decides what qualifies an action to be a "1 percenter"? This is what I meant by referring to assumptions having to be made by people who build the model in the first place.


Posted
58 minutes ago, Watson11 said:

Not sure who says we have had the best defence by a country mile.  But I have definitely never agreed with your view that the problem has been that our key defenders are hopeless one-on-one defenders.

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

  • Like 1
Posted (edited)
27 minutes ago, Watson11 said:

Nope, that's wrong.  They just put all the data in and train the model.  The training of the model weights the stats.  Not a human.  Then they put the stats of each list into the model to weight the list.  It's just data.  As was made clear it can't account for everything and is imperfect, but there are no assumptions put in.

So for each algorithm they (people) select the hyperparameter values with the best cross-validated score. And if that is not ideal, they (people) fine-tune for the next test. If data doesn't lie, then surely interpretation can.
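That human-in-the-loop step - trying hyperparameter values and keeping the one with the best cross-validated score - looks something like the sketch below. The ridge penalty, candidate grid, and fold count are all arbitrary choices made for illustration, which is exactly the point: a person picks them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 6 stats across 120 games, with noisy margins.
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=1.0, size=120)

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: a human chose `lam`, the data chose w.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_score(lam, k=5):
    # k-fold cross-validation: average squared error on held-out games.
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for f in folds:
        train = np.setdiff1d(np.arange(len(y)), f)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[f] @ w - y[f]) ** 2))
    return float(np.mean(errs))

# The part people do: pick the candidate grid, keep the best scorer.
candidates = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(candidates, key=cv_score)
```

So "no assumptions" is only true of the weights themselves; the grid, the model family, and the scoring rule are all human decisions.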

 

Edit:- Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not, who decides?

Edited by ManDee

Posted
Just now, La Dee-vina Comedia said:

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place and how much to weight a handpass, kick or tackle? And who decides what qualifies an action to be a "1 percenter"? This is what I meant by referring to assumptions having to be made by people who build the model in the first place.

 

2 minutes ago, ManDee said:

So for each algorithm they (people) select the hyperparameter values with the best cross-validated score. And if that is not ideal, they (people) fine-tune for the next test. If data doesn't lie, then surely interpretation can.

 

Edit:- Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not, who decides?

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database.  What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.
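The last step - feeding each list's stats into the trained model and sorting by the predicted result - reduces to something like this. The weights and team stat profiles below are entirely made up; only the mechanism is the point.

```python
import numpy as np

# Pretend these weights came out of a training run; no person chose them.
learned_w = np.array([4.0, 1.0, 2.5])

# Invented two-year average stat profiles per list.
profiles = {
    "Team A": np.array([1.2, 0.8, 1.1]),
    "Team B": np.array([0.9, 1.4, 0.7]),
    "Team C": np.array([1.5, 1.0, 1.3]),
}

# Score each list with the model and sort: that is the whole "ranking".
scores = {team: float(stats @ learned_w) for team, stats in profiles.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)   # ['Team C', 'Team A', 'Team B'] on these made-up numbers
```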

  • Like 2
  • Thanks 3
Posted
Just now, Watson11 said:

 

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database.  What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

OK so why are they so bad at predictions?

Posted
17 minutes ago, Watson11 said:

 

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc).  But no human decides how to weight the various stats.  When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points).  The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database.  What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

Thanks, that explanation is both interesting and helpful.

Posted
20 minutes ago, ManDee said:

OK so why are they so bad at predictions?

Can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence.  So no one knows for sure what will happen in round 1 next year, let alone the entire season.  Where they are good is after the fact: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy.  We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

  • Like 1
Posted
43 minutes ago, jnrmac said:

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

One-on-one loss stats for 2018 are below, showing the top-ranked key position defenders.

Name | 2018 | Career
Will Schofield | 15.1% | 19.9%
Jake Lever | 15.4% | 30.6%
Harry Taylor | 16.7% | 15.4%
Lachie Henderson | 16.7% | 22.8%
Sam Frost | 17.6% | 26.7%
James Frawley | 18.4% | 25.0%
Alex Keath | 18.5% | 32.8%
Daniel Talia | 18.9% | 20.5%
Heath Grundy | 20.3% | 22.6%
Steven May | 21.4% | 23.2%
Robbie Tarrant | 21.7% | 27.8%
Scott Thompson | 23.5% | 26.7%
Tom Jonas | 23.6% | 24.3%
Alex Rance | 25.0% | 21.3%
Phil Davis | 25.3% | 31.1%
Lynden Dunn | 26.1% | 22.4%
Oscar McDonald | 26.7% | 25.1%
David Astbury | 27.5% | 24.7%
Jeremy McGovern | 27.6% | 19.4%
Jake Carlisle | 27.6% | 24.6%
Michael Hurley | 27.6% | 29.1%

Posted
4 hours ago, ProDee said:

That's a sub editor's headline.  It's a journo's interpretation.

It's not a comment from CD.

Yeah, I know. This thread is discussing the article posted in the OP.

Posted
14 minutes ago, DubDee said:

Yeah, I know. This thread is discussing the article posted in the OP.

Why did you bring up the wording "team to beat" then?  What was your query?

Your implication was that it was CD's term.  If you knew it was a journo's interpretation, I don't know why you'd make the post you did.  It doesn't make sense.


Posted

To Watson 11.

Your description of the process, i.e. working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game - a post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting outcomes of future games, or is that not the intention of the algorithm?

Posted

To Watson 11 re one on one stats.

Using Rance as a model for a highly rated defender, it seems that the lower the % the better over a career. But is Taylor better than Rance (questionable based on AA selection), or is it really only a measure of game plan and game style for each player in a team?

Is it better to never be outmarked or to prevail in ground contests?

It would be interesting to see Neville's stats as he is rarely beaten one on one, and my perception is that Lynden Dunn was also very solid one on one, but only at ground level.

Posted
3 minutes ago, tiers said:

To Watson 11.

Your description of the process, i.e. working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game - a post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting outcomes of future games, or is that not the intention of the algorithm?

Yes after every game the model is updated with those stats and the result.  Because the model is based on many years of data, each game only changes it a small amount.  The player ratings change far more after each game, as they are based on only 2 years and weighted to the most recent. 
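One simple way the "weighted to the most recent" part could work (an assumption - CD's actual scheme isn't published) is an exponential decay over the two-year window, so each older match counts a little less than the one after it:

```python
import numpy as np

# Invented per-match ratings for one player, oldest first, over two seasons.
match_ratings = np.array([70.0, 75.0, 72.0, 80.0, 85.0, 90.0])

# Exponential decay: each older match carries `decay` times the weight
# of the match after it, so the newest game counts the most.
decay = 0.9
weights = decay ** np.arange(len(match_ratings) - 1, -1, -1)
player_rating = float(np.average(match_ratings, weights=weights))

# Recent form (85, 90) pulls the rating above the raw mean of ~78.7,
# which is why player ratings move much faster than the overall model.
```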

The intent of all of this data is simply to make CD lots of money, because the professional clubs pay big $ for it.  The professional clubs pay for it because it gives them unbiased insights into what the really important stats are that give teams an edge.  Watch Moneyball if you have not seen it.  It's where all this data stuff really started.  Clarkson was the first in the AFL to use data and built his 4x premiership list using it.

  • Like 2
Posted
3 hours ago, Watson11 said:

Can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence.  So no one knows for sure what will happen in round 1 next year, let alone the entire season.  Where they are good is after the fact: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy.  We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

I'm sorry Watson but if you gave me 2 stats for any game I could tell you the result with 100% accuracy.

The score for each team.

And surely if the stats are good they should be able to predict injuries, modified game plans and improvements in players. Extrapolating what you are saying it is only a matter of enough data.

Lies, damn lies and statistics!

 

Posted
17 hours ago, ManDee said:

I'm sorry Watson but if you gave me 2 stats for any game I could tell you the result with 100% accuracy.

The score for each team.

And surely if the stats are good they should be able to predict injuries, modified game plans and improvements in players. Extrapolating what you are saying it is only a matter of enough data.

Lies, damn lies and statistics!

 

Haha. Maybe you and other Luddites can package that up and sell it to the footy department. 

Who knows, maybe they are predicting improvements in players based on age and games played.  I wouldn't know.  Big data and machine learning are being applied everywhere whether you think they work or not.  Champion Data can never predict injuries, but big European and US teams are measuring every training session and game and have been applying big data and machine learning to non-contact injury prevention for several years.  They don't publish much for obvious reasons, but FC Barcelona recently published 2014 data showing they can predict 60% of non-contact injuries and thus prevent them.  I'm sure that has improved in the last 4 years.  They have huge budgets and are way ahead of the AFL.  Maybe this is also happening in the AFL.

The point that started all of this is that, despite your opinion and comments on the CD list rating, there is no user bias in the analysis of the data at all.  It is just data and unbiased processing of it, with all of its limitations, i.e. garbage in, garbage out.  I personally think it is pretty good in, pretty good out.  It's not perfect.

Time to move on.

  • Like 4
Posted

It would be an interesting exercise to go back and analyse North Melbourne's stats from the 1990s. I bet they would hardly ever be in the top 4 of the CD rankings for that decade, yet they were in the top 4 on the ladder for most of it. I think one of the most important stats is possessions per goal.
