
Demons are the team to beat in 2019: Champion Data


Demonland

Recommended Posts

18 hours ago, jnrmac said:

2017 CD rankings FWIW, and where each team finished on the ladder after the 2018 H&A rounds:

  1. GWS (7)
  2. Sydney (6)
  3. West Coast (2)
  4. Hawthorn (4)
  5. Western Bulldogs (13)
  6. Adelaide (12)
  7. Port Adelaide (10)
  8. Collingwood (3)
  9. Geelong (8)
  10. St Kilda (16)
  11. Melbourne (5)
  12. Richmond (1)
  13. North Melbourne (9)
  14. Fremantle (14)
  15. Essendon (11)
  16. Gold Coast (17)
  17. Carlton (18)
  18. Brisbane (15)

Interesting in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th. CD suggest they are the second best team in 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018 yet they finished 2018 on top of the ladder after the H&A.

Regardless of what measurements CD uses, there is still a huge number of variables, including the draw, which I believe they don't use in this particular ranking.

Edited by jnrmac
Link to comment
Share on other sites

2 hours ago, La Dee-vina Comedia said:

How unhygienic is it for Champion Data to be drinking our bathwater?

It seems to me that Champion Data's statements are based on raw stats processed by an algorithm. There must be assumptions incorporated into the algorithm and those assumptions are created by people. Hence, it is not entirely incorrect to refer to CD "analysing" the stats and providing its "opinion". Seems to me, though, that the moment organisations state that they've used an "algorithm" to do something, the belief given to whatever follows seems to increase by about 33%. (Note: that last figure is not created by an algorithm so please treat it with caution.)

CD just uses data and they make no assumptions. They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network). Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information. They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result. The limitation is that the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc. They do measure a lot though. The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list. As Prodee says though, the premier can come out of any of the top 10 teams on that list.
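For anyone wondering what "putting in the stats for each game and the result" looks like in practice, here is a minimal sketch in Python. The stat names, the synthetic data and the small network are all assumptions for illustration only; Champion Data's actual inputs and model are not public.

```python
# Minimal sketch of training a model on (game stats, result) pairs.
# ASSUMPTIONS: stat names, synthetic data and network size are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-game stat differentials (team A minus team B).
features = ["kicks", "handballs", "marks", "tackles", "clearances",
            "inside_50s", "hitouts", "one_percenters"]
X = rng.normal(size=(2000, len(features)))                    # stand-in for years of games
hidden_truth = rng.normal(size=len(features))                 # unknown "real" value of each stat
y = X @ hidden_truth * 10 + rng.normal(scale=6, size=2000)    # final margins in points

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The network works out how to weight and combine the stats during training.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out games:", round(model.score(X_test, y_test), 3))
```

Ranking a list would then, on this picture, just mean feeding each club's aggregated player stats through the fitted model and sorting the outputs.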

 

  • Like 2
Link to comment
Share on other sites

1 hour ago, jnrmac said:

I am shocked to read this. According to 'Land we have the best defence over the last 2 years by a country mile.

Not sure who says we have had the best defence by a country mile.  But definitely have never agreed with your view that the problem has been that our key defenders are hopeless one on one defenders.

Link to comment
Share on other sites

1 hour ago, jnrmac said:

Interesting in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th. CD suggest they are the second best team in 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018 yet they finished 2018 on top of the ladder after the H&A.

Regardless of what measurements CD uses, there is still a huge number of variables, including the draw, which I believe they don't use in this particular ranking.

CD are not evaluating "teams"; they're ranking "lists" of players who have played at least 5 games over the past two years.

It doesn't take into account game-plans, etc.

A team is quite different to a list.

Link to comment
Share on other sites

33 minutes ago, Watson11 said:

CD just uses data and they make no assumptions. They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network). Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information. They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result. The limitation is that the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc. They do measure a lot though. The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list. As Prodee says though, the premier can come out of any of the top 10 teams on that list.

 

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it were based on pure data you might see something resembling the ladder. Hey, that's a good idea: rate teams based on results.

Edited by ManDee
last sentence
  • Like 1
Link to comment
Share on other sites

21 minutes ago, ManDee said:

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it were based on pure data you might see something resembling the ladder. Hey, that's a good idea: rate teams based on results.

Nope, that's wrong. They just put all the data in and train the model. The training of the model weights the stats, not a human. Then they put the stats of each list into the model to weight the list. It's just data. As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.
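A toy illustration of "the training of the model weights the stats, not a human": fit a model to game data and read the weights off the fit afterwards. The stat names, the fake margins and the linear model are assumptions for illustration, not Champion Data's actual setup.

```python
# Fit a model to (stats, margin) pairs, then inspect the weights it learned.
# ASSUMPTIONS: stat names and data are invented; a linear model stands in for
# whatever Champion Data actually uses.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
features = ["contested_possessions", "inside_50s", "tackles", "clangers"]
X = rng.normal(size=(500, len(features)))                                # stand-in game stats
y = X @ np.array([8.0, 5.0, 2.0, -6.0]) + rng.normal(scale=4, size=500)  # stand-in margins

model = LinearRegression().fit(X, y)
for name, weight in zip(features, model.coef_):
    print(f"{name:22s} learned weight = {weight:+.2f}")   # no one typed these numbers in
```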

Link to comment
Share on other sites

I think that injuries dictate a lot of this. Richmond (ranked 12th in 2017) have had a very good run with injuries and this has been reflected on the ladder over the past two seasons. GWS (ranked first in 2017) have had an awful run with injuries and haven't gone as far as they would have been capable of with a full list to pick from.

I think that if we have a good run with injuries, we do have very close to the best list in the competition. Things change drastically as soon as good players start to go down, though.

Link to comment
Share on other sites

13 minutes ago, Watson11 said:

Nope, that's wrong. They just put all the data in and train the model. The training of the model weights the stats, not a human. Then they put the stats of each list into the model to weight the list. It's just data. As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place and how much to weight a handpass, kick or tackle? And who decides what qualifies an action to be a "1 percenter"? This is what I meant by referring to assumptions having to be made by people who build the model in the first place.

Link to comment
Share on other sites


58 minutes ago, Watson11 said:

Not sure who says we have had the best defence by a country mile.  But definitely have never agreed with your view that the problem has been that our key defenders are hopeless one on one defenders.

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

  • Like 1
Link to comment
Share on other sites

27 minutes ago, Watson11 said:

Nope, that's wrong. They just put all the data in and train the model. The training of the model weights the stats, not a human. Then they put the stats of each list into the model to weight the list. It's just data. As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

So for each algorithm they (people) select the hyperparameter values with the best  cross validated score. And if that is not ideal they (people) fine tune for the next test. If data doesn't lie then surely interpretation can. 

 

Edit:- Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not, who decides?
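On the hyperparameter point above: people do choose candidate settings such as network size and regularisation, but typically cross-validation picks among them rather than anyone weighting stats by hand. A minimal sketch, with every value invented for illustration:

```python
# Hyperparameter selection by cross-validation: humans pick the candidates,
# the data picks the winner. ASSUMPTIONS: all data and settings are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 8))                                    # stand-in per-game stats
y = X @ rng.normal(size=8) * 10 + rng.normal(scale=5, size=400)  # stand-in margins

candidates = {"hidden_layer_sizes": [(16,), (32, 16)], "alpha": [1e-4, 1e-2]}
search = GridSearchCV(MLPRegressor(max_iter=3000, random_state=0), candidates, cv=5)
search.fit(X, y)
print("Settings chosen by cross-validation:", search.best_params_)
```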

Edited by ManDee
Link to comment
Share on other sites

Just now, La Dee-vina Comedia said:

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place and how much to weight a handpass, kick or tackle? And who decides what qualifies an action to be a "1 percenter"? This is what I meant by referring to assumptions having to be made by people who build the model in the first place.

 

2 minutes ago, ManDee said:

So for each algorithm they (people) select the hyperparameter values with the best  cross validated score. And if that is not ideal they (people) fine tune for the next test. If data doesn't lie then surely interpretation can. 

 

Edit:- Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not, who decides?

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.). But no human decides how to weight the various stats. When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points). The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.
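A bare-bones picture of "repeatedly adjusts parameters until it gets the correct answer", using plain gradient descent on a linear model of the margin. The data and the number of stats are made up; only the adjustment loop is the point.

```python
# Gradient descent: start with zero weights and repeatedly nudge them so the
# predicted margins get closer to the actual margins.
# ASSUMPTIONS: the games, stats and margins below are synthetic.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 6))                             # stats for 1000 fake games
y = X @ rng.normal(size=6) * 10 + rng.normal(size=1000)    # fake final margins

weights = np.zeros(6)             # start out knowing nothing about any stat
learning_rate = 0.01
for step in range(500):
    predicted = X @ weights                        # margins predicted with current weights
    gradient = 2 * X.T @ (predicted - y) / len(y)  # size and direction of the error, per weight
    weights -= learning_rate * gradient            # nudge every weight to reduce the error

print("Weights the loop settled on:", np.round(weights, 2))
```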

  • Like 2
  • Thanks 3
Link to comment
Share on other sites

Just now, Watson11 said:

 

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.). But no human decides how to weight the various stats. When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points). The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

OK so why are they so bad at predictions?

Link to comment
Share on other sites

17 minutes ago, Watson11 said:

 

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.). But no human decides how to weight the various stats. When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points). The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

Thanks, that explanation is both interesting and helpful.

Link to comment
Share on other sites

20 minutes ago, ManDee said:

OK so why are they so bad at predictions?

Can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence. So no one knows for sure what will happen in round 1 next year, let alone the entire season. Where they are good is after the fact: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy. We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

  • Like 1
Link to comment
Share on other sites

43 minutes ago, jnrmac said:

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

One on one loss stats for 2018 are below showing the top 20 ranked key position defenders.

Name                 2018 loss %   Career loss %
Will Schofield          15.1%          19.9%
Jake Lever              15.4%          30.6%
Harry Taylor            16.7%          15.4%
Lachie Henderson        16.7%          22.8%
Sam Frost               17.6%          26.7%
James Frawley           18.4%          25.0%
Alex Keath              18.5%          32.8%
Daniel Talia            18.9%          20.5%
Heath Grundy            20.3%          22.6%
Steven May              21.4%          23.2%
Robbie Tarrant          21.7%          27.8%
Scott Thompson          23.5%          26.7%
Tom Jonas               23.6%          24.3%
Alex Rance              25.0%          21.3%
Phil Davis              25.3%          31.1%
Lynden Dunn             26.1%          22.4%
Oscar McDonald          26.7%          25.1%
David Astbury           27.5%          24.7%
Jeremy McGovern         27.6%          19.4%
Jake Carlisle           27.6%          24.6%
Michael Hurley          27.6%          29.1%

Link to comment
Share on other sites

14 minutes ago, DubDee said:

yeah i know. this thread is discussing that article posted in the OP

Why did you bring up the wording "team to beat" then? What was your query?

You implied that it was CD's term. If you knew it was a journo's interpretation, I don't know why you'd make the post you did. It doesn't make sense.

Link to comment
Share on other sites


To Watson 11.

Your description of the process, i.e. working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game. A post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting the outcomes of future games, or is that not the intention of the algorithm?

Link to comment
Share on other sites

To Watson 11 re one on one stats.

Using Rance as a model for a highly rated defender, it seems that the lower the % the better over a career. But is Taylor better than Rance (questionable based on AA selection), or is it really only a measure of game plan and game style for each player in a team?

Is it better to never be outmarked or to prevail in ground contests?

It would be interesting to see Neville's stats, as he is rarely beaten one on one; my perception is that Lynden Dunn was also very solid one on one, but only at ground level.

Link to comment
Share on other sites

3 minutes ago, tiers said:

To Watson 11.

Your description of the process, i.e. working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game. A post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting the outcomes of future games, or is that not the intention of the algorithm?

Yes after every game the model is updated with those stats and the result.  Because the model is based on many years of data, each game only changes it a small amount.  The player ratings change far more after each game, as they are based on only 2 years and weighted to the most recent. 

The intent of all of this data is simply to make CD lots of money, because the professional clubs pay big $ for it.  The professional clubs pay for it because it gives them unbiased insights into what the really important stats are that give teams an edge.  Watch Moneyball if you have not seen it.  It's where all this data stuff really started.  Clarkson was the first in the AFL to use data and built his 4x premiership list using it.
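On "based on only 2 years and weighted to the most recent": one common way to do that kind of recency weighting is an exponentially decayed average of a player's per-game ratings. The half-life and the sample numbers below are assumptions, not Champion Data's actual scheme.

```python
# Exponentially decayed average of per-game ratings: recent games count more.
# ASSUMPTIONS: the ratings and the 10-game half-life are invented.
import numpy as np

game_ratings = np.array([9.1, 12.4, 7.8, 15.0, 10.2, 13.6])  # oldest -> newest (made up)
half_life = 10.0                                              # in games; assumed
age_in_games = np.arange(len(game_ratings))[::-1]             # newest game has age 0
weights = 0.5 ** (age_in_games / half_life)                   # halve the weight every 10 games

recency_weighted = np.average(game_ratings, weights=weights)
print(f"Recency-weighted rating {recency_weighted:.2f} vs plain mean {game_ratings.mean():.2f}")
```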

  • Like 2
Link to comment
Share on other sites

3 hours ago, Watson11 said:

Can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence. So no one knows for sure what will happen in round 1 next year, let alone the entire season. Where they are good is after the fact: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy. We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

I'm sorry Watson but if you gave me 2 stats for any game I could tell you the result with 100% accuracy.

The score for each team.

And surely if the stats are good they should be able to predict injuries, modified game plans and improvements in players. Extrapolating from what you are saying, it is only a matter of enough data.

Lies, damn lies and statistics!

 

Link to comment
Share on other sites

17 hours ago, ManDee said:

I'm sorry Watson but if you gave me 2 stats for any game I could tell you the result with 100% accuracy.

The score for each team.

And surely if the stats are good they should be able to predict injuries, modified game plans and improvements in players. Extrapolating from what you are saying, it is only a matter of enough data.

Lies, damn lies and statistics!

 

Haha. Maybe you and other Luddites can package that up and sell it to the footy department. 

Who knows, maybe they are predicting improvements in players based on age and games played. I wouldn't know. Big data and machine learning are being applied everywhere whether you think they work or not. Champion Data can never predict injuries, but big European and US teams are measuring every training session and game and have been applying big data and machine learning to non-contact injury prevention for several years. They don't publish much for obvious reasons, but Barcelona FC recently published 2014 data showing they could predict 60% of non-contact injuries and thus prevent them. I'm sure that has improved in the last 4 years. They have huge budgets and are way ahead of the AFL. Maybe this is also happening in the AFL.
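For a sense of what that kind of non-contact injury modelling can look like, here is a hedged sketch: a simple classifier over training-load style features. The features, the synthetic data and the alert threshold are all invented; this is not Barcelona's or any AFL club's actual model.

```python
# Toy non-contact injury risk classifier over training-load features.
# ASSUMPTIONS: features, data and the 0.3 alert threshold are all synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Hypothetical weekly features per player: total GPS distance, high-speed
# running volume, and acute:chronic workload ratio (standardised).
X = rng.normal(size=(800, 3))
true_risk = 1 / (1 + np.exp(-(1.5 * X[:, 2] + 0.5 * X[:, 1] - 2.0)))  # synthetic ground truth
y = rng.binomial(1, true_risk)                 # 1 = non-contact injury that week (synthetic)

clf = LogisticRegression().fit(X, y)
flagged = (clf.predict_proba(X)[:, 1] > 0.3).sum()
print(f"Player-weeks flagged as elevated risk: {int(flagged)} of {len(X)}")
```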

The point that started all of this is that, despite your opinions and comments on the CD list rating, there is no user bias in the analysis of the data at all. It is just data and unbiased processing of it, with all of its limitations, i.e. garbage in, garbage out. I personally think it is pretty good in, pretty good out. It's not perfect.

Time to move on.

  • Like 4
Link to comment
Share on other sites

It would be an interesting exercise to go back and analyse North Melbourne's stats for the 1990s. I bet they would hardly ever be in the top 4 of the CD rankings for most of that decade, yet they were in the top 4 on the ladder for most of it. I think one of the most important stats is how many possessions per goal.

Link to comment
Share on other sites
