
Featured Replies

18 hours ago, jnrmac said:

2017 CD rankings, FWIW, and where they finished on the ladder after the H&A rounds (in brackets):

  1. GWS (7)
  2. Sydney (6)
  3. West Coast (2)
  4. Hawthorn (4)
  5. Western Bulldogs (13)
  6. Adelaide (12)
  7. Port Adelaide (10)
  8. Collingwood (3)
  9. Geelong (8)
  10. St Kilda (16)
  11. Melbourne (5)
  12. Richmond (1)
  13. North Melbourne (9)
  14. Fremantle (14)
  15. Essendon (11)
  16. Gold Coast (17)
  17. Carlton (18)
  18. Brisbane (15)

Interesting in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th. CD suggest they are the second-best team in 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018, yet they finished 2018 on top of the ladder after the H&A rounds.

Regardless of what measurements CD uses, there are still a huge number of variables, including the draw, which I believe they don't use in this particular ranking.

Edited by jnrmac

 
2 hours ago, La Dee-vina Comedia said:

How unhygienic is it for Champion Data to be drinking our bathwater?

It seems to me that Champion Data's statements are based on raw stats processed by an algorithm. There must be assumptions incorporated into the algorithm, and those assumptions are created by people. Hence, it is not entirely incorrect to refer to CD "analysing" the stats and providing its "opinion". It seems to me, though, that the moment an organisation states it has used an "algorithm" to do something, the credence given to whatever follows increases by about 33%. (Note: that last figure is not created by an algorithm so please treat it with caution.)

CD just use data and make no assumptions. They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network). Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information. They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result. The limitation is that the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc. They do measure a lot, though. The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list. As ProDee says, though, the premier can come out of any of the top 10 teams on that list.
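For readers who want to see the mechanics, here is a minimal sketch of that kind of training in Python. It uses a small feed-forward network rather than a CNN for brevity, and every number, stat count and architecture choice below is invented; Champion Data's actual model and features are not public.

```python
# Sketch only: a tiny network trained on per-game team stats to predict
# the final margin. All shapes and data are hypothetical.
import torch
import torch.nn as nn

N_GAMES, N_STATS = 4000, 60          # invented: ~20 years of games, 60 stats per game
X = torch.randn(N_GAMES, N_STATS)    # stat lines (kicks, marks, tackles, ...)
y = torch.randn(N_GAMES, 1) * 30     # known results: final margins in points

model = nn.Sequential(
    nn.Linear(N_STATS, 64), nn.ReLU(),
    nn.Linear(64, 1),                # output: predicted margin
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)      # how far predictions are from actual results
    loss.backward()
    opt.step()                       # weights adjusted from the data, not by a person

with torch.no_grad():                # after training: feed a game's stat line in,
    print(model(X[:1]).item())       # read off the predicted margin
```

Once trained on real games, the same forward pass could presumably be run over the aggregate stats of a playing list to produce the kind of ranking quoted above.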

 

1 hour ago, jnrmac said:

I am shocked to read this. According to 'Land, we have had the best defence over the last 2 years by a country mile.

Not sure who says we have had the best defence by a country mile. But I have definitely never agreed with your view that the problem has been that our key defenders are hopeless one-on-one defenders.

 
1 hour ago, jnrmac said:

Interesting in the above are Adelaide and Richmond. The Crows were beset by injuries and a dodgy pre-season camp and finished 12th. CD suggest they are the second best team in 2019.

Richmond won the flag in 2017 and were ranked 12th for 2018 yet they finished 2018 on top of the ladder after the H&A.

Regardless of what measurements CD uses there are still a huge amount of variables including the draw - which I believe they don't use in this particular ranking.

CD are not evaluating "teams"; they're ranking "lists" of players who have played at least 5 games over the past two years.

It doesn't take into account game plans, etc.

A team is quite different to a list.

33 minutes ago, Watson11 said:

CD just use data and make no assumptions. They use machine learning, specifically a convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network). Consider it a "brain" that has no preconceived assumptions but hundreds of millions of ways to combine information. They just train the network on the data they have (well over 20 years of comprehensive data), putting in the stats for each game and the result. The limitation is that the stats don't cover everything, as that is impossible to do, and they don't take into account things like injuries, the draw, player improvements etc. They do measure a lot, though. The list is just a ranking based on this information and simply tells us, in an unbiased way, that we have a very, very good list. As ProDee says, though, the premier can come out of any of the top 10 teams on that list.

 

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it was based on pure data you might see something resembling the ladder. Hey, that's a good idea: rate teams based on results.

Edited by ManDee
last sentence


21 minutes ago, ManDee said:

Seriously! What is worth more: a goal or a mark, a hit-out or a handpass? Someone at CD has made that decision; they have weighted all manner of stats based on assumptions. If it was based on pure data you might see something resembling the ladder. Hey, that's a good idea: rate teams based on results.

Nope, that's wrong. They just put all the data in and train the model. The training of the model weights the stats, not a human. Then they put the stats of each list into the model to weight the list. It's just data. As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

I think that injuries dictate a lot of this. Richmond (ranked 12th in 2017) has had a very good run with injuries and this has been reflected on the ladder over the past two seasons. GWS (ranked first in 2017) have had an awful run with injuries and haven't advanced as far as they would have been capable of had they had a full list to pick from.

I think that if we have a good run with injuries, we do have very close to the best list in the competition. Things change drastically as soon as good players start to go down, though.

13 minutes ago, Watson11 said:

Nope, that's wrong. They just put all the data in and train the model. The training of the model weights the stats, not a human. Then they put the stats of each list into the model to weight the list. It's just data. As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place, and how much to weight a handpass, kick or tackle? And who decides what qualifies an action as a "1 percenter"? This is what I meant by referring to assumptions having to be made by the people who build the model in the first place.

 
58 minutes ago, Watson11 said:

Not sure who says we have had the best defence by a country mile. But I have definitely never agreed with your view that the problem has been that our key defenders are hopeless one-on-one defenders.

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

27 minutes ago, Watson11 said:

Nope, that's wrong. They just put all the data in and train the model. The training of the model weights the stats, not a human. Then they put the stats of each list into the model to weight the list. It's just data. As was made clear, it can't account for everything and is imperfect, but there are no assumptions put in.

So for each algorithm they (people) select the hyperparameter values with the best cross-validated score. And if that is not ideal, they (people) fine-tune for the next test. If data doesn't lie, then surely interpretation can.

 

Edit:- Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not, who decides?

Edited by ManDee
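For concreteness, the cross-validated hyperparameter selection ManDee describes looks roughly like this in practice. This is a sketch only: the model, parameter grid and data are all invented, and nothing here reflects Champion Data's actual tooling.

```python
# Sketch: a human chooses the candidate hyperparameters (the grid);
# cross-validation on the data picks the winner.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))             # 500 games, 20 stats each (invented)
y = X @ rng.normal(size=20)                # invented margins

search = GridSearchCV(
    MLPRegressor(max_iter=2000),
    param_grid={"hidden_layer_sizes": [(32,), (64,)], "alpha": [1e-4, 1e-2]},
    cv=5,                                  # 5-fold cross-validation
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)                 # the best cross-validated settings
```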


Just now, La Dee-vina Comedia said:

I don't know what the bolded bit actually means. But doesn't someone have to decide which actions should be measured and recorded in the first place, and how much to weight a handpass, kick or tackle? And who decides what qualifies an action as a "1 percenter"? This is what I meant by referring to assumptions having to be made by the people who build the model in the first place.

 

2 minutes ago, ManDee said:

So for each algorithm they (people) select the hyperparameter values with the best cross-validated score. And if that is not ideal, they (people) fine-tune for the next test. If data doesn't lie, then surely interpretation can.

 

Edit:- Who enters the data? Was it an effective kick/handpass or not? Tap to advantage or not, who decides?

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.). But no human decides how to weight the various stats. When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points). The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.
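That "repeatedly adjusts parameters" step can be shown with a toy example. A minimal sketch, assuming nothing about CD's actual method: a plain linear model fitted by gradient descent on invented data, where the weights are learned from the results rather than set by a person.

```python
# Sketch: gradient descent recovering stat weights from results alone.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))      # 1000 games, 10 stats each (invented)
true_w = rng.normal(size=10)         # the "real" value of each stat, unknown to us
y = X @ true_w                       # known results (margins)

w = np.zeros(10)                     # start knowing nothing
for step in range(500):
    error = X @ w - y                # how wrong the current weights are
    grad = X.T @ error / len(y)      # direction that reduces the error
    w -= 0.1 * grad                  # adjust the weights; no human involved

print(np.abs(w - true_w).max())      # ~0: learned weights match the true ones
```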

Just now, Watson11 said:

 

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.). But no human decides how to weight the various stats. When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points). The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

OK so why are they so bad at predictions?

17 minutes ago, Watson11 said:

 

You are correct in that humans at the moment record the stats (and need to decide what is a 1%er, tap to advantage, effective kick etc.). But no human decides how to weight the various stats. When you train these systems, you start with all the stats as inputs, and you know the result the model needs to give for each game (Team A won by X points). The system repeatedly adjusts parameters (weights) until it gets the correct answer, and repeats that for every game in its database. What you end up with is a model that, given a set of input stats, will predict the result, and that is simply how the lists are ranked.

Thanks, that explanation is both interesting and helpful.

20 minutes ago, ManDee said:

OK so why are they so bad at predictions?

They can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence. So no one knows for sure what will happen in round 1 next year, let alone the entire season. Where they are good is after the fact: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy. We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

43 minutes ago, jnrmac said:

We were 18th in one on one defending in 2017. 14th I recall in 2018.

You can believe what you want.

One-on-one loss stats for 2018 are below, showing the top 20 ranked key position defenders.

Name                2018    Career
Will Schofield      15.1%   19.9%
Jake Lever          15.4%   30.6%
Harry Taylor        16.7%   15.4%
Lachie Henderson    16.7%   22.8%
Sam Frost           17.6%   26.7%
James Frawley       18.4%   25.0%
Alex Keath          18.5%   32.8%
Daniel Talia        18.9%   20.5%
Heath Grundy        20.3%   22.6%
Steven May          21.4%   23.2%
Robbie Tarrant      21.7%   27.8%
Scott Thompson      23.5%   26.7%
Tom Jonas           23.6%   24.3%
Alex Rance          25.0%   21.3%
Phil Davis          25.3%   31.1%
Lynden Dunn         26.1%   22.4%
Oscar McDonald      26.7%   25.1%
David Astbury       27.5%   24.7%
Jeremy McGovern     27.6%   19.4%
Jake Carlisle       27.6%   24.6%
Michael Hurley      27.6%   29.1%


37 minutes ago, ManDee said:

OK so why are they so bad at predictions?

A champion team will beat a team of champions?

4 hours ago, ProDee said:

That's a sub editor's headline.  It's a journo's interpretation.

It's not a comment from CD.

Yeah, I know. This thread is discussing the article posted in the OP.

14 minutes ago, DubDee said:

Yeah, I know. This thread is discussing the article posted in the OP.

Why did you bring up the wording "team to beat" then? What was your query?

Your inference was that it was CD's term. If you knew it was a journo's interpretation, I don't know why you'd make the post you did. It doesn't make sense.

To Watson11:

Your description of the process, i.e. working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game. A post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting outcomes of future games, or is that not the intention of the algorithm?

To Watson11 re one-on-one stats:

Using Rance as a model for a highly rated defender, it seems that the lower the percentage the better over a career. But is Taylor better than Rance (questionable based on AA selection), or is it really only a measure of game plan and game style for each player in a team?

Is it better to never be outmarked or to prevail in ground contests?

It would be interesting to see Neville's stats, as he is rarely beaten one on one; my perception is that Lynden Dunn was also very solid one on one, but only at ground level.


3 minutes ago, tiers said:

To Watson11:

Your description of the process, i.e. working backwards after the fact to find a combination that matches the outcome, would imply that the calculation could produce a different result for each player and each team after each game. A post facto reality check that in 2017 we had a good list.

Unless the results were either aggregated or otherwise moderated over a series of games, how would that assist in predicting outcomes of future games, or is that not the intention of the algorithm?

Yes, after every game the model is updated with those stats and the result. Because the model is based on many years of data, each game only changes it a small amount. The player ratings change far more after each game, as they are based on only 2 years of data and weighted to the most recent games.

The intent of all of this data is simply to make CD lots of money, because the professional clubs pay big $ for it. The professional clubs pay for it because it gives them unbiased insights into which stats really give teams an edge. Watch Moneyball if you have not seen it; it's where all this data stuff really started. Clarkson was the first in the AFL to use data and built his 4x premiership list using it.
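The "weighted to the most recent" idea is simple to sketch. A toy example with an invented half-life and invented per-game rating points, just to show the mechanics of recency weighting:

```python
# Sketch: a player rating that favours recent games via exponential decay.
import numpy as np

game_scores = np.array([72.0, 80.0, 65.0, 90.0, 95.0])  # oldest -> newest (invented)
half_life = 12                                           # invented: weight halves every 12 games
ages = np.arange(len(game_scores))[::-1]                 # newest game has age 0
weights = 0.5 ** (ages / half_life)                      # newest game gets weight 1.0
print(round(np.average(game_scores, weights=weights), 1))
```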

3 hours ago, Watson11 said:

They can't predict injuries, modified game plans, improvements in players or teams, poor form or loss of confidence. So no one knows for sure what will happen in round 1 next year, let alone the entire season. Where they are good is after the fact: when the siren goes at the end of our round 1 game, you could put the stats into one of these models and it would predict the winner with 99% accuracy. We just can't predict what those stats will be before the game with any certainty.

All this tells us is that, based on the data, we have a very, very good list.

I'm sorry Watson, but if you gave me 2 stats for any game I could tell you the result with 100% accuracy.

The score for each team.

And surely, if the stats are good, they should be able to predict injuries, modified game plans and improvements in players. Extrapolating what you are saying, it is only a matter of enough data.

Lies, damn lies and statistics!

 

17 hours ago, ManDee said:

I'm sorry Watson, but if you gave me 2 stats for any game I could tell you the result with 100% accuracy.

The score for each team.

And surely, if the stats are good, they should be able to predict injuries, modified game plans and improvements in players. Extrapolating what you are saying, it is only a matter of enough data.

Lies, damn lies and statistics!

 

Haha. Maybe you and other Luddites can package that up and sell it to the footy department. 

Who knows, maybe they are predicting improvements in players based on age and games played. I wouldn't know. Big data and machine learning are being applied everywhere, whether you think they work or not. Champion Data can never predict injuries, but big European and US teams are measuring every training session and game and have been applying big data and machine learning to non-contact injury prevention for several years. They don't publish much for obvious reasons, but FC Barcelona recently published 2014 data showing they could predict 60% of non-contact injuries and thus prevent them. I'm sure that has improved in the last 4 years. They have huge budgets and are way ahead of the AFL. Maybe this is also happening in the AFL.
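As a toy illustration of the kind of injury-risk modelling described here (the features, numbers and model are all invented; Barcelona's actual work is far more sophisticated and not public):

```python
# Sketch: a classifier estimating non-contact injury risk from training load.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# invented per-player-week features: total distance, high-speed running,
# acute:chronic workload ratio
X = rng.normal(size=(2000, 3))
risk = 1 / (1 + np.exp(-(0.2 * X[:, 0] + 1.5 * X[:, 2])))  # invented ground truth
y = rng.random(2000) < risk                                # 1 = non-contact injury

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:1])[0, 1])   # estimated injury probability, one player-week
```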

The point that started all of this is that, despite your opinion of and comments on the CD list rating, there is no user bias in the analysis of the data at all. It is just data and unbiased processing of it, with all of its limitations, i.e. garbage in, garbage out. I personally think it is pretty good in, pretty good out. It's not perfect.

Time to move on.

 

Shhhhhhh.

Keep the lid on, please.

It would be an interesting exercise to go back and analyse North Melbourne's stats from the 1990s. I bet they would hardly be in the top 4 of CD rankings for most of that decade, yet they were in the top 4 on the ladder for most of it. I think one of the most important stats is how many possessions per goal.

