Hattie's Defenses

In Hattie's three published defenses (2010, 2015 & 2017), he never addressed the following:

Specific examples of misrepresentation or the use of studies not measuring the influence in question.

Use of studies on non-school populations, e.g., doctors, tradesmen, military personnel, university students, etc.

Use of general or specific populations of students, such as those with learning disabilities, or of studies of specific learning areas.

The many calculation errors (apart from those in the CLE, the Common Language Effect size).

Use of studies not measuring achievement but something else, e.g., behavior, engagement, IQ, etc.

Equal weighting of meta-analyses, whether they synthesized 4 or 4,000 studies (illustrated in the sketch after this list).

Major issues of range restriction and control groups, both of which have been shown to significantly change effect size calculations.

Problem of the age of the students and the time over which studies ran.

And many, many more...
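To see why the equal-weighting issue flagged above matters, here is a minimal Python sketch. The effect sizes and study counts are hypothetical, invented purely for illustration:

```python
# Minimal sketch, hypothetical numbers: averaging meta-analyses with
# equal weight lets a tiny synthesis count as much as a huge one.
metas = [
    {"d": 0.80, "n_studies": 4},     # hypothetical small meta-analysis
    {"d": 0.20, "n_studies": 4000},  # hypothetical large meta-analysis
]

# Unweighted mean across meta-analyses (the approach under criticism):
unweighted = sum(m["d"] for m in metas) / len(metas)

# Mean weighted by how many studies each meta-analysis synthesized:
total = sum(m["n_studies"] for m in metas)
weighted = sum(m["d"] * m["n_studies"] for m in metas) / total

print(f"unweighted mean d = {unweighted:.2f}")  # 0.50
print(f"weighted mean d   = {weighted:.2f}")    # 0.20
```

With equal weighting, the 4-study meta-analysis drags the average from about 0.20 up to 0.50, even though it contributes a thousandth of the evidence.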


Profs Snook, Clark, Harker, Anne-Marie O’Neill and John O’Neill respond to Hattie's 2010 defense in 'Critic and Conscience of Society: A Reply to John Hattie' (p97),
'In our view, John Hattie’s article has not satisfactorily addressed the concerns we raised about the use of meta-analyses to guide educational policy and practice.'
Prof Arne Kåre Topphol responds to Hattie's defense,
'Hattie has now given a response to the criticism I made. What he writes in his comment makes me even more worried, rather than reassured.'
Darcy Moore posts,
'Hattie’s [2017] reply to Eacott’s paper does not even remotely grapple with the issues raised.'
Prof Eacott also responded to Hattie's defense,
'Disappointed that SLAM declined my offer to write a response to Hattie's reply to my paper. Dialogue & debate is not encouraged/supported.'
Prof Dylan Wiliam casts significant doubt on Hattie's entire model by arguing that the age of the students and the time over which each study runs are important components contributing to the effect size.

Supporting Prof Wiliam's contention is the extensive data collected to construct the United States Department of Education's effect size benchmarks, which show a huge variation in effect sizes from younger to older students.

This demonstrates that age is a HUGE confounding variable, or moderator: to compare effect sizes, studies need to control for the age of the students and the time over which the study ran. Otherwise, differences in effect size can simply reflect the age of the students measured!
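To make the confound concrete, here is a minimal Python sketch. The annual-growth figures are hypothetical, merely shaped like the benchmark tables rather than copied from them; the point is only that an identical d represents very different amounts of learning at different ages:

```python
# Minimal sketch, assuming hypothetical annual-growth figures shaped
# like the U.S. benchmark tables (not the published values): a fixed
# effect size means very different things at different ages.
annual_growth_d = {
    "grade 1": 1.50,   # hypothetical: young students gain quickly
    "grade 5": 0.40,
    "grade 10": 0.20,  # hypothetical: older students gain slowly
}

HINGE = 0.40  # Hattie's hinge point for a worthwhile effect

for grade, growth in annual_growth_d.items():
    years = HINGE / growth  # years of typical growth the hinge equals
    print(f"{grade}: d = {HINGE:.2f} equals {years:.1f} years of typical growth")
```

Under these illustrative figures, the same hinge point of 0.40 amounts to a few months of typical growth for a young student but two full years for an older one.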

Given Hattie's conclusion in his 2015 defense (p8),
'The main message remains, be cautious, interpret in light of the evidence, search for moderators, take care in developing stories, welcome critique, ...'
I'm extremely surprised Hattie has not addressed the massive implications of this evidence for his work. All he says in his summary, VL 2012 (p14), is:
'the effects for each year were greater in younger and lower in older grades ... we may need to expect more from younger grades (d > 0.60) than for older grades (d > 0.30).'
Hattie finally agrees (2015 defense, p3) with Prof Wiliam:
'Yes, the time over which any intervention is conducted can matter (we find that calculations over less than 10-12 weeks can be unstable, the time is too short to engender change, and you end up doing too much assessment relative to teaching). These are critical moderators of the overall effect-sizes and any use of hinge = 0.4 should, of course, take these into account.'
Yet Hattie DOES NOT take this into account: there has been no attempt to detail and report the time over which the studies ran or the age group of the students in question, nor to adjust his previous rankings or conclusions.
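A minimal sketch of the problem, using two hypothetical studies: without the duration being reported, identical reported effect sizes can mask a five-fold difference in the rate of change (the crude per-10-week figure is only one possible adjustment):

```python
# Minimal sketch, two hypothetical studies: identical reported effect
# sizes over very different time spans hide the difference in rate.
studies = [
    {"name": "A", "d": 0.40, "weeks": 8},   # below Hattie's own
                                            # 10-12 week stability floor
    {"name": "B", "d": 0.40, "weeks": 40},  # roughly a school year
]

for s in studies:
    rate = s["d"] * 10 / s["weeks"]  # crude effect per 10 weeks
    print(f"study {s['name']}: reported d = {s['d']:.2f}, "
          f"about {rate:.2f} per 10 weeks")
```

Ranking A and B as equals, as a raw comparison of effect sizes does, ignores exactly the moderators Hattie concedes are critical.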

Professor Dylan Wiliam summarises, 
'the effect sizes proposed by Hattie are, at least in the context of schooling, just plain wrong. Anyone who thinks they can generate an effect size on student learning in secondary schools above 0.5 is talking nonsense.'
The U.S. Department of Education's benchmark effect sizes support Wiliam's contention.
