Effect Size

"If you torture the data long enough, it will confess." Ronald Coarse


The effect size statistic (d) is borrowed from the medical model and measures the effect of a "treatment". For Hattie, a "treatment" is an influence that causes an effect on student achievement. The effect size (d) is equivalent to a z-score of a standard normal distribution. For example, an effect size of 1 means that the score of the average person in the experimental (treatment) group is 1 standard deviation above that of the average person in the control group (no treatment).
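As a rough illustration (the figures below are standard normal-distribution facts, not taken from Hattie), reading d as a z-score tells you where the average treated student would sit within the control group's distribution:

```python
# A minimal sketch: interpreting an effect size d as a z-score on a standard
# normal distribution. With d = 1.0 the average treated student scores above
# roughly 84% of the control group.
from statistics import NormalDist

for d in (0.2, 0.4, 1.0):
    percentile = NormalDist().cdf(d) * 100
    print(f"d = {d:.1f}: average treated student scores above ~{percentile:.0f}% of the control group")
```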

The medical model insists on random assignment of patients to a control or experimental group, as well as "double blinding". That is, neither the participants (control or experimental group) nor the staff know who is getting the treatment. This is done to remove the effect of confounding variables. In addition, educational experiments need to control for the age of the students and the time period over which the study runs (see A Year's Progress).

Few of the studies that Hattie cites use random allocation, double blinding, or control for the age of students or the time over which the study runs. This casts significant doubt on the validity and reliability of his synthesis.

Hattie states that the effect size (d) is calculated in one of two ways (p8):

Method 1 – The Random model: [MEAN (treatment) – MEAN (control)]/ Standard Deviation

Method 2 – The Fixed model: [MEAN (end of treatment) – MEAN (beginning of treatment)]/ Standard Deviation

It is important to note that these are two VERY DIFFERENT ways to calculate the effect size. Hattie states, “the random model allows generalisations to the entire research domain whereas the fixed model allows an estimate” (p12).
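To make the difference concrete, here is a minimal sketch of the two calculations. The function and variable names are illustrative only, and real studies differ in which standard deviation they divide by (pooled, control-group, or pre-test):

```python
import statistics

def effect_size_random(treatment_scores, control_scores):
    """Method 1 - the 'random' model: a treatment group compared with a separate control group."""
    # Which SD to use varies between studies; the control-group SD is used here for simplicity.
    sd = statistics.stdev(control_scores)
    return (statistics.mean(treatment_scores) - statistics.mean(control_scores)) / sd

def effect_size_fixed(post_scores, pre_scores):
    """Method 2 - the 'fixed' model: the same group measured before and after the treatment."""
    sd = statistics.stdev(pre_scores)
    return (statistics.mean(post_scores) - statistics.mean(pre_scores)) / sd
```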

The U.S. Department of Education states that the methodological standards for studies have achieved considerable professional consensus across education and other disciplines (p19).

The main points of those standards:
  • The intervention must be systematically manipulated by the researcher, not passively observed.
  • The dependent variable must be measured repeatedly over a series of assessment points and demonstrate high reliability.

Correlation studies DO NOT meet these requirements.

The U.S. Department of Education reinforces that Method 1 is the gold standard, while Method 2 is accepted with a number of caveats. It uses the phrase 'quasi-experimental design', which compares outcomes for students, classrooms, or schools that had access to the intervention with those that did not but were similar on observable characteristics. In this design, the study MUST demonstrate baseline equivalence.

In other words, the students can be broken into a control and experimental group (without randomization), but the two groups must display equivalence at the beginning of the study. They go into great detail about this here. However, the rating of these types of studies is "Meets WWC Group Design Standards with Reservations."

So at BEST the studies used by Hattie would be classified by The U.S. Department of Education as "Meets WWC Group Design Standards with Reservations."

Problem 1. Hattie mostly uses correlation studies, not true or quasi-experiments:

Hattie admits that if you mix the above two methods up you have significant problems interpreting your data: "combining or comparing the effects generated from the two models may differ solely because different models are used and not as a function of the topic of interest." He goes on to say that he mostly uses Method 2, "the fixed model" (p12).

However, even though Hattie takes the time to explain the above two methods, and the problems that arise if you mix them up, many of the meta-analyses in VL do NOT use randomised control groups (as in Method 1), nor before and after treatment means (as in Method 2), but rather some form of correlation which is later morphed into an effect size!

In his updated version of VL 2012 (summary) he once again emphasises he mostly uses method 1 or 2 above. Again, he makes no mention of using the weaker methodology of correlation (p10).

Correlation studies do not satisfy The U.S. Department of Education's design or quality criteria.

Also, many of the scholars that Hattie cites comment on this problem:

DuPaul & Eckert (2012) - behaviour: 'randomised control trials are considered the scientific "gold standard" for evaluating treatment effects ... the lack of such studies in the school-based intervention literature is a significant concern' (p408).

Kelley & Camilli (2007) - teacher training: studies use different scales (not linearly related) for coding identical amounts of education. This limits confidence in the aggregation of the correlational evidence (p33).

Studies inherently involve comparisons of nonequivalent groups; often random assignment is not possible. But, inevitably, this creates some uncertainty in the validity of the comparison (p33).

The correlation analyses are inadequate as a method for drawing precise conclusions (p34).

Research should provide estimates of the effects via effect size rather than correlation (p33).

Breakspear (2014) states, "Too often policy makers fail to differentiate between correlation and causation" (p13).

Blatchford (2016) commenting on Hattie's class size research, "Essentially the problem is the familiar one of mistaking correlation for causality. We cannot conclude that a relationship between class size and academic performance means that one is causally related to the other" (p94).

This is a major weakness of Hattie's synthesis as correlation studies are not longitudinal, i.e., they don't measure a change in student achievement over time, nor compare with a control group. This makes correlation studies highly susceptible to confounding variables - see ice cream correlation below.

We are constantly warned that correlation does not imply causation! Yet, Hattie confesses: “Often I may have slipped and made or inferred causality” (p237).

Here's a funny example of inferring causation from correlation and another example from TEDx.

Correlation (r) is morphed into an effect size (d) by the formula:

d = 2r / √(1 − r²)
For example, ice cream sales are highly correlated with drownings, r = 0.9+; this would give an absolutely MASSIVE d = 4.10. In the context of Hattie's book, 'ice cream' would be the largest influence on 'drowning'. But this is obviously absurd! The issue here is the major confounding variable - heat. Most of the correlation studies that Hattie cites have major confounding variables analogous to heat.

Note that a rather weak correlation of r = 0.45 gets converted into a massive effect size of d = 1.00 (in Hattie's terms, 2.5 years' progress!).
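A minimal sketch of that conversion, using the standard formula d = 2r / √(1 − r²) applied to the two examples above:

```python
import math

def r_to_d(r):
    """Convert a correlation coefficient r into Cohen's d."""
    return 2 * r / math.sqrt(1 - r ** 2)

print(round(r_to_d(0.90), 2))  # ~4.1 - the absurd 'ice cream vs drowning' effect size
print(round(r_to_d(0.45), 2))  # ~1.0 - a modest correlation becomes a 'massive' effect size
```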


I have not been able to find an acceptable rationale for converting correlation into effect size using this formula (Hattie certainly does not discuss the issue).

Problem 2. Student Achievement is measured in different ways:

The effect size should measure the change in student achievement, but achievement is measured in many different ways and often not at all. For example, one study measured IQ while another measured hyperactivity, so comparing these effect sizes is the classic 'apples versus oranges' problem.

Once again, many of the scholars that Hattie uses comment on this problem:

DuPaul & Eckert (2012) "It is difficult to compare effect size estimates across research design types. Not only are effect size estimates calculated differently for each research design, but there appear to be differences in the types of outcome measures used across designs" (p408).

Kelley & Camilli (2007) "methodological variations across the studies make it problematic to draw coherent generalisations. These summaries illustrate the diversity in study characteristics including child samples, research designs, measurement, independent and dependent variables, and modes of analysis" (p7).

Dr Jonathan Becker in his critique of Marzano (but relevant for Hattie) states, "Marzano and his research team had a dependent variable problem. That is, there was no single, comparable measure of 'student achievement' (his stated outcome of interest) that they could use as a dependent variable across all participants.  I should note that they were forced into this problem by choosing a lazy research design [a meta-analysis].  A tighter, more focused design could have alleviated this problem."

Problem 3. Invalid beginning and end of treatments (Method 2):

Hattie re-interprets many meta-analyses that don't use a beginning/end of treatment methodology. The behaviour influences contain a lot of examples:

Reid, et al., (2004) compared the achievement of students labelled with 'emotional/behavioural' disturbance (EBD) with a 'normative' group. They used a range of measures to determine EBD, e.g., students who are currently in programs for severe behaviour problems e.g., psychiatric hospitals (p132).

The negative effect size indicates the EBD group performed well below the normative group. The authors conclude: "students with EBD performed at a significantly lower level than did students without those disabilities across academic subjects and settings" (p130).

Hattie interprets the EBD group as the end of the treatment group and the normative as the beginning of treatment group. Hattie concludes that decreasing disruptive behaviour, with d = -0.69, decreases achievement significantly. This was NOT the researcher's interpretation (p133).

Yet, when Hattie uses Frazier et al. (2007), the control and experimental groups are reversed: the ADHD group was the control group and the normative group was the experimental group. This then gives positive effect sizes (p51), which Hattie interprets as improving academic achievement!

Another example, using the influence of 'self-report grades': Falchikov and Boud (1989) state, "Given that self-assessment studies are, in most cases, not 'true' experiments and have no experimental or control groups, …, staff markers were designed as the control group and self-markers the experimental group" (p416).

So in this instance, a large effect size means the students overestimate their ability compared to staff assessment. Not, as Hattie interprets, that self-assessment improves or influences your achievement.
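A toy illustration with made-up numbers (not from Reid et al. or Frazier et al.): whichever group is labelled the 'treatment' group determines the sign of d, even though no treatment has actually taken place:

```python
import statistics

ebd_scores = [78, 82, 75, 80, 79]        # hypothetical scores for the EBD group
normative_scores = [95, 98, 92, 97, 93]  # hypothetical scores for the comparison group

sd = statistics.stdev(ebd_scores + normative_scores)
d = (statistics.mean(ebd_scores) - statistics.mean(normative_scores)) / sd

print(round(d, 2))   # negative if the EBD group is labelled the 'treatment' group
print(round(-d, 2))  # the same comparison, relabelled, gives a positive effect size
```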

Problem 4. Controlling for other variables:

Related to Problem 1 - research designers usually put a lot of thought into controlling for other variables. Random assignment and double blinding are the major strategies used. Unfortunately, most of the studies Hattie cites do not use these strategies, which introduces major confounding variables. Class size is a good example: many studies compare the achievement of small versus large classes, but many schools assign lower-achieving students to smaller classes rather than using random assignment.

Hattie rarely acknowledges this problem now, but in earlier work, Hattie & Clifton (2004) Identifying Accomplished Teachers, they stated: "student test scores depend on multiple factors, many of which are out of the control of the teacher" (p320).

Another pertinent example is from Kulik and Kulik (1992) - see ability grouping:
Two different methods produced distinctly different results. Each of the 11 studies with same-age control groups showed greater achievement for the accelerated students; the average effect size in these studies was 0.87.

However, if the (usually one year older) students are used as the control group, the average effect size in the 12 studies was 0.02. Hattie uses this figure in the category 'ability grouping for gifted students'.

Hattie does not include the d = 0.87. I think a strong argument can be made that d = 0.87 should be reported instead of d = 0.02, as the accelerated students should be compared to the student group they came from (same-age students) rather than the older group they are accelerating into.

In addition, a study may be measuring the combination of many influences. Using class size as an example, how do you remove the other influences from the study - time on task, motivation, behaviour, teacher subject knowledge, feedback, home life, welfare, etc.?

Hattie wavers on this major issue. In his commentary on 'within-class grouping', regarding Lou et al. (1996), Hattie does report some degree of additivity: "this analysis shows that the effect of grouping depends on class size. In large classes (more than 35 students) the mean effect of grouping is d = 0.35, whereas in small classes (less than 26 students) the mean effect is d = 0.22" (p94).

But in his summary, he states, "It is unlikely that many of the effects reported in this book are additive" (p256).

Problem 5. Sampling students from abnormal populations:

Sampling subjects from abnormal populations is a well-known issue for meta-analyses for a number of reasons: effect sizes are erroneously larger (due to a smaller standard deviation) and confounding variables are exacerbated. Using such samples makes it invalid to generalise influences to the broader student population.
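A made-up numerical example of the smaller-standard-deviation problem (the figures are hypothetical, purely for illustration): the same raw gain looks far larger when the sample is narrow:

```python
gain = 5.0            # hypothetical 5-point gain in both scenarios
sd_general = 15.0     # spread typical of a broad student population
sd_restricted = 5.0   # spread in a narrow, atypical sample

print(round(gain / sd_general, 2))     # d = 0.33
print(round(gain / sd_restricted, 2))  # d = 1.00 - the identical gain appears three times larger
```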

Hattie ignores this issue and uses meta-analyses from abnormal student populations, e.g., ADHD, hyperactive, emotional/behavioural disturbed and English Second Language students. Also, he uses abnormal subjects from NON-student populations, e.g., doctors, tradesmen, nurses, athletes, sports teams and military groups.

Professor John O'Neill's letter to the NZ Education Minister sets out major issues with Hattie's research. One of the issues he emphasises is Hattie's use of students from abnormal populations.

Problem 6. Use of the same data in different meta-analyses:

Kelley & Camilli (2007) - teacher training: many studies use the same data sets. To maintain statistical independence of the data, only one set of data points from each data set should be included in the meta-analysis (p25).

Hacke (2010) "Independence is the statistical assumption that groups, samples, or other studies in
the meta-analyses are unaffected by each other" (p83).

This is a major problem in Hattie's synthesis as many of the meta-analyses that Hattie averages use the same datasets - e.g., much of the same data is used in Teacher Training as is used in Teacher Subject Knowledge.

Problem 7. Different weightings applied to effect sizes:

Scholars of fixed-effect methods recommend weighting (Pigott, 2010, p9): larger studies are then given greater weight. If this were done it would affect all of Hattie's reported effect sizes, and his rankings would totally change.

Professor Peter Blatchford also warns of this problem of studies of varying quality being given equal weighting:

"unfortunately many reviews and meta-analyses have given them equal weighting" (p15).

Another type of adjustment that researchers use is Hedges' g, which corrects for smaller sample sizes, e.g., Hacke (2010) (p77).
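A sketch of both adjustments, using standard textbook formulas rather than anything from Hattie (the study numbers below are hypothetical): inverse-variance weighting pulls the average towards the larger study, and Hedges' g shrinks d slightly for small samples:

```python
import math

def weighted_mean_d(studies):
    """studies: list of (d, n1, n2). Inverse-variance weighting gives larger studies more weight."""
    num = den = 0.0
    for d, n1, n2 in studies:
        var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))  # approximate variance of d
        weight = 1.0 / var
        num += weight * d
        den += weight
    return num / den

def hedges_g(d, n1, n2):
    """Hedges' small-sample correction: g is slightly smaller than d for small studies."""
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))

studies = [(0.9, 15, 15), (0.2, 400, 400)]           # a small study and a large study
print(round(sum(d for d, _, _ in studies) / 2, 2))   # unweighted mean = 0.55
print(round(weighted_mean_d(studies), 2))            # weighted mean ~0.22, dominated by the large study
print(round(hedges_g(0.9, 15, 15), 2))               # ~0.88 after the small-sample correction
```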

Problem 8. Quality of Studies:

The Encyclopedia of Measurement and Statistics outlines the problem of quality: "... many experts agree that a useful research synthesis should be based on findings from high-quality studies with methodological rigour. Relaxed inclusion standards for studies in a meta-analysis may lead to a problem that Hans J. Eysenck in 1978 labelled as 'garbage in, garbage out'."

Or, in modern terms, "garbage in, gospel out" - Dr Gary Smith (2014) (p25).

Many of the researchers that Hattie uses warn about the quality of studies, e.g., Slavin (1990), "any measure of central tendency in a meta-analysis ... should be interpreted in light of the quality and consistency of the studies from which it was derived, not as a finding in its own right" (p477).

"best evidence synthesis” of any education policy should encourage decision makers to favour results from studies with high internal and external validity—that is, randomised field trials involving large numbers of students, schools, and districts."  Slavin (1986)

Newman (2004) repeats what many scholars comment, "it could also be argued that the important thing is how the effect size is derived. If the effect size is derived from a high quality randomised experiment then a difference of any size could be considered important" (p200).

Hacke (2010) states the research design can also be a major source of variance in studies (p56).

However, once again, Hattie ignores these issues and makes an astonishing caveat: there is "... no reason to throw out studies automatically because of lower quality" (p11).

This has led to significant criticism of VL:


Emeritus Professor Ivan Snook et al.: "Any meta-analysis that does not exclude poor or inadequate studies is misleading and potentially damaging" (p2).

Professor Ewald Terhart: "It is striking that Hattie does not supply the reader with exact information on the issue of the quality standards he uses when he has to decide whether a certain research study meta-analysis is integrated into his meta-meta-analysis or not. Usually, the authors of meta-analyses devote much energy and effort to discussing this problem because the value or persuasiveness of the results obtained are dependent on the strictness of the eligibility criteria" (p429).

Kelvin Smythe: "I keep stressing the research design and lack of control of variables as central to the problem of Hattie's research ..."

David Didau gives an excellent overview of Hattie's effect sizes, cleverly using the classic clip from the movie Spinal Tap, where Nigel tries to explain why his guitar amp goes up to 11.

Dr Neil Hooley, in his review of Hattie, talks about the complexity of classrooms and the difficulty of controlling variables: "Under these circumstances, the measure of effect size is highly dubious" (p44).

Neil Brown"My criticisms in the rest of the review relate to inappropriate averaging and comparison of effect sizes across quite different studies and interventions."

The US Government-funded study on educational effect size benchmarks states:
"The usefulness of these empirical benchmarks depends on the degree to which they are drawn from high-quality studies and the degree to which they summarise effect sizes with regard to similar types of interventions, target populations, and outcome measures." 

and also defined the criteria for accepting a research study, i.e., the quality needed (p33):
  • Search for published and unpublished research dated 1995 or later.
  • Specialised groups such as special education students, etc. were not included.
  • Also, to ensure that the effect sizes extracted from these reports were relatively good indications of actual intervention effects, studies were restricted to those using random assignment designs (that is, Method 1) with practice-as-usual control groups and attrition rates no higher than 20%.

NOTE: using these criteria virtually NONE of the 800+ meta-analyses in VL would pass the quality test!

Professor Dylan Wiliam, who produced the seminal research 'Inside the Black Box', reflects on his research and cautions (click here for full quote):

"it is only within the last few years that I have become aware of just how many problems there are. Many published studies on feedback, for example, are conducted by psychology professors, on their own students, in experimental sessions that last a single day. The generalizability of such studies to school classrooms is highly questionable.

In retrospect, therefore, it may well have been a mistake to use effect sizes in our booklet 'Inside the black box' to indicate the sorts of impact that formative assessment might have. 

I do still think that effect sizes are useful ... If the effect sizes are based on experiments of similar duration, on similar populations, using outcome measures that are similar in their sensitivity to the effects of teaching, then I think comparisons are reasonable. Otherwise, I think effect sizes are extremely difficult to interpret."

But Hattie uses millions of students!


The large number of students used in the synthesis seems to excuse Hattie from the usual validity and reliability requirements. For example, Kuncel (2005) has over 56,000 students and reports the highest effect size of d = 3.10, but it does not measure what Hattie says - a self-report grade predicting future achievement - but rather students' honesty about their GPA from a year ago. So this meta-analysis is not a valid or reliable study for the influence of self-report grades, and the 56,000 students are irrelevant. Note that many of the controversial influences have only 1 or 2 meta-analyses as evidence.

David Weston gives a good summary of issues with effect sizes:
2min - contradictory results of studies are lost by averaging
4min 30sec - reports of studies are too simplified and detail is lost
5min - what does effect size mean?
6min 15sec - Hattie's use of effect size
7min - issues with effect size
8min 40sec - problems with the spread of scores (standard deviation)
9min 30sec - the need to check the details of Hattie's studies
10min 30sec - the problem with Hattie's hinge point d = 0.40 (see A Year's Progress)
16min 50sec - Prof Dylan Wiliam's seminal work, 'Inside the Black Box', is an example of research that has been oversimplified by educationalists - e.g., 'writing objectives on the board' - while other more important findings have been lost
18min - context is king

David Weston uses a great analogy of a chef with teaching (5min onwards).

A short video on the issues with Social Science Research