Effect Size d = 0.15 (Hattie's Rank = 120)

Two meta-analyses were used, but the same issues arise: the use of correlational studies rather than true experiments; the mixing of achievement with other measures (e.g., withdrawal behaviour, substance use, motivation, satisfaction); the influence of confounding variables; inappropriate averaging; and calculation errors.

1. Eby et al. (2007). Does Mentoring Matter?

Eby lists over 60 different correlations, but Hattie does not give details of how he arrived at his effect size of d = 0.16. It appears he averaged the correlations across ALL mentoring outcomes, including NON-performance outcomes (substance use, stress, motivation, interpersonal relations, etc.).

The ONE correlation with regard to ACADEMIC ACHIEVEMENT is r = 0.19, which converts to d = 0.39, more than TWICE the effect size that he reports! See the table below from p262.
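The post does not show which conversion formula is used, so as an assumption, a minimal sketch using the standard Cohen conversion d = 2r / sqrt(1 - r^2) reproduces both figures quoted in this post (r = 0.19 here, and r = 0.26 from the Eby et al. 2013 study discussed later):

```python
import math

def r_to_d(r: float) -> float:
    """Convert a Pearson correlation r to Cohen's d via the
    standard formula d = 2r / sqrt(1 - r^2) (an assumption about
    which conversion was applied; the post does not state it)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Academic-achievement correlation from Eby et al. (2007), p262
print(round(r_to_d(0.19), 2))  # 0.39 -- more than twice Hattie's d = 0.16

# Amount-of-mentoring correlation (for learning) from Eby et al. (2013), p458
print(round(r_to_d(0.26), 2))  # 0.54
```

Both outputs match the d values quoted in the post, which supports reading Hattie's d = 0.16 as an average over all outcomes rather than a converted achievement correlation.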

Professor Eby concludes: "academic mentoring has stronger associations with outcomes than does youth mentoring"  (p263).

Eby also reports several limitations: given the correlational nature of many of the studies, the findings do not provide unambiguous evidence that mentoring causes outcomes. Rather, this points to the need for more controlled designs: control groups, random allocation, and controlling for TIME! (p265)

I contacted Professor Eby, and she explained another confounding variable: the time students are mentored. In this study, she considered only whether a student was mentored or NOT. In later studies (see below) she found "stronger effects when examining the amount of mentoring support provided."

2. DuBois et al. (2002). Effectiveness of Mentoring Programs for Youth.

Hattie reports an effect size of d = 0.13, but, as with the Eby study, a number of effect sizes are reported and Hattie does not detail which one he used. It does seem he picked a particular one (which he did not do in the study above); see the table below (p176).

DuBois et al. conclude that inferences regarding the influence of different variables are tentative because of the correlational nature of the studies, and that better-controlled studies reveal higher effect sizes (p191).

3. Eby et al. (2013). An Interdisciplinary Meta-Analysis of Potential Antecedents, Correlates, and Consequences of Protégé Perceptions of Mentoring.

Eby et al. focus on the amount of time spent mentoring and find an average correlation (for learning) of r = 0.26, which converts to d = 0.54 (p458). Hattie does not cite this study.

Eby et al. state that "there is substantial variability in the nature and quality of interactions among mentors ..." (p444). The following diagram shows the complexity and variability involved (p459):

Once again, a single effect-size number misrepresents the complexity of the influence. Eby et al. conclude:

"The results of this research must be viewed within the context of the limitations associated with the literature that comprised this review. The vast majority of effects reported were based on cross-sectional data (93%) [i.e., correlational studies] and none were based on data from experimental designs, precluding our ability to make causal inferences" (p463).