Concentration, Persistence and Engagement

Five meta-analyses were used to obtain d = 0.48 (Hattie's rank = 49):

1. Feltz, D., & Landers, D. (1983). The effects of mental practice on motor skill learning and performance. Journal of Sport Psychology, 5, 25-57.
2. Datta, D., & Narayanan, V. (1989). A meta-analytic review of the concentration-performance relationship. Journal of Management, 15(3), 469-483.
3. Kumar, D. (1991). A meta-analysis of the relationship between science instruction and student engagement. Educational Review, 43(1), 49-61.
4. Cooper, H., & Dorr, N. (1995). Race comparisons on need for achievement. Review of Educational Research, 65(4), 483-508.
5. Mikolashek, D. (2004). A meta-analysis of empirical research studies on resilience among students at risk for school failure. Unpublished Ed.D. dissertation, Florida International University, FL.

Schulmeister & Loviscach (2014), in their critical comments on the study "Lernen sichtbar machen" (Visible Learning), examine these meta-analyses in detail and state that none of them operationalises concentration, persistence or engagement in terms of achievement (p. 126).

The Cooper & Dorr study is not relevant to this category, as it compares differences in achievement motivation between ethnic groups. The study serves to refute an earlier narrative review by Graham and, against the background of ethnic differences, looks for differences by age, school level, socioeconomic status, etc. (p. 126).

Mikolashek's meta-analysis summarizes 28 resilience studies in which resilience was operationalized through test results and grades, but not through a targeted stress test. There is no reference to psychological studies that measure stress as a construct; for Mikolashek, resilience simply means meeting performance requirements (p. 126).

Datta & Narayanan used the term "concentration" in its industrial-economics sense, i.e. market concentration. It appears that neither Hattie nor his team really read the study. It belongs neither in this category nor in the book at all! (p. 127)

'After studying the five meta-analyses in this group one can become convinced ... that one should not have formed this group at all' (p. 127).
Dodiscimus (2014c) also looks at the studies in detail:
"Unfortunately, the meta-analyses referenced by Hattie don’t really tell us very much about the potential effect of increasing concentration, persistence, or engagement.

Kumar (1991) looked at the correlation between different teaching approaches (in science) and student engagement. Now student engagement might be a good thing but, as Hattie points out in his commentary, "we should not make the mistake…of thinking that because students look engaged…they are…achieving."
Kumar has nothing to say about achievement in this meta-analysis. Also, although there was quite a big range of correlations (0.35 to 0.73) across the different teaching approaches, the probability of these differences being random is too high to claim statistical significance at a reasonable level – the perennial problem of typical sample sizes in education research.

Datta and Narayanan (1989) were looking at the relationship between concentration and performance, but in work settings; maybe that’s transferable, but maybe not.

Equally, Feltz and Landers (1983) were looking at the mental visualisation of motor tasks so, apart from subjects like PE, dance, and possibly D&T, I cannot see the relevance to teaching.

Finally, Cooper and Dorr (1995) looked at whether there was a difference between ethnic groups, which again doesn’t tell us anything about how we might improve achievement, particularly since there was little difference found.

There is one more meta-analysis in the synthesis although it doesn’t feature in Hattie’s commentary; this is Mikolashek (2004). This was a meta-analysis of factors affecting the resilience – I think actually normal academic success as a proxy for resilience – of at-risk students. The abstract seems to suggest that internal and family factors are significant but, again, there is no measurement of the effect of anything a teacher might do to enhance these.

Looking at the overall picture here, I think Hattie has pushed the envelope too far. One of the criticisms of meta-analysis is the danger of amalgamating studies that were actually looking at different things, e.g. oral feedback, written feedback, peer feedback. I think it’s fine to lump all feedback together if it is measured by roughly the same outcome, provided this limitation is made clear. The next stage might be to unpick whether all forms of feedback are equally effective, but unless it is already clear from the initial analysis that one form is something like 0.20, another 0.60, and the third 1.00 (average effect size = 0.60), knowing that feedback is worth a more detailed look seems helpful.

However, for this influence, I think the ‘comparison of apples and oranges’ charge is a justified criticism. The five meta-analyses are all looking at different things, in different contexts, and with several different outcome measures. I cannot see the value in averaging the effect sizes and am starting to wonder how much more of this I’m going to find as I continue to work through the book."
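To make the averaging point concrete, here is a minimal sketch in Python of the kind of unweighted averaging being criticised. The three feedback figures are the purely illustrative numbers from the quoted passage above (not results from any study), and the "form A/B/C" labels are hypothetical.

```python
# Minimal sketch of unweighted effect-size averaging (illustrative only).
# The three values below are the hypothetical feedback figures used in the
# quoted passage; they are not data from the five meta-analyses.
from statistics import mean

illustrative_effect_sizes = {
    "form A (e.g. oral feedback)": 0.20,
    "form B (e.g. written feedback)": 0.60,
    "form C (e.g. peer feedback)": 1.00,
}

overall_d = mean(illustrative_effect_sizes.values())
print(f"Unweighted average effect size: {overall_d:.2f}")  # prints 0.60

# The d = 0.48 reported for concentration/persistence/engagement is likewise
# an average across the five meta-analyses listed at the top of this page;
# the critics' objection is that those five measure different constructs in
# different contexts, so the single averaged figure has no clear meaning.
```

Whether such an average tells you anything depends entirely on whether the pooled studies measure comparable constructs against comparable outcomes, which is precisely what both critiques dispute for this influence.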

1 comment:

1. As Kumar stated, "we should not make the mistake…of thinking that because students look engaged…they are…achieving." I think that possible evidence that a student is achieving would be that the student could engage as an individual to create some product as a result of processes such as concentration, engagement, organization and thinking.
