Subject Matter Knowledge

Effect size d = 0.09 (Hattie's rank = 125). This seems to conflict with 'Teacher Professional Development', d = 0.62.

In his 2008 Nuthall lecture, Hattie called Teacher Subject Knowledge a disaster!

Professor Tim Cain reviews the 2 meta-analyses that Hattie used here.

Cain was responding to a claim by the Evidence-Based Teachers Network (EBTN), who used Hattie's research. Under the title 'Myths: Teacher Subject Knowledge', they claimed there is no evidence that teachers need to be experts in their subject!

Professor Cain said,

'Immediately, I was doubtful. Yes, some people are experts in their subject and hopeless at teaching. But surely, unless your ambition is merely to teach the exam specifications (and nothing but the exam specs), deeper subject knowledge is helpful, particularly for students who are actually interested in the subject? 
... Hattie analyses no fewer than 931 meta-analyses of education research so he’s clearly an authority on matters educational but, when I checked, only two of these were about teachers’ subject knowledge. The most recent (Ahn & Choi 2004) is about teacher knowledge in mathematics; the other (Druva & Anderson 1983) is about Teacher background in science. 
But hold – is ‘Teacher background in science’ the same thing as subject knowledge? I checked the abstract, it doesn’t list all the background factors but those which are mentioned are, ‘gender, course-work, IQ, etc.’. It doesn’t mention subject knowledge at all. 
In contrast, Ahn & Choi (2004) is about teacher’s subject knowledge but it’s hardly the most comprehensive review. Once you’ve stripped away the conference papers and dissertations, you’re left with only four papers that could reasonably claim to be peer-reviewed. Indeed, Ahn & Choi (2004) is a conference paper which doesn’t mean it’s poor but it hasn’t met the more rigorous criteria for a journal paper. Also, although you might expect the paper to be about whether teachers’ subject knowledge impacts on learning, it isn’t. This is what it says: 
"… the focus of this paper is not on the question of whether teachers’ subject matter knowledge influences student learning. Such an influence is assumed. It is a question of why research has drawn different relationships between teachers’ subject matter knowledge and student learning" (p. 2). 
I’m afraid that this points up some of the cracks in Hattie’s work – good though it is, he sometimes conflates matters which would be better kept distinct and he’s not always as rigorous as he might be about the quality of his meta-analyses.'
I've read the 2 research papers and confirm Professor Cain's analysis.

The research used correlational studies, not the random- or fixed-effects models that Hattie describes in Chapter 2 of VL. Hattie then converts the correlation (r) into an effect size (d) - see here for the problems with converting correlational studies.
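
For readers who want to check the arithmetic, here is a minimal sketch of that conversion. I am assuming the common formula d = 2r / sqrt(1 - r^2); Hattie does not show his working, but this formula reproduces every r-to-d pair quoted in this post:

```python
import math

def r_to_d(r):
    """Convert a correlation r to an effect size d, via d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Druva & Anderson (1983): Hattie's r = 0.03 becomes his reported d = 0.06
print(round(r_to_d(0.03), 2))  # 0.06

# Averaging the two study-level effect sizes (0.12 and 0.06) is presumably
# how the overall d = 0.09 at the top of this post arises
print(round((0.12 + 0.06) / 2, 2))  # 0.09
```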

Ahn & Choi (2004) investigated why teacher subject matter knowledge (SMK) studies produce such conflicting results.

They detail the two major problems, which are the same problems for all of the influences in this blog: student achievement is measured in different ways, and the definition of the influence in question differs across studies.

A great example of the latter is their table (p. 25), which shows some of the different ways SMK is measured (I've modified the table and added the correlation r converted to an effect size d):


SMK indicator (all grades)        r        d
GPA                              -0.06    -0.12
Coursework                       -0.01    -0.02
Major/degree                     -0.06    -0.12
Certification status              0.48     1.09
Tests                             0.11     0.22
Combination                       0.14     0.28
Average                           0.10     0.22

Depending on how SMK is defined, totally different results are obtained. In addition, even within a single category, e.g., Coursework, each study defines the measure differently!
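
Running the same sketch over the r column of Ahn & Choi's table reproduces the d column above and makes the spread plain (the r values are from their p. 25; the conversion formula remains my assumption):

```python
import math

def r_to_d(r):  # same assumed conversion as in the earlier sketch
    return 2 * r / math.sqrt(1 - r ** 2)

# r value for each SMK indicator, from Ahn & Choi (2004), p. 25
smk_indicators = {
    "GPA": -0.06,
    "Coursework": -0.01,
    "Major/degree": -0.06,
    "Certification status": 0.48,
    "Tests": 0.11,
    "Combination": 0.14,
}

d_values = {name: r_to_d(r) for name, r in smk_indicators.items()}
for name, r in smk_indicators.items():
    print(f"{name:22s} r = {r:+.2f}  d = {d_values[name]:+.2f}")

# The 'Average' row: mean of the r column, and mean of the d column
mean_r = sum(smk_indicators.values()) / len(smk_indicators)
mean_d = sum(d_values.values()) / len(d_values)
print(f"{'Average':22s} r = {mean_r:+.2f}  d = {mean_d:+.2f}")
```

Note the spread: depending on the indicator chosen, the 'same' influence runs from slightly negative to d = 1.09, which would sit near the very top of Hattie's rankings.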

Ahn & Choi (2004) acknowledge this problem,
'The use of different indicators may act as an important moderator. Due to the intangible nature of knowledge, researchers need to create variables that are visible and practical in order to investigate relationships and different indicators of subject matter knowledge were used in the studies' (p4).
'Thus, it may be that using different scales to measure teachers’ subject matter knowledge influence inconsistent study findings' (p9).
Hattie's only comment about this study (VL, 2009, p. 113) is that they
'found a low effect size of d = 0.12 between knowing maths and student outcomes.'
Hattie's comment hardly represents the study! The EBTN then uses that result to proclaim this a myth!

Perhaps Hattie and the EBTN should read the authors' conclusion (p. 33),
'the relationship between teachers’ subject matter knowledge and student learning differs depending on measurement variations. Thus we argue that educational researchers and policy makers should carefully examine different aspects of measurement such as scales, types of measures, and unit of analysis before making inferences from the findings.'
Hattie reports a low effect size of d = 0.06, converted from r = 0.03, for the second study he used, Druva & Anderson (1983). Hattie gives no detail on how he derived this value from the many correlations reported in the paper.

However, the same issues arise: how is student achievement measured, and how is SMK defined? As Professor Cain says, the paper does not mention SMK but lists a range of other background characteristics, e.g., from p. 472:

[Table from Druva & Anderson (1983), p. 472: correlations between teacher background characteristics and student outcomes]

As stated, it is difficult to see how Hattie arrived at his overall figure of r = 0.03.

For example, if you average the achievement correlations in the above table, you get r = 0.31, which gives a high d = 0.65.

If you use 'High Complex Questions Knowledge' (r = 0.36) or 'number of education courses' (r = 0.37), you get d close to 0.80, which puts this influence in Hattie's top 10 rankings!
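
The same sketch verifies these alternative readings (the r values are those cited above):

```python
import math

def r_to_d(r):  # same assumed conversion as in the earlier sketches
    return 2 * r / math.sqrt(1 - r ** 2)

# Alternative readings of Druva & Anderson's correlations
for label, r in [
    ("Average of achievement correlations", 0.31),
    ("High Complex Questions Knowledge", 0.36),
    ("Number of education courses", 0.37),
]:
    print(f"{label:36s} r = {r:.2f}  d = {r_to_d(r):.2f}")
# d = 0.65, 0.77 and 0.80 - nowhere near the d = 0.06 Hattie reports
```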

As Myburgh (2016, p. 10) says in his analysis of Hattie,
'No methodology of science is considered respectable in the research community when experiment design is unable to control a rampant blooming of subjectivity.'
