Subject Matter Knowledge

Effect Size d = 0.09  (Hattie's Rank = 125)

Hattie cited these 2 meta-analyses in Visible Learning:

- Druva & Anderson (1983) - teacher background in science, d = 0.06
- Ahn & Choi (2004) - teacher subject matter knowledge in mathematics, d = 0.12

Hattie averaged the 2 effect sizes, (0.06 + 0.12) / 2 = 0.09, to get his overall result. However, as the detail of the studies below shows, it is not at all clear how Hattie derived either 0.06 or 0.12.

Hagemeister (2020) shows this is a consistent problem with Hattie's work,
"...no details of how Hattie calculated the effect sizes he ascribes to the studies listed... rendering fellow researchers unable to engage with and interrogate his findings on a scientific basis." (p. 4)
For example, Ahn & Choi detail how different results are derived depending on how Subject Matter Knowledge is measured. They publish many different tables; in the one from p. 25 they report correlations (r). Hattie usually converts r to his effect size, Cohen's d, so I have added the equivalent Cohen's d to the table:

[Table from Ahn & Choi (2004), p. 25: correlations (r) between teachers' Subject Matter Knowledge and student achievement, by type of measure, with the equivalent Cohen's d added.]
Note: There are huge issues with Hattie converting r to Cohen's d and then comparing this with Effect Sizes calculated in different ways - see correlation.
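For readers who want to check the arithmetic, the conversion Hattie is generally assumed to use is d = 2r / √(1 − r²). A minimal Python sketch (my own illustration; Hattie does not publish his calculation steps):

```python
import math

def r_to_d(r: float) -> float:
    """Convert a correlation r to Cohen's d via d = 2r / sqrt(1 - r^2),
    the standard conversion Hattie is generally assumed to use."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Small correlations roughly double when converted:
print(round(r_to_d(0.06), 2))  # 0.12
print(round(r_to_d(0.03), 2))  # 0.06
```

Note that this formula assumes equal group sizes and, more importantly, says nothing about whether correlational results are comparable with effect sizes from experimental studies, which is the core problem flagged above.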

Ahn & Choi summarise,
"Thus, it may be that using different scales to measure teachers’ subject matter knowledge influence inconsistent study findings" (p. 9).
"the relationship between teachers’ subject matter knowledge and student learning differs depending on measurement variations. Thus we argue that educational researchers and policy makers should carefully examine different aspects of measurement such as scales, types of measures, and unit of analysis before making inferences from the findings." (p. 33)
Yet Hattie Regularly Makes Critical Inferences

In Hattie's 2008 Nuthall lecture, he called Teacher Subject Knowledge a disaster!

[Slides from the 2008 Nuthall lecture labelling Teacher Subject Matter Knowledge a 'disaster'.]
Hattie continued to use these 'disaster' slides through to 2012.
Hattie regularly uses this type of polemic. In his 2011 inaugural address as Director of the powerful Australian Institute for Teaching and School Leadership (AITSL), he said,
"Teachers’ subject matter knowledge counts for zero...

The most bankrupt institution I know in this business is teacher education."
Another Conflict with Other Researchers and Teachers

Hattie's downgrading of Teacher Subject Matter Knowledge is at odds with other key research bodies, e.g.,

The OECD lists Teacher Content Knowledge as one of the MOST important factors in improving student achievement.

Also, the large UK-based Evidence Based Education lists teachers' understanding of the content as one of their top strategies.

Kirschner & Neelen (2023) detail the evidence for the importance of Subject Matter Knowledge & Pedagogical Knowledge and contradict Hattie,
"Subject knowledge is the knowledge of the content of the domain to be taught which teachers must master to teach at all. In chapter 16 of our book How Teaching Happens (2022), on Lee Shulman’s 1987 article ‘Knowledge and teaching: Foundations of the new reform’, Carl Hendrick, Jim Heal and I summarised this as ‘why you can’t teach what you don’t know’."
Hattie ranks 'Teacher Professional Development' high, with an effect size d = 0.62, and this seems to contradict his low ranking of Subject Matter Knowledge. Although 'Teacher Professional Development' covers many aspects of teaching, such as behaviour management, much of it is also about improving Subject Matter Knowledge.

Another contradiction is Hattie's high ranking of 'Direct Instruction'; presumably a teacher would need Subject Matter Knowledge in order to instruct directly.

My own experience as a maths & science teacher tells me it is ludicrous to suggest 'Subject Matter Knowledge' is not important.

So What Do the Studies that Hattie Cites Say?

Professor Tim Cain reviews the 2 meta-analyses that Hattie used here (his full article is reproduced at the end of this page).

Cain was responding to a claim by the Evidence-Based Teachers Network (EBTN), which used Hattie's research. Under the title 'Myths: Teacher Subject Knowledge', the EBTN claimed there is no evidence that teachers need to be experts in their subject!

Professor Cain said,

'Immediately, I was doubtful. Yes, some people are experts in their subject and hopeless at teaching. But surely, unless your ambition is merely to teach the exam specifications (and nothing but the exam specs), deeper subject knowledge is helpful, particularly for students who are actually interested in the subject? 
... Hattie analyses no fewer than 931 meta-analyses of education research so he’s clearly an authority on matters educational but, when I checked, only two of these were about teachers’ subject knowledge. The most recent (Ahn & Choi 2004) is about teacher knowledge in mathematics; the other (Druva & Anderson 1983) is about Teacher background in science. 
But hold – is ‘Teacher background in science’ the same thing as subject knowledge? I checked the abstract, it doesn’t list all the background factors but those which are mentioned are, ‘gender, course-work, IQ, etc.’. It doesn’t mention subject knowledge at all. 
In contrast, Ahn & Choi (2004) is about teacher’s subject knowledge but it’s hardly the most comprehensive review. Once you’ve stripped away the conference papers and dissertations, you’re left with only four papers that could reasonably claim to be peer-reviewed. Indeed, Ahn & Choi (2004) is a conference paper which doesn’t mean it’s poor but it hasn’t met the more rigorous criteria for a journal paper. Also, although you might expect the paper to be about whether teachers’ subject knowledge impacts on learning, it isn’t. This is what it says: 
"… the focus of this paper is not on the question of whether teachers’ subject matter knowledge influences student learning. Such an influence is assumed. It is a question of why research has drawn different relationships between teachers’ subject matter knowledge and student learning" (p. 2). 
I’m afraid that this points up some of the cracks in Hattie’s work – good though it is, he sometimes conflates matters which would be better kept distinct and he’s not always as rigorous as he might be about the quality of his meta-analyses.'
Ahn & Choi (2004)

Ahn & Choi investigated why studies of Teacher Subject Matter Knowledge (SMK) produce such conflicting results.

They detail the 2 major problems, which recur for all of the influences in this blog: student achievement is measured in different ways, and the definition of the influence in question differs across studies.

A great example of the latter is their table (p. 25) which I displayed at the beginning of this page.

I think Hattie obtained his effect size of d = 0.12 from their summary (p. 30):

[Ahn & Choi's summary (p. 30), reporting an overall correlation of r = 0.06.]

A correlation of r = 0.06, when converted, is an effect size of d = 0.12.

However, Ahn & Choi report r = 0.11 when teacher tests were used, which converts to an effect size of d = 0.22. This is nearly double the effect size reported by Hattie!
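Using the same conversion as the sketch above (again, my illustration of the standard formula, not Hattie's published workings):

```python
import math

def r_to_d(r: float) -> float:
    # Same standard conversion as above: d = 2r / sqrt(1 - r^2)
    return 2 * r / math.sqrt(1 - r ** 2)

print(round(r_to_d(0.06), 2))  # 0.12 -- Ahn & Choi's overall correlation (p. 30)
print(round(r_to_d(0.11), 2))  # 0.22 -- their correlation when teacher tests were used
```

The choice of which correlation to convert nearly doubles the effect size, which is precisely the measurement sensitivity Ahn & Choi warn about.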

Ahn & Choi acknowledge this problem,
"The use of different indicators may act as an important moderator. Due to the intangible nature of knowledge, researchers need to create variables that are visible and practical in order to investigate relationships and different indicators of subject matter knowledge were used in the studies" (p. 4).
"Thus, it may be that using different scales to measure teachers’ subject matter knowledge influence inconsistent study findings" (p. 9).
Hattie's only comment about this study (VL, 2009, p. 113) is that they,
"found a low effect size of d = 0.12 between knowing maths and student outcomes."
Hattie's comment hardly represents the study! The Evidence-Based Teachers Network (EBTN) then used that result to proclaim Teacher Subject Knowledge a 'Myth'!

Druva & Anderson (1983)

Hattie reports a low effect size of d = 0.06 from this study, which must have been converted from r = 0.03, as the study only reports correlations (r).

Once again, Hattie gives no detail on how he got this value from the many correlations reported by the paper.

However, the same issues arise: how is student achievement measured, and how is SMK defined? As Professor Cain says, the paper does not mention SMK but lists a range of other background characteristics, e.g., from p. 472:

[Table from Druva & Anderson (1983), p. 472: correlations between teacher background characteristics and student outcomes.]
As stated, it is difficult to see how Hattie got his overall figure of r = 0.03.

For example, if you average the achievement scores in the above table, you get r = 0.31, which gives a high d = 0.65.

If you use 'high complex questions' knowledge (r = 0.36) or the number of education courses taken (r = 0.37), you get d close to 0.80, which would put this influence in Hattie's top 10 rankings!
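A quick check of these alternative readings with the same conversion formula (my sketch; which correlations Hattie chose to average is exactly the undocumented step at issue):

```python
import math

def r_to_d(r: float) -> float:
    """Standard conversion d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Plausible alternative readings of Druva & Anderson's correlations (p. 472):
print(round(r_to_d(0.03), 2))  # 0.06 -- the value Hattie appears to have used
print(round(r_to_d(0.31), 2))  # 0.65 -- average of the achievement correlations
print(round(r_to_d(0.36), 2))  # 0.77 -- 'high complex questions' knowledge
print(round(r_to_d(0.37), 2))  # 0.80 -- number of education courses
```

Depending on the reading, the same paper supports anything from one of Hattie's lowest-ranked influences to one of his highest.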

As Myburgh (2016) in his analysis of Hattie says,
"No methodology of science is considered respectable in the research community when experiment design is unable to control a rampant blooming of subjectivity." (p. 10)
Evidence-Based Education in UK (EBE)

As pointed out above, Hattie's conclusion, or "story", directly contradicts that of the large EBE organization's Great Teaching Toolkit, which lists "pedagogical content knowledge" as one of their major evidence-based themes.

Professor Coe kindly responded to one of my tweets on this contradiction:

[Professor Coe's reply.]
Hattie Does Not Seem to Know the Details of Coe's Studies

In the book by Hattie & Larsen (2020), Larsen questions Hattie on this issue. However, Hattie's response (p. 16) is somewhat troubling, given Coe's explanation above:

[Hattie's response, quoted from Hattie & Larsen (2020), p. 16.]
Adam Robbins:
"So you've got 40 mins of a dept meeting to deliver some training, what do you do?"

Full tweet - here.

Tim Cain's Article:

Does Teacher Subject Knowledge Matter? (Jan 22, 2015)


Recently, my eye was caught by something in the Newsletter from The Evidence Based Teachers Network (EBTN). Under the title, ‘Myths:  Teacher Subject Knowledge’ it said,
“If the teacher is an expert in their subject, they are bound to teach better, aren’t they?  Surely that’s why the government pays good graduates to join the profession (Teach First in UK).  Unfortunately there is no evidence for this.  Certainly you need to be familiar with the material being taught, but beyond that, there seems to be little benefit.  One reason may be that experts tend to over-estimate Prior Knowledge and hence baffle the students.  Once again, the answer seems to be: improve teaching skills.”
Immediately, I was doubtful. Yes, some people are experts in their subject and hopeless at teaching. But surely, unless your ambition is merely to teach the exam specifications (and nothing but the exam specs), deeper subject knowledge is helpful, particularly for students who are actually interested in the subject?
I checked the source with Mike Bell, who runs the EBTN website; he referred me to Hattie’s Visible Learning. In his latest (2012) book, Hattie analyses no fewer than 931 meta-analyses of education research so he’s clearly an authority on matters educational but, when I checked, only two of these were about teachers’ subject knowledge. The most recent (Ahn & Choi 2004) is about teacher knowledge in mathematics; the other (Druva & Anderson 1983) is about Teacher background in science.
But hold – is ‘Teacher background in science’ the same thing as subject knowledge? I checked the abstract (http://onlinelibrary.wiley.com/doi/10.1002/tea.3660200509/abstract). It doesn’t list all the background factors but those which are mentioned are, ‘gender, course-work, IQ, etc.’. It doesn’t mention subject knowledge at all.
In contrast, Ahn & Choi (2004) is about teacher’s subject knowledge (http://eric.ed.gov/?id=ED490006) but it’s hardly the most comprehensive review. Once you’ve stripped away the conference papers and dissertations, you’re left with only four papers that could reasonably claim to be peer-reviewed. Indeed, Ahn & Choi (2004) is a conference paper which doesn’t mean it’s poor but it hasn’t met the more rigorous criteria for a journal paper. Also, although you might expect the paper to be about whether teachers’ subject knowledge impacts on learning, it isn’t. This is what it says:
‘… the focus of this paper is not on the question of whether teachers’ subject matter knowledge influences student learning. Such an influence is assumed. It is a question of why research has drawn different relationships between teachers’ subject matter knowledge and student learning’ (p. 2).
I’m afraid that this points up some of the cracks in Hattie’s work – good though it is, he sometimes conflates matters which would be better kept distinct and he’s not always as rigorous as he might be about the quality of his meta-analyses.
So – is teachers’ subject knowledge a myth? I strongly suspect that we’ll never know for sure. The most scientifically rigorous research will use pre- and post-tests, measuring pre-specified student outcomes. These will likely conclude that teachers’ subject knowledge makes no difference because the only thing that will be measured will be the content that is transmitted from teacher to students. Less scientifically rigorous research might investigate softer outcomes such as students’ interest in the subject, and dispositions to learn. It will likely conclude that teachers’ subject knowledge makes a difference, but then be criticized for a lack of rigour.
Of course, if the response to this issue is: “improve teaching skills”, I have no quibble. If, on the other hand, Headteachers use this finding to hire non-specialist teachers or to persuade teachers to teach outside the subjects they feel secure about, I’m a lot less sure.
Tim Cain 
