Problem Based Learning

Effect Size d = 0.15  (Hattie's Rank = 118)

At the 2005 ACER Conference (Hattie's lecture here and slides here), he called Problem Based Learning a 'disaster'!



He said it again in his 2008 Nuthall lecture.


Hattie continued to use these 'disaster' slides in his presentations up to 2012 - e.g. in New Zealand.

In his keynote address to AITSL, November 2011, Hattie said:
"We have a whole rhetoric about discovery learning, constructivism, about learning styles that has got zero evidence for them anywhere."
Yet in Visible Learning (2008), he referenced 7 meta-analyses on the constructivist paradigm (problem & inquiry based learning & teaching) that had effect sizes of 0.40 or higher. Using Hattie's own definition, these are within the 'Zone of Desired Effects'. Also, one meta-analysis had an effect size of 1.13, which, once again using Hattie's own definition, is 'High Impact'.

Since 2008, he has added around 20 meta-analyses, many published before his 2011 keynote, 9 of which have effect sizes around 0.80 - which, again in Hattie's terms, makes them 'High Impact' strategies.

So why did Hattie say, "...zero evidence for them anywhere" when his own references reveal significant evidence?

Click here to read 5 Meta-analyses that Hattie referenced with huge effect sizes close to 1.

List of Meta-analyses that Hattie References (yellow highlighting marks meta-analyses added after the 2008 book)

| Author | Year | Variable - Inquiry Based Teaching | Effect Size |
| --- | --- | --- | --- |
| Sweitzer & Anderson | 1983 | Inquiry teaching in science | 0.44 |
| Shymansky, Hedges & Woodworth | 1990 | Inquiry methods in science | 0.27 |
| Bangert-Drowns | 1992 | Inquiry teaching effects on critical thinking | 0.37 |
| Smith | 1996 | Inquiry method in science | 0.17 |
| Estrella, Au, Jaeggi, & Collins | 2018 | Inquiry on ELL science | 0.28 |
| Furtak, Seidel, Iverson & Briggs | 2012 | Inquiry method in science | 0.50 |
| Lazonder & Harmsen | 2016 | Inquiry learning | 0.62 |
| Aktamis, Hidge, & Ozden | 2016 | Inquiry learning in science | 1.03 |
| Author | Year | Variable - Problem Based Learning | Effect Size |
| --- | --- | --- | --- |
| Albanese & Mitchell | 1993 | PBL in medicine | 0.27 |
| Vernon & Blake | 1993 | PBL in college level | -0.18 |
| Walker & Leary | 2009 | PBL | 0.13 |
| Haas | 2005 | Teaching methods in algebra | 0.52 |
| Smith | 2003 | PBL in medicine | 0.31 |
| Gijbels, Dochy, Van den Bossche & Segers | 2005 | PBL on assessment outcomes | 0.32 |
| Dochy, Segers, Van den Bossche & Gijbels | 2003 | PBL on knowledge and skills | 0.12 |
| Newman | 2004 | PBL in medicine | -0.30 |
| Walker | 2009 | PBL across disciplines | 0.13 |
| Jensen | 2015 | PBL | 0.59 |
| Kalaian & Kasim | 2017 | PBL | 0.29 |
| Leary, Walker, Shelton & Fitt | 2013 | PBL | 0.24 |
| Jiang, Monteau, & Zhao | 2013 | PBL | 0.32 |
| Li, Zhou, & Wang | 2016 | PBL in Chinese clinical training | 0.06 |
| Liao & Chang | 2016 | Problem posing on achievement | 0.57 |
| Schmidt, Van der Molen, Te Winkel, & Wijnen | 2009 | Constructivist problem based learning on medical knowledge | -0.18 |
| Sayyah, Shirbandi & Rahim | 2017 | PBL | 0.80 |
| Rosli, Capraro, & Capraro | 2014 | PBL | 0.88 |
| Ayaz & Soylemez | 2015 | PBL in Turkey | 1.00 |
| Dagyar & Demirel | 2015 | PBL | 0.76 |
| Chen & Yang | 2019 | PBL | 0.71 |
| Author | Year | Variable - Problem Based Teaching | Effect Size |
| --- | --- | --- | --- |
| Tocanis, Ferguson-Hessler, & Broekkamp | 2001 | Problem solving in science | 0.59 |
| Marcucci | 1980 | Problem solving in math | 0.35 |
| Curbello | 1984 | Problem solving on science and math | 0.54 |
| Almeida & Denham | 1984 | Interpersonal problem solving | 0.72 |
| Mellinger | 1991 | Increasing cognitive flexibility | 1.13 |
| Hembree | 1992 | Problem solving instructional methods | 0.33 |
| Liao & Chen | 2018 | Problem solving teaching | 0.57 |
| Marzano, Gaddy, & Dean | 2000 | Problem solving | 0.54 |
| Johnson & Johnson | 2009 | Conflict based teaching | 0.80 |
| Swanson | 2001 | Programs to enhance problem solving | 0.82 |
| Xin & Jitendra | 1999 | Word problem solving in reading | 0.89 |
| Zheng, Flynn & Swanson | 2011 | Problem solving with math disabilities | 0.78 |
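
As a rough check on the claims above, here is a minimal Python sketch that tallies the effect sizes from the three tables against Hattie's own benchmark of d = 0.40 for the 'Zone of Desired Effects'. (The 0.70 cut-off used below as a proxy for 'High Impact' is my assumption, chosen to match the values this page describes as high impact.)

```python
# Effect sizes copied from the three tables above.
inquiry = [0.44, 0.27, 0.37, 0.17, 0.28, 0.50, 0.62, 1.03]
pbl = [0.27, -0.18, 0.13, 0.52, 0.31, 0.32, 0.12, -0.30, 0.13, 0.59, 0.29,
       0.24, 0.32, 0.06, 0.57, -0.18, 0.80, 0.88, 1.00, 0.76, 0.71]
problem_teaching = [0.59, 0.35, 0.54, 0.72, 1.13, 0.33, 0.57, 0.54, 0.80,
                    0.82, 0.89, 0.78]

all_es = inquiry + pbl + problem_teaching
desired = [d for d in all_es if d >= 0.40]  # Hattie's 'Zone of Desired Effects'
high = [d for d in all_es if d >= 0.70]     # assumed proxy for 'High Impact'

print(f"{len(all_es)} meta-analyses listed")
print(f"{len(desired)} at or above d = 0.40 (Zone of Desired Effects)")
print(f"{len(high)} at or above d = 0.70")
print(f"unweighted mean across all: {sum(all_es) / len(all_es):.2f}")
```

Even this crude tally shows a large number of Hattie's own references sitting inside his 'Zone of Desired Effects', which is hard to square with "zero evidence for them anywhere".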

Other research showing Problem/Project based learning improves student outcomes - Project-Based Learning Research Review.

Hattie's Recent Twist:

Hattie now cleverly incorporates Problem Based Learning into his model of learning. He heavily promotes the Jig-Saw method, justifying this approach within his three-level model of learning: Surface, then Deep, then Transfer.

He now claims Deep Learning is best developed by the Jig-Saw method and that education should balance the three levels, yet he is not sure what Transfer looks like!

He makes these claims here, in a 2019 interview with Dan Jackson.

Other Researchers Contradict Hattie:

With Hattie's clever twist of three levels of learning, it is difficult to pin him down on his claims. Even so, Proulx (2017) identifies a major contradiction in Hattie's work (see below).

Also, some reliable evidence-based organisations contradict his initial claim that PBL is a "disaster".

The largest educational research organisation, the What Works Clearinghouse, which has 20+ independent professors and strict quality control, lists "asking deep explanatory questions" as one of its most highly recommended evidence-based strategies - see here.

Also, Professor Gene Glass, who invented the meta-analysis method, co-wrote a book contradicting Hattie, '50 Myths and Lies That Threaten America's Public Schools: The Real Crisis in Education'.

Gene Glass's Myth #50: Schools are wasting their time trying to teach problem-solving, creativity, and general thinking skills; they would be better off teaching the facts students need to succeed in school and later in life.

Professor Glass says,
"It should come as no surprise that when teachers focus on multiple ways of knowing and celebrate the wealth of knowledge their students bring to the classroom, collaborative environments spring up. In these environments, students and teachers participate in meaningful conversation and dialogue that remain a necessary component in teaching creativity and problem-solving. It is through conversation, not didactic instruction, that students are able to articulate what they know and how they know it, while incorporating the knowledge of their peers and their teacher to further their own understanding."
Also, the high-performing Finnish system seems to contradict Hattie's result here. Director-General Pasi Sahlberg outlines in Finnish Lessons 2.0 a major reason Finland performs well in PISA rankings:
"both teacher education and mathematics curriculum in Finland have a strong focus on problem solving, thereby linking mathematics to the real world. Mathematics tasks on PISA tests are based on problem solving and using mathematics in new situations rather than showing mastery of curriculum and syllabi" (p. 77).
Georgios Zonnios has written one of the most insightful analogies, which clearly demonstrates the problem with the research on problem/inquiry/discovery learning:
"Decades of research has led many educators to believe that, in general, people learn more effectively through explicit instruction than self-driven discovery. However, this research has consistently lacked clarity on fundamentals and skewed its definitions to fit school systems, rather than fitting the broader goal of enhancing learning." See Full analogy here.
The Mathematics Problem Based activities I frequently use are from the Mathematics Centre and Maths300. In my experience, these are the most engaging maths activities for my students. Doug Williams presented a paper outlining this approach - called 'working mathematically' - here.
"This Working Mathematically framework... helped us move from problem solving being added to the curriculum as a topic, to working like a mathematician being the essence, the core, the raison d' ĂȘtre, that gives all mathematics content its meaning."
Dr Mandy Lupton analysed the research that Hattie used in detail here. 

Lupton concludes,
"As Hattie’s synthesis is aimed at informing the K-12 sector, it is hard to understand why Hattie included so many higher education studies in his synthesis. In particular, the medical PBL model is so distinct that it would be difficult to see how any K-12 teacher could draw conclusions from these studies for their own practice. 
The studies have different effect sizes for different contexts and different levels of schooling, thus averaging these into one metric is meaningless.
I was able to obtain all eight sources used by Hattie in his synthesis. The studies Hattie includes are almost all studies of medical education. Of the eight studies, five investigate higher education medical curriculum (i.e. training medical doctors) (Albanese & Mitchell, 1993; Dochy, Segers, Van den Bossche, & Gijbels, 2003; Gijbels, Dochy, Van den Bossche, & Segers, 2005; Vernon & Blake, 1993). Of the remainder, one is in a higher education nursing context (Newman, 2004), and one is in a range of higher education disciplines (Walker & Leary, 2009).
Only one study (Haas, 2005) is in a school context (secondary school). Haas’ study examines a range of teaching methods for secondary algebra, where PBL is addressed along with other methods such as cooperative learning, communication and study skills, and direct instruction."
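Lupton's point about averaging is easy to demonstrate with the numbers themselves. Here is a minimal sketch, using the effect sizes for the eight sources as they appear in Hattie's table above: a single pooled mean reproduces Hattie's headline figure while hiding the fact that the only school-level study behaves quite differently from the higher-education studies.

```python
# Effect sizes for the eight PBL sources Lupton examined,
# as listed in the table above.
higher_ed = {
    "Albanese & Mitchell 1993": 0.27,
    "Vernon & Blake 1993": -0.18,
    "Dochy et al. 2003": 0.12,
    "Smith 2003": 0.31,
    "Newman 2004": -0.30,
    "Gijbels et al. 2005": 0.32,
    "Walker & Leary 2009": 0.13,
}
school = {"Haas 2005 (secondary algebra)": 0.52}

def mean(values):
    return sum(values) / len(values)

pooled = mean(list(higher_ed.values()) + list(school.values()))
print(f"pooled mean of all eight:  {pooled:.2f}")   # ~0.15, Hattie's figure
print(f"higher-education studies:  {mean(higher_ed.values()):.2f}")
print(f"only K-12 study (Haas):    {school['Haas 2005 (secondary algebra)']:.2f}")
```

The pooled figure of about 0.15 tells a K-12 teacher nothing useful: the one K-12 result in the set (0.52) sits inside Hattie's own 'Zone of Desired Effects'.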
Claes Nilholm (2013) in It's time to critically review John Hattie confirms Lupton's analysis,
"Hattie reports seven meta-analyses that together provide weak support for problem-based learning. Most of these are meta-analyses of studies of problem-based learning at university level. However, a meta-analysis of problem-based learning in mathematics (algebra) at a high school shows good effects. Thus, if you go to the average value that Hattie reports, you may end up with completely incorrect conclusions and miss that problem-based learning can be effective in the correct context. 
Hattie's major failure is to report summative measurements of meta-analysis without taking into account so-called moderating factors. Working methods can work better for a particular subject, a certain grade, some students and so on. Hattie believes that the significance of such moderating factors is less than one can think. I would argue that they are often very noticeable, as in the examples I reported. 
Unfortunately, it is my experience that compilations of research to be useful in school are often more confusing than informative. It is common practice to draw far-reaching (and sometimes wrong) conclusions from the research and to underestimate the difficulty of transferring research results to the teaching practice. Who is responsible for the consequences of that?" (p. 3).
But How is Achievement Measured?

Hanne Knudsen's interview with Hattie (John Hattie: I’m a statistician, I’m not a theoretician) on this is interesting. Hattie says:
"If you are doing surface learning it works quite differently than if you are doing deep learning. One example is problem-based learning, which comes out with a very low effect size. The reason for that is that problem-based learning only works for deep learning; it doesn’t work for surface learning. And 90% of the schools introduce problem-based learning for surface learning, so of course it doesn’t work. Learning means moving from surface to deep to transfer" (p. 6).
Proulx (2017), Critical essay on the work of John Hattie for teaching mathematics: Entrance from the Mathematics Education, identifies the inherent problem here,
"... ironically, Hattie self-criticizes implicitly if we rely on his book beginning affirmations, then that it affirms the importance of the three types learning in education."
He quotes Hattie from VL,
"But the task of teaching and learning best comes together when we attend to all three levels: ideas, thinking, and constructing" (VL, p. 26).
"It is critical to note that the claim is not that surface knowledge is necessarily bad and that deep knowledge is essentially good. Instead, the claim is that it is important to have the right balance: you need surface to have deep; and you need to have surface and deep knowledge and understanding in a context or set of domain knowledge. The process of learning is a journey from ideas to understanding to constructing and onwards" (VL, p. 29).
From this quote, Proulx goes on to say,
"So with this comment, Hattie discredits his own work on which it bases itself to decide on what represents the good ways to teach. Indeed, since the studies he has synthesized to draw his conclusions are not going in the sense of what he himself says represent a good teaching, how can he rely on it to draw conclusions about the teaching itself?"
Thibault (2017) Is John Hattie's Visible Learning so visible? states that (translation to English),
"some comparisons seem to me hazardous: contributions by the pupil, by home, by the school, by the teacher, by the curriculum and teaching approaches. Indeed, it may seem strange to compare the effect of teaching strategies in problem-solving with the number of students per class or again with the number of hours of television listened to by students."
Yelle et al. (2016) What is visible from learning by problematization: a critical reading of John Hattie's work analyse problem-based learning in more detail,
"It is therefore necessary to define theoretically the main concepts under study and to ensure that precise and unambiguous criteria for inclusion and exclusion are established ... 
It should be remembered that meta-analyses aggregate multidisciplinary research, in addition to confounding methodologies of all kinds. We also want to come back to the confusion about the labels Hattie uses. The tripartite division of teaching and learning methods by problematization seems ill-justified, and therefore artificial. The interpretation of these indices therefore requires caution, but the reading that we have made allows us to favor a teaching-learning approach focused on solving problems to achieve our educational goals.
In education, if a researcher distinguishes, for example, project-based teaching, co-operative work and teamwork, while other researchers do not distinguish or delimit them otherwise, comparing these results will be difficult. It will also be difficult to locate and rigorously filter the results that must be included (or not included) in the meta-analysis. Finally, it will be impossible to know what the averages would be.
It should also be noted that Hattie analyzes these techniques from a closed perspective. Thus, he deals with problem-based learning, for example, without taking into account the other methods used at the same time to support learning: the use of one method does not exclude the use of others in all circumstances.
At the end of the day, it seems to us that Hattie's aggregated statistics can broaden perspectives and contribute to the synthesis of research in education, but do not invalidate approaches below the 0.40 indicator. 
Therefore, can we describe Hattie's work as a Holy Grail for education? Do these data offer easy, objective and absolute answers because they rely on numbers?
The answer is no, of course."
More Examples of Hattie's Misrepresentation



These studies are referenced in Hattie's database, April 2020.

Vernon & Blake (1993) report effect sizes from 0.55 to -0.18. Hattie only used the -0.18. The authors conclude - 
"PBL was found to be significantly superior with respect to students' program evaluations (i.e., .students' attitudes and opinions about their programs)... 
The comparative value of PBL is also supported by data on outcomes that have been studied less frequently, i.e., faculty attitudes, student mood, class attendance, academic process variables, and measures of humanism... 
The results generally support the superiority of the PBL approach over more traditional methods." (p. 550).
They also caution that there are significant differences among programs, which casts doubt on the generality of the findings across programs (p. 550).

Newman (2004) is a SINGLE study, not a meta-analysis, investigating PBL with graduate nurses. The PBL used seemed to be minimal-guidance PBL, which differs significantly from other studies where some teacher guidance is used.

The study started with 35 in the PBL group and 35 in the control group. At completion, only 20 were left in the PBL group and 31 in the control - a significant drop-out rate.

Note: Hattie does not report this low number of students in his references - why? This single study has been given the same weight as much larger meta-analyses, which is a major problem.
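
A minimal sketch of why this matters. Newman's n = 51 completers comes from the study described above; the sample sizes for the two comparison meta-analyses are hypothetical placeholders, included only to illustrate the weighting problem:

```python
# (effect_size, approximate_n) pairs.
# Newman 2004: n = 51 completers, as described above.
# The other two entries are HYPOTHETICAL large meta-analyses,
# used only to show how equal weighting behaves.
studies = [
    (-0.30, 51),
    (0.27, 5000),
    (0.32, 3000),
]

unweighted = sum(es for es, _ in studies) / len(studies)
weighted = sum(es * n for es, n in studies) / sum(n for _, n in studies)

print(f"unweighted average (each study counts equally): {unweighted:.2f}")
print(f"sample-size weighted average:                   {weighted:.2f}")
```

With equal weighting, the tiny single study drags the average down far more than the amount of evidence behind it warrants.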

Gijbels et al. (2005) focus on different assessments to measure PBL and postulate that different assessments will get very different results, which is in fact what they find. They state,
"...a valid assessment system would evaluate students' problem-solving competencies in an assessment environment that is congruent with the PBL environment. This means that assessment in PBL should take into account both the organization of the knowledge base and the students' problem- solving skills." (p. 32). 
"In this study, we want to go a step further and investigate the influence of assessment as the main independent variable. The goal of this study is to describe these effects of PBL from the angle of the underlying focal constructs being measured with the assessment... 
it is expected that the effect of PBL, as compared with that of conventional education methods, should increase with each level of the knowledge structure." (p. 36). 
"Regarding the nature of the studies included in our statistical meta-analysis, we used the standardized mean difference effect size (Glass's delta)." (p. 41). 
"...students studying in PBL classes demonstrated better understanding of the principles that link concepts (weighted average ES = 0.795) than did students who were exposed to conventional instruction... students in PBL were better at the third level of the knowledge structure (weighted average ES = 0.339) than were students in conventional classes. It is important to note that the weighted average ES of 0.795, belonging to the second level of the knowledge structure, was the only statistically significant result." (p. 44).
They summarise their results (p. 43) in a table of weighted and unweighted average effect sizes for each level of the knowledge structure [table not reproduced here].
Hattie seems to miss the aim of the study and reports an effect size of 0.32 (it is difficult to see how he arrived at this). He would usually average the "Unweighted" column, which gives 0.37.

But this study adds further to the argument that different tests give different effect sizes, so comparing effect sizes from different studies using different tests is a major issue. 

This argument has been developed in the peer review under the heading "sensitivity to instruction". Studies show that effect sizes derived from Specific Tests can be 400% larger than those from Standardised Tests - see here.

The other problem is that this study uses Glass's delta to calculate the effect size; this can produce an effect size one-fifth of that calculated in other ways - see problem 5 here.
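
For reference, the two statistics differ only in the denominator. A sketch of the standard definitions, with an invented worked example purely to show the direction of the distortion:

```latex
% Glass's delta standardises the mean difference by the control group's SD:
\Delta = \frac{\bar{X}_T - \bar{X}_C}{SD_C}

% Cohen's d standardises by the pooled SD of both groups:
d = \frac{\bar{X}_T - \bar{X}_C}{SD_{pooled}},
\qquad
SD_{pooled} = \sqrt{\frac{(n_T - 1)\,SD_T^2 + (n_C - 1)\,SD_C^2}{n_T + n_C - 2}}

% Invented example: mean difference = 2, SD_C = 2, SD_T = 10, equal group sizes.
% Then SD_pooled = sqrt((4 + 100)/2) \approx 7.2, so
% \Delta = 2/2 = 1.0 while d = 2/7.2 \approx 0.28.
```

So the same data can yield effect sizes several times apart depending only on which denominator was chosen, which makes directly comparing Glass's delta values with Cohen's d values (as a single league table must) unsafe.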

Others on PBL

Ollie Lovell conducts an excellent interview with Professor Janet Kolodner on Problem/Project Based Learning - here.

Professor Kolodner proposes that problem/project/inquiry based learning is the way we naturally learn from experience. In addition, the method engages students - listen here.

Bianca Hewes gives detailed reflections on Problem and Project Based Learning and Hattie's model here.

Some positive examples of PBL - https://wegrowteachers.com/13-brilliant-outcomes-of-project-based-learning/

More examples - Project-based learning boosts student engagement, understanding.
"Teachers say they see this deeper learning. Jennifer Masterson, the sixth-grade math and science teacher, said students thrive when given choices about what to research for their projects and how to present their learning. Because the projects all have real-world implications, there are natural audiences beyond the school building for the final products, which makes students take their work more seriously."

My Approach

My own experience shows that using a problem to start a topic, and allowing students to experiment and make mistakes, leads well into some form of explicit teaching. These types of lessons are engaging and fun, and encourage other skills like communication and collaboration with peers, before moving into the more traditional explicit approaches.

An example lesson is Problem Dice, from the MATHS300 list of engaging lessons. 

However, Greg Ashman recently challenged this approach with a great experiment - here. He and his colleagues compared my approach of problem-then-explicit with explicit-then-problem. They found that explicit-then-problem worked best.

Although they do caveat,
"Nevertheless, there may be sufficient evidence in the literature to indicate that problem solving first is effective under some circumstances."

Outdoor Education:

I was an Outdoor Ed teacher for 10 years, and many of the outdoor experiences we exposed the students to were of a problem based nature - bush walking, navigating, surfing, rock climbing, etc.

These experiences had significant positive effects on many students - aspects like appreciation for the environment, wilderness, animals, teamwork, etc. But since these are difficult to measure, they have been devalued in Hattie's paradigm.

Although, when confronted with qualities like passion and collegiality (the invisible influences that Hattie argued against), he now acknowledges this problem:
"Throughout Visible Learning, I constantly came across the importance of ‘passion’; as a measurement person, it bothered me that it was a difficult notion to measure – particularly when it was often so obvious" (VL 2012, preface).
At least there is now some research into this - How can a ‘learn by making mistakes’ approach work for outdoor education teachers?

Cognitive Load Theory & PBL:

Cognitive Load Theory also questions the usefulness of PBL and Inquiry Based Learning. The idea is that PBL and IBL create too great a cognitive load on students, as their working memory is small.

Reading the studies on CLT, the aspect that directly caught my attention was the narrow definition of schooling -
"an increase in long term memory"
Whilst CLT is useful, surely schooling is much more than just increasing students' long-term memory!

A clear explanation comes from Daniel Willingham, a key researcher and adviser to the evidence organisation Deans for Impact, in his influential book "Why Don't Students Like School?".

Willingham says there is "a big gap between research and practice" and influences "cannot be separated in the classroom" as "they often interact in difficult-to-predict ways." He provides the following example, 
"... laboratory studies show that repetition helps learning, but any teacher knows that you can’t take that finding and pop it into a classroom by, for example, having students repeat long-division problems until they’ve mastered the process.  
Repetition is good for learning but terrible for motivation. With too much repetition, motivation plummets, students stop trying, and no learning takes place. The classroom application would not duplicate the laboratory result." (introduction).
PBL and IBL engage students, give them control over their learning, and provide real-life, interesting problems to solve; but they may create cognitive overload, as students may wrestle with these questions for a long time.

There are many examples. I've just read the biography of the great Australian art historian Robert Hughes.

Hughes describes being taken to a Sydney art gallery as a teenager. His teacher noticed Hughes scoffing at an impressionist painting and exclaiming, "this is not Art!"

The teacher retorted, "then what is Art, Robert?"

Hughes writes that that one question engaged him for the rest of his life!

DIRECT INSTRUCTION: $30m literacy program fails to boost results for remote Indigenous kids

2 comments:

  1. Problem solving is not the same as problem based learning

  2. agreed, Hattie combines all these different studies as if they are the same. That's one of the major points I'm making. You see this problem in ALL the influences that Hattie reports an effect size for. Take a look at the disparate studies Hattie used for Feedback as an example.
