Mastery Learning – getting the foundations right

In the 12th century, construction began on a bell tower behind Pisa Cathedral. Built on soft soil with foundations only 3m deep, the tower began to subside on one side during construction. Despite this, it was eventually completed nearly 200 years after work started. However, each year the tower's tilt increased by 1mm, until in 2001 it reached the point of no return. Had work not been carried out to correct the foundations, the tower would have collapsed under the immense pressure being exerted on it. Although it has now been corrected (to an extent), engineers believe that in a couple of centuries it will likely need correcting again.

Isn’t this interesting? Because it was built on poor foundations, the tower will never last without regular intervention…

Reminds me a bit of education. Before I make the link more explicit, let me digress to a term that is bandied around a lot in education – MASTERY LEARNING. Sounds pretty awesome; indeed, I imagine you’ve heard a consultant, manager or colleague throw the term around in an attempt to sound awesome, and you’ve no doubt thought to yourself… they’re awesome! For those of you who don’t know what it is, here are a couple of definitions:

Wikipedia defines it as follows:

‘Mastery learning (or, as it was initially called, “learning for mastery”) is an instructional strategy and educational philosophy, first formally proposed by Benjamin Bloom in 1968. Mastery learning maintains that students must achieve a level of mastery (e.g., 90% on a knowledge test) in prerequisite knowledge before moving forward to learn subsequent information. If a student does not achieve mastery on the test, they are given additional support in learning and reviewing the information and then tested again. This cycle continues until the learner accomplishes mastery, and they may then move on to the next stage.’

Slavin defined mastery learning as:

‘The principal defining characteristic of mastery learning methods is the establishment of a criterion level of performance held to represent “mastery” of a given skill or concept, frequent assessment of student progress toward the mastery criterion, and provision of corrective instruction to enable students who do not initially meet the mastery criterion to do so on later parallel assessment.’

Isn’t this just good teaching?

In maths, would I allow learners to move onto percentages if they can’t perform division and multiplication?

In anatomy and physiology, would I allow learners to move onto the energy systems if they didn’t understand the structure and functions of the respiratory system?

Of course not. Without sufficient underpinning of the foundation knowledge, I’d be setting them up to fail by introducing new concepts.

Let me go back to the leaning tower – had the builders established it on a solid layer of soil with much deeper foundations, it is unlikely that their successors would be required to save the damn thing every couple of hundred years. So it is with teaching: if we spend time getting the basics right before moving on to more advanced things, perhaps our successors won’t need to go back over the foundations.
For all their faults (according to others, not me), the EEF actually inform us that mastery learning can improve achievement by five months’ additional progress. They state that:

  1. Overall, mastery learning is a learning strategy with good potential, particularly for low attaining students

  2. Implementing mastery learning effectively is not straightforward, however, requiring a number of complex components and a significant investment in terms of design and preparation

  3. Setting clear objectives and providing feedback from a variety of sources so that learners understand their progress appear to be key features of using mastery learning effectively. A high level of success, at least 80%, should be required before pupils move on

  4. Incorporating group and team approaches where pupils take responsibility for helping each other within mastery learning appears to be effective.

Whilst I understood points 1, 3 and 4 (features of good teaching), I was a little perplexed by point 2, so I investigated further. In one of the cited articles, looking at the impact of a mastery maths programme, the following was stated:

‘Typically, mastery approaches involve breaking down subject matter and learning content into discrete units with clear objectives and pursuing these objectives until they are achieved before moving on to the next unit. Students are generally required to show high levels of achievement before progressing to master new content. This approach differs from conventional approaches, which often cover a specified curriculum at a particular pre-determined pace.’

I’m not convinced that this is dissimilar to conventional approaches. Sure, there is often a lot of content to cover in most qualifications, but good teachers know how important it is to master the basics before moving on. The EEF go on to add that:

‘In addition to the ‘mastery curriculum’, other features of the approach include a systematic approach to mathematical language (see Hoyles, 1985; Lee, 1998), frequent use of objects and pictures to represent mathematical concepts (see Heddens, 1986; Sowell, 1989), and an emphasis on high expectations (see Dweck, 2006; Boaler, 2010).’ 

Hang on… so what was being measured in this study? Was it the impact of mastery learning, language use, dual coding or high expectations, or… all of the above? At this point I was confused, but I did note that these are what I would call characteristics of good teaching.


As can be seen, mastery learning is a bit of an en vogue concept with, in some cases, a lack of clarity. In reality, it is simply a sign of good teaching – ensuring that the foundations are right before moving on.

Why do we ignore the evidence in FE?

Evidence-based practice has been something of a revelation to me and my practice. I don’t take everything as gospel, but I do look at the strategies that have been shown to be effective time and time again. If I think they could work for me in my setting, then I will try to adopt them – why wouldn’t I?


The problem is that Further Education and Skills – and, moreover, external organisations (Ofsted), agencies and training companies – promote practice that is not always informed by evidence. In fact, they sometimes promote quite the opposite. Let me give some examples:


Example 1: Individualised Instruction – On so many occasions I have heard comments like this: “there was not enough personalised learning in the session” or “learners were working at the same level and pace so the lesson did not meet their needs”. I’ve even uttered similar things myself (more to conform with expectations than out of actual belief). I regularly hear of top-down expectations for learning in sessions to be differentiated to meet all learner needs through learning outcomes and learning activities, but in terms of opportunity cost, evidence shows that this is largely ineffective (special education aside):

‘Individualising instruction does not tend to be particularly beneficial for learners…the average impact on learning tends overall to be low, and is even negative in some studies, appearing to delay progress by one or two months.’

This is not to say that differentiation isn’t important. I have blogged my views previously and agree with a lot of Amjad Ali’s post on differentiation. Both posts show the importance of teaching to the top and supporting all to get there. For this to occur, you need to respond to what is in front of you at that point in time. No amount of planning for individualised learning activities will do this in my opinion.


Example 2: Student Control Over Learning – ‘Learner autonomy’ is another term bandied around freely without considering the evidence. Do learners really know what they need to know? I suggest not, and the evidence supports this, with Hattie finding an effect size of 0.04 – negligible. This links with the previous example, really: giving a range of task choices is probably not going to add much value to the session, despite what you may be told.


Example 3: Raising Aspirations – If I hear ‘aspirational target’ one more time… I get it, I totally do. Those that are disadvantaged should be supported to overcome these dreadful statistics:

‘33.5% of pupils eligible for FSM achieved at least 5 A*- C GCSEs (or equivalent) grades including English and mathematics compared to 60.5% of all other pupils. This is a gap of 27.0 percentage points.

‘36.5% of disadvantaged pupils achieved at least 5 A*- C GCSEs (or equivalent) grades including English and mathematics compared to 64.0% of all other pupils, a gap of 27.4 percentage points’

However, trying to raise aspirations isn’t the answer. Though the evidence here is limited, it shows no causal link between aspiration and attainment. I’ve said before that we’ve gone target-setting mad. A key comment taken from the report, which certainly applies to FE and Skills, is:

‘The attitudes, beliefs and behaviours that surround aspirations in disadvantaged communities are diverse so generalisations should be avoided.’

I am not saying don’t encourage learners to aspire to be better, but be wary of any cross-school/college interventions or strategies, particularly when there is a new ‘buzzword’ attached.


To summarise, the above is not presented as fact, but the evidence suggests that we need to be wary of these common and encouraged practices, which appear to have little impact. My next post will focus on what we should pay more attention to – the strategies that have demonstrated a positive impact on learner achievement.


The cost of peddling bad practice

During a recent inspection, a friend of mine received a feedback email with the key themes being identified in lessons by inspectors. One of the areas for development was that there were not enough lessons with differentiated learning objectives (i.e. all, most, some). This information was relayed to all staff members and of course the message was clear: differentiate learning objectives in all sessions going forward. All due to the comments of an ill-informed inspector.


Aside from the fact that it has been made clear that Inspectors should not prescribe a particular style of teaching, let’s just focus on the problems with differentiated objectives. In summary they can:

  • Lower the bar for students, which in turn widens the achievement gap between higher- and lower-ability students.
  • Label students unnecessarily.
  • Result in a lack of clarity with the learning intentions.

I’m ashamed to admit that I was once an advocate of the differentiated learning objective. I would spend ages writing my lesson objectives as red, amber and green, and share them with students on a neatly presented handout which they would tick as they went through the session. On the surface this looked great – clear differentiation, learner autonomy and self-assessment. But if you probed a little, you would find I was probably doing more harm than good. Those learners with little self-esteem/confidence would always attempt the lowest standard, regardless of the topic, regardless of any other factor. Granted, in some cases they were challenging themselves, but I suggest that in most they were not.


If we briefly examine Hattie’s (2009) work, the following has been suggested:

– Student control over learning has a negligible effect size (.04)

– Individualising instruction for learners has a low effect size (.22), particularly when considering opportunity cost.

On the contrary, not labelling students has a generally high effect size of .61. This is also corroborated by the work of Carol Dweck (2012) on Growth Mindset, which holds that individuals can develop their abilities through hard work.


I’m not completely against setting personal targets or objectives in sessions that lend themselves to doing so – for example, where learners are working towards the completion of different things, using different skills (generally practical tasks, e.g. hairdressing). In your standard theory session, there really is no need in my opinion (though I am open to hearing counter-arguments). Aim for the highest ability and support the others well – as previously discussed in my post on stretch and challenge.


In summary, it is clear that a small ill-informed comment by someone in a position of power can have massive ramifications for a whole institution, if not wider. Prescribing and favouring one method over another is wrong – particularly with no evidence base for it.

Off Target?

Everyone loves a target! Or do they?


Before you read on, I’d like to clarify that I’m challenging my own thinking in this post, not that of any institution. I think it’s important to critically assess any method used in the classroom. You will probably finish reading this post with more questions than answers, as did I when writing it.


Whilst studying for my degree, I worked part-time as a Personal Trainer. My clients would come to me for results, whether to lose weight or increase strength. Through experience of working with a diverse range of individuals and a wealth of knowledge, I was able to set targets for my clients that were challenging and realistic, and in many cases they resulted in great achievements.


In the context of learning, Martinez (2001) informs us that setting targets:

“involves identifying a number of actions at a level of detail that is appropriate not only to the learning task, but also to the individual student. This requires a high level of knowledge.”

This corroborates my above point regarding experience and prior knowledge. However, Martinez goes on to state that:

“targets need to be negotiated and agreed with the tutor but owned by the learner. This ownership has cognitive, emotional and motivating elements.”

This sounds great, but had I asked my clients to set their own targets, I would have been very surprised if they could have set targets that were both challenging and realistic, primarily due to a lack of knowledge. I’m not sure I agree that they’d have been any more empowered or motivated by setting their own targets either.


Furthermore, good targets are supposedly ‘easily measured’ (Martinez, 2001) – we’ve all heard of the SMART acronym. Weight loss can quite easily be measured, as can strength, but learning – well, that is far more complex. Despite this, there currently seems to be a real obsession with getting learners to set their own targets in every lesson. I believe that, as educators, we must stop being so reductive.


Setting specific targets can mean that learning becomes focussed and narrow. Narrowly focussed or specific targets can inspire performance but prevent learning: according to Ordóñez, Schweitzer, Galinsky and Bazerman (2009), the effort to meet short-term targets often comes at the detriment of long-term growth. This brings us to the performance/learning findings of Soderstrom and Bjork (2013), who state that learning can occur even when no discernible changes in performance are observed, and that performance does not necessarily indicate learning. Herein lies a problem, particularly with short-term target setting: the learners may not actually be benefitting.


Further to the above, Shah, Friedman and Kruglanski (2002) found that individuals with multiple goals generally only focus on one goal. Attention is also given to goals that are easier to achieve and measure (Gilliland and Landis, 1992). Therefore, if each teacher asked learners to set a target in each session, in addition to course targets set in pastoral reviews and any personal (non-educational) goals, there is a danger of a lack of focus and potentially unethical behaviour, e.g. lying and cutting corners.


Ordóñez et al. (2009) argue that goal-setting is over-prescribed, asserting that:

“Rather than being offered as an “over-the-counter” salve for boosting performance, goal setting should be prescribed selectively, presented with a warning label, and closely monitored.”

Didau (2015) suggests that teachers may anchor themselves to ‘fictions’ and students anchor themselves to given targets, which may be counterproductive and actually hinder the progress learners can make. There have been occasions where I have set a target for learners to pass a piece of coursework and they have gone on to achieve a distinction. There have, however, been times where I have set a distinction and learners have just managed a pass. Did the target serve a purpose?


The highly regarded process of formative assessment, which has demonstrated high effect sizes in practice (Hattie, 2009; Wiliam and Black, 2007), may seem to contradict my thoughts. In this process, learners monitor their performance and regulate their targets based on their understanding. I’m not saying that learners shouldn’t have a clear learning intention in the lesson, but should this be a personalised target?


I do believe that targets serve a purpose in some situations, but we need to be careful not to let target setting become nothing more than a tick-box exercise. The thing is, target setting currently seems to be a thing of the moment. In his book, ‘What if everything we knew about education was wrong?’, David Didau (2015) discusses the problem with group biases in education:

“If everyone around us believes a thing, we generally decide it must be true.”

These group biases tend to do the rounds in education circles, often having a limited evidence base and being justified by comments such as “it works with my students”. Indeed, target setting may work well in some contexts, with some learners, but that doesn’t mean a broad-brush approach should be adopted as a result. What do you think?