Forthcoming FE practitioner research related events

In the spirit of collaboration, I’m pleased to share, in chronological order (nearly), some of the key events related to FE practitioner research.

It is imperative that teachers are given the time, space and autonomy to engage with research and other evidence to support great outcomes for their learners. Regardless of the approach adopted, or the philosophy underpinning each event, each of the events below has this broadly similar aim.

I am proud to play a role in the development of the University of Derby’s Further Education Showcase – our annual conference which features important work from our trainee teachers. Trainee teachers get to attend for FREE and delegate tickets are great value:

FE Showcase – Evidence Based Practice #feshowcase19

Date: 12/04/19

Other important events:

Learning and Skills Research Network (LSRN)

Dates: 06/03/19 and 26/06/19

Teaching Education in Lifelong Learning Conference

Date: 24/05/19

#UKFEChat Conference

Date: 15/06/19

ReimagineFE19 Conference

Date: 02/07/19

#FEResearchMeet Bedford

Date: 03/07/19

Society for Education and Training (SET) Annual Conference

Date: 06/11/19

#FEResearchMeet Greater Manchester

Date: 29/11/19

If I’ve missed anything, let me know and I’ll update.

Principle to Practice 4: Retrieval Practice – Expanded vs Equally Spaced


My fourth principle to practice blog post focuses on a paper by Karpicke and Roediger III:

Expanding Retrieval Practice Promotes Short-Term Retention, but Equally Spaced Retrieval Enhances Long-Term Retention


What was the paper about?

There is a wealth of research demonstrating that taking a memory test can improve long-term retention more than repeatedly studying material. In addition, spacing practice of material over time has frequently been shown to enhance long-term retention. Combining these two highly effective methods is known as ‘spaced retrieval practice’.


Expanded retrieval involves attempting to retrieve an item immediately after it has been studied (an immediate first test) and then gradually increasing the spacing interval between successive retrieval attempts. Equally spaced retrieval practice, by contrast, delays the first retrieval attempt and then retrieves the item at equal intervals. For example:


Although expanded retrieval practice has been advocated as a highly effective method of improving long-term retention, there are too few studies in which expanded retrieval is compared with equally spaced retrieval practice.


What was the aim of the paper?

The research paper aims to determine the impact of ‘expanding retrieval practice’ (i.e. increasing the time between retrieval episodes) compared to equally spaced retrieval practice.


What did they do?

This research involved two core experiments, both of which involved subjects studying vocabulary word pairs and then taking tests spaced according to several different schedules (massed, equal and expanded practice). A third experiment sought to determine the impact of immediate vs. delayed initial testing regardless of the spacing of the repeated tests.

Experiment 1

Forty-eight Washington University undergraduates, ages 18–22 years, participated.

In three conditions, subjects studied a vocabulary word pair and then took three subsequent tests over that pair.

Massed condition: subjects studied a word pair and took three consecutive tests.

Expanding condition: one trial occurred between the study trial and the first test, five trials occurred between the first and second tests, and nine trials occurred between the second and third tests (1–5–9).

Equally spaced condition: five trials occurred between the study trial and subsequent test trials (5–5–5).

Participants were not provided with any feedback in the conditions.

Experiment 2

Forty-eight Washington University undergraduates, ages 18–22 years, participated.

The procedure was nearly identical to the above, except that subjects were given feedback about their responses after test trials.


Experiment 3

Fifty-six Washington University undergraduates, ages 18–22 years, participated.

The purpose of Experiment 3 was to separate the effects of delaying the first test and the schedule of repeated tests to examine whether expanding the schedule of repeated tests is the key factor for enhancing long-term retention.

Four repeated test conditions were used to separate the effects of spacing the first test (immediate vs. delayed) and the effect of the schedule of repeated tests (expanding vs. equally spaced).

Experiment 3 clearly showed that delaying an initial retrieval attempt, rather than expanding the schedule of repeated tests, is the important difficulty for enhancing learning.



The results show that equally spaced retrieval leads to better long-term retention than expanding retrieval practice. Equally spaced practice leads to better long-term retention because this condition involves a first test after a delay, and the greater effort involved in that initial test enhances later retention.


What is the key principle of the paper?

Difficulty in retrieving = good!

Delaying an initial retrieval attempt (as is done in equally spaced retrieval practice conditions) promotes long-term retention by increasing the difficulty of retrieval on the first test.


What does this look like in practice?

Following initial learning, retrieval of that learning should be delayed and further retrieval practice equally spaced. Below is an example in GCSE Geography using the principles of this paper:


*As with all posts, please let me know if you feel that I have misunderstood anything.

Principle to Practice 3: Expertise Reversal Effect and Worked Examples


My third principle to practice blog post focuses on a paper by:

Salden, Aleven, Schwonke and Renkl (2009)



What was the paper about?

Just a couple of definitions before we start:

Expertise reversal effect: as learners become more knowledgeable in a domain, guided instruction becomes less impactful and may actually hinder learning (see the work of Slava Kalyuga for more information).

Procedural knowledge: knowing about processes and procedures (e.g. how you might calculate ½ + ¾ = 1¼).

Conceptual knowledge: knowing why certain processes work; understanding the relationship between things (e.g. I also know that ½ = 0.5 and ¾ = 0.75, therefore 0.5 + 0.75 = 1.25 = 1¼).
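As a quick illustration of the fraction example (my own sketch, not from the paper), Python’s fractions module can carry out the procedural calculation and confirm the decimal equivalence that the conceptual side rests on:

```python
from fractions import Fraction

# Procedural knowledge: carrying out the calculation ½ + ¾
total = Fraction(1, 2) + Fraction(3, 4)
print(total)  # 5/4, i.e. 1¼

# Conceptual knowledge: understanding why — the same value is
# reached via the decimal equivalents (½ = 0.5, ¾ = 0.75)
print(float(total) == 0.5 + 0.75)  # True
```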

There is a wealth of research to support the use of worked examples to support problem solving, particularly with novice learners, because they help to reduce the load on working memory. The expertise reversal effect suggests that worked examples are more effective in the earlier stages of learning, while problem solving may be more effective in later stages. The authors suggest that a key implication of expertise reversal for instructional design is that worked-out steps should gradually be ‘faded’ from worked examples, rather than jumping from fully worked examples to independent problem solving. They inform us that worked examples need to be adapted to the learner’s level of expertise and that only when there is sufficient understanding of key principles should learners progress to problem solving; determining that transition point, however, is a challenge.


What was the aim of the paper?

The authors ask the question, when is it more effective to provide assistance (e.g., example solutions), and when is it more effective to let the learner try to generate or construct solutions for themselves (or with lower levels of assistance)?

They address this question by investigating the effectiveness of different worked-example fading strategies within an instructional approach using cognitive tutor software. It was hypothesized that an adaptive fading process would lead to better learning than independent problem solving or fixed fading of worked examples (more on the difference between these below).


What did they do?

Two experiments were conducted (lab and classroom based) using a Cognitive Tutor (computer software) where three conditions were compared: Problem Solving, Fixed Fading Worked Examples and Adaptive Fading Worked Examples. Problems were sequenced from simple to more complex, with one-step problems presented first (procedural), followed by two-step problems, and eventually by three-step problems (conceptual).

This experiment was conducted twice. Once in a lab (n=57; 14-16 years old) and once in a vocational school classroom (n=20; 14-15 years old). The students were randomly assigned to one of three conditions:

In the Problem Solving condition, all steps of all problems were ‘pure problem solving’ with learners working independently to progress from procedural to conceptual questions.

In the Fixed Fading condition learners started out with fully worked examples, with example steps gradually being faded in subsequent problems until, in the last two problems, all steps were pure problem solving (conceptual).

In the Adaptive Fading condition, the presentation of worked steps was the same as in the fixed fading condition up until more challenging problems. Once students reached those problems, any step could be presented as either pure problem solving or worked-out step, depending on the student’s performance in explaining worked-out steps in earlier problems.
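The paper does not publish the tutor’s exact decision rule, but the adaptive condition can be sketched roughly as follows (the threshold and the three-attempt window here are hypothetical, purely to show the shape of the idea):

```python
def present_as_worked_step(explanation_scores, threshold=0.75):
    """Hypothetical adaptive-fading rule: keep showing a step worked
    out until the student's recent self-explanations of that step type
    are reliably correct, then fade it to pure problem solving.

    explanation_scores: 1/0 correctness of the student's explanations
    of this step type in earlier problems (most recent last).
    """
    if not explanation_scores:
        return True  # no evidence yet: show the worked-out step
    recent = explanation_scores[-3:]  # look at the last few attempts
    return sum(recent) / len(recent) < threshold

# A student who explained the step correctly three times in a row
# no longer needs it worked out:
print(present_as_worked_step([0, 1, 1, 1]))  # False → pure problem solving
```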

Immediate and delayed tests (one and three weeks later, respectively) were administered.

Experiment 1 – Lab based

Results indicate that adaptive fading was more effective than the other conditions on both immediate and delayed post-tests. Additionally, the adaptive fading condition required fewer worked steps than the fixed fading condition, indicating that students’ knowledge levels increased faster in the adaptive condition.

Experiment 2 – Vocational classroom

Results indicate that adaptive fading was more effective than the other conditions on the delayed post-test. Additionally, the adaptive fading condition required more examples than the fixed fading condition, which possibly indicates that, overall, students’ knowledge levels increased more slowly in the adaptive condition.


What is the key principle of the paper?

Adaptive faded worked examples are likely to improve novice learners’ procedural and conceptual knowledge more than worked examples alone or problem solving.


What does this look like in practice?

Below is an example of four problems related to training thresholds. As you will see, the worked examples are faded to support learners in the first instance, but encourage greater independence as they develop their understanding.


Depending on the quality and understanding of learners’ responses to problem 2, they may be encouraged to move directly to problem 4, or continue with problem 3. This (a sort of differentiation) is a demonstration of adaptive fading. Problem 4 requires more conceptual understanding as, unlike the other problems, the learners need to make their own decision about the threshold based on their understanding of the goal.
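For readers outside PE, here is a minimal sketch of the kind of calculation the threshold problems rest on (the 220 − age estimate and the 60–80% aerobic zone are common GCSE-level conventions; the exact figures and zones in the worksheet may differ):

```python
def training_zone(age, lower=0.60, upper=0.80):
    """Illustrative only: estimate maximum heart rate as 220 - age,
    then take a training zone as a percentage band of that maximum.
    The band chosen depends on the learner's training goal."""
    max_hr = 220 - age
    return round(max_hr * lower), round(max_hr * upper)

print(training_zone(16))  # (122, 163) bpm for a 16-year-old
```

Problem 4 in the example asks learners to choose the band (the `lower`/`upper` values) themselves from the stated goal, which is where the conceptual understanding comes in.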

This approach can be used across a range of subjects and levels. We know that worked examples support novices, but if there are ‘relative’ experts in the classroom, be mindful of the expertise reversal effect and therefore, consider adaptive fading with the worked examples.

Once again, if I have misunderstood anything, feel free to let me know. If you have any examples you’d like to share, please leave them in the comments below.

Principle to Practice 2: The Worked Example Effect, Generation Effect and Element Interactivity


My second principle to practice blog post focuses on a paper by Chen, Kalyuga and Sweller

The Worked Example Effect, the Generation Effect, and Element Interactivity

What was the paper about?

Research exploring the worked example effect has demonstrated that model examples which provide full guidance on how to solve a problem often result in better test performance than providing no guidance during problem solving. Before we go on, it is important for us to define (in lay terms) some of the key terms used in this paper:

  • Worked examples are model step-by-step processes that provide full guidance to learners on how to solve a problem.
  • The generation effect is the requirement for learners to be actively involved in the generation of their own understanding of material with little guidance from a teacher (e.g. a problem solving task).
  • Element interactivity focuses on the complexity of new learning material in relation to prior knowledge and the external environment. For example, if given the problem x-3 = 5, novice learners may need to handle (x, -, 3, =, 5) as separate components in working memory – this has a high element interactivity. Whereas more expert learners are more likely to know the process of calculating this type of equation (add 3 both sides and why), thus a low element interactivity (example adapted from the article).

The authors use cognitive load theory as a lens for their research. They assert that information is stored in the form of schemas in long-term memory and that working memory has limited capacity but draws heavily on long-term memory to ease the working memory burden. When presented with novel and complex problems to solve (high element interactivity), if there is insufficient prior knowledge in long-term memory, the working memory can easily become overwhelmed and so it relies on something called the ‘borrowing principle’ e.g. expert instructions and/or worked examples to ease the burden. In contrast, when the problem can be solved by utilising long-term memory resources (low element interactivity), the borrowing principle becomes redundant.


What was the aim of the paper?

The authors sought to explore the benefit of worked examples with increasing expertise. They suggest that the advantage of worked examples may decrease or even reverse to a disadvantage because with increasing expertise, studying worked examples becomes a redundant activity. Furthermore, increases in expertise should have the same effect as decreases in element interactivity.


What did they do?

This research involved two experiments:

Experiment 1 investigated the relationship between levels of guidance and levels of element interactivity using 33 Year 4 primary school learners studying geometry topics that were either high or low in element interactivity for these students. High-element interactivity materials were used to test for the worked example effect by comparing studying worked examples (high guidance) with problem solving (low guidance). Low-element interactivity materials were used to test for the generation effect by presenting learners with answers to memory questions (high guidance) or having them generate answers themselves (low guidance).

It was hypothesized that high guidance (worked examples) would be superior to low guidance (generated problem solving) using materials high in element interactivity, whereas low guidance was predicted to be superior to high guidance with materials low in element interactivity. The results of Experiment 1 confirmed this hypothesis.


Experiment 2 also tested for an interaction between guidance and element interactivity with older, more expert learners using similar materials to those of Experiment 1. It was hypothesized that the interaction should be reduced or eliminated using students who had a reduced requirement for worked examples (high guidance). 36 Year 7 students were randomly assigned to groups using the procedure of Experiment 1. All students had previously studied the area and perimeter formulae used in this study to test for the generation effect. Similarly, all students had been taught to solve the problems used to test for the worked example effect approximately a year previously. Therefore, Year 7 students were regarded as relative experts with respect to the formulae as well as the problems used in Experiment 2.

Results of this experiment supported the hypothesis that the worked example effect reverses with increases in expertise. Increased guidance had a similar negative effect on both higher and lower element interactivity material. In other words, in contrast to Experiment 1, generation (low guidance) was superior for both lower and higher element interactivity material.


What is the key principle of the paper?

Low guidance during instruction (the generation effect) is more effective for knowledgeable learners with expertise. High guidance (the worked example effect) is more effective for novice learners.


What does this look like in practice?

Novices need more guidance and are more likely to benefit from worked examples to chunk new learning, thus supporting their understanding. The example below shows what a typical ‘non-worked example’ task sheet is like, compared to a ‘worked example’, with notes:


Click image to view larger

*This is a simple example in maths. Worked examples can be used across a range of subjects and a quick Google will reveal them in subjects such as English, PE and Geography to name a few.


To conclude, when you initially assess learner knowledge, where they are relatively novice, consider how you will support them to more effectively learn new content by using worked/model examples. If dealing with relative experts, consider using more problem based tasks (more to follow on this).


Once again, if I have misunderstood anything, feel free to let me know. If you have any examples you’d like to share, please leave them in the comments below.




Principle to practice: the split-attention effect


After Mike Tyler’s excellent presentation at the recent #FEShowcase18 where he explained Soderstrom and Bjork’s work and made clear links between their principles and practice, I was inspired to explore a range of research articles on learning and memory and present it in a similar fashion. I’ve been off the blogging scene for a while and hope to reinvigorate the blog by producing at least one of these short posts per week over the summer period, using research to make clear and practical suggestions to support teachers.

Here’s my first one by Chandler and Sweller (1992):


What’s the paper about?

This paper centres on the impact of instruction. Using cognitive load theory as a basis for the work, the authors argue that many methods of instruction are ineffective as they involve greater extraneous load (irrelevant things that impact on our very limited working memory e.g. fancy presentations). When information is presented by two different sources, e.g. text and diagrams, learners have to split their attention. This can cause extra ‘load’ in working memory as they are having to make sense of two different sources of information. The authors call this the ‘split-attention effect’.

What was the aim of the paper?

Essentially, the aim was to determine the impact of the split-attention effect on learning.

What did they do?

Their research involved two experiments:

  • Experiment 1 compared conventional text-and-diagram instructions (where the text instructions sat above the diagram they related to) with physically integrated instructions (where the text instructions were integrated into the diagram) for 20 engineering apprentices learning a milling process. Post-test results for the integrated group were considerably higher than for the conventional group.
  • Experiment 2 focused on 20 psychology students who answered questions on a traditional research paper versus a paper with the methodology and results sections integrated. Once again, the integrated format was far more effective than the conventional format.


What is the key principle of this paper?

Not only should diagrams and text be integrated; the evidence is strong that learning can be enhanced by integrating mutually referring sources of purely textual information (e.g. the method and results sections of a paper).

What does this look like in practice?

The example below shows the flow of blood through the heart. Image A is an example of a typical worksheet that one might find in a classroom. This requires the learners to split their attention back and forth between the image and the text. Image B, on the other hand, is integrated: the text accompanies the diagram, and this reduces the unnecessary load on working memory as learners do not have to switch between the text and the diagram.

Image A
Image B

So… how might you use the split-attention effect research to support your teaching?

*I’m happy to be corrected on any misunderstanding. Feel free to comment.

Interpolated Testing – What is it exactly and what are the benefits?

It seems I may have misunderstood interpolated testing in a recent blog post. I assumed, by definition, that interpolated testing meant switching between new and old learning when testing (or quizzing) learners. For example, a typical starter quiz where a teacher asks questions on previous learning whilst also assessing learners’ understanding of the new.


This understanding was corrected (or confused further?) in a recent lecture on interpolated testing by Dr Philip Higham of the University of Southampton. The talk was fascinating and raised several more questions I wish to consider, particularly in relation to the conflict between desirable difficulties and cognitive load theory (more to follow on this).


The aptly titled ‘PowerPointless’ began with Philip espousing desirable difficulties and the “metacognitive illusions” that exist in learning – massed practice, fluency, lecturer style, testing is bad etc.


Philip then explained a series of experiments that he has been working on with a PhD student, each investigating the impact of slide handouts during lectures. Six lab-based experiments were conducted, with pre-recorded lectures given to various groups:

  • Group A – The control group where learners were asked to observe the lecture without taking notes
  • Group B – This group were provided with lecture slides and could annotate these as they wished
  • Group C – This group took notes from the slide for themselves
  • Group D – This group were asked to take notes as if they were for a friend (it was suggested that the notes would be clearer and better organised through doing this)

The various experiments changed variables such as speed of presentation, fluency of presentation and used various topics. Learners were tested immediately after each experiment and then sat a delayed test one week later.


Results consistently revealed that taking notes (of any kind) was significantly better than not taking notes or relying on the slides provided for the lecture (Groups A and B). This is significant for all teachers who provide a copy of slides to their learners. Think about how you can encourage learners to take their own notes during sessions – I wrote a blog about the Cornell method a few years back that may be of use for this.


The experiments progressed further, and Group E was added; these individuals were the ‘interpolated testing’ group. This group experienced the lecture in short segments of around ten minutes, each immediately followed by generating their notes in a retrieval-type manner (this is what is referred to as interpolated testing, and the literature that I have read to date typically uses this approach: a short introduction to new learning immediately followed by retrieval of that new learning). As a side note, is this not just part of what teachers do for formative assessment? (Perhaps formative assessment is so effective due to the retrieval aspect rather than the feedback?)
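The structure described for Group E might be sketched as a simple session plan (the segment lengths and retrieval time here are illustrative, not figures from the talk):

```python
def interpolate_lecture(segments, segment_minutes=10, retrieval_minutes=3):
    """Hypothetical plan: deliver content in short segments, each
    immediately followed by a retrieval task on that segment only —
    the structure described for the interpolated-testing group."""
    plan = []
    for topic in segments:
        plan.append((f"teach: {topic}", segment_minutes))
        plan.append((f"retrieve: {topic}", retrieval_minutes))
    return plan

for step, minutes in interpolate_lecture(["working memory", "schemas"]):
    print(f"{minutes:>2} min - {step}")
```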

The results showed that the retrieval and generation of notes (Group E) improved both immediate and delayed (1 week) test results compared to the other groups.


All groups were then provided with 8 weeks of revision time using the same lecture handouts containing all answers. Following this, they were tested on the material. Results showed no significant difference between those that took their own notes and those that did the interpolated testing. However, the results did show that the amount of time learners spent revising during the 8 weeks was significantly lower for those that did the interpolated testing (Group E).

These findings are significant:

  • Taking your own notes is highly effective for improving long term retention of information
  • To reduce study time, learners are better off learning via interpolated testing


It is worth noting that much of the literature (see Szpunar et al.) is positive about interpolating, specifically for improving learners’ long-term retention and motivation. The reasons suggested by Davis et al. for interpolated testing not being as effective as first thought are:

  1. The learners spending more time thinking about the prior learning and correcting this, over moving to the new learning
  2. The more times that there is switching between old and new learning, the more the task switching effect will occur, thus impeding the new learning

In spite of this, there is a suggestion that ‘test-potentiated learning’ (recalling prior knowledge) is actually beneficial and supports the acquisition of new information. This obviously needs further study, but it suggests that teachers need to be mindful of how often they switch between delivering new learning and retrieving it during sessions.


*NB. These were some of my notes and inferences from the lecture. The data and information are my interpretation of what was shared. A huge thanks to Dr Philip Higham for sharing this information and challenging my thinking.

I would like to explore my initial thoughts on interpolated testing a little further, as I expected a delayed retrieval, rather than retrieval immediately after encoding. This allows for greater forgetting, which one would think is better… anyway, more thought needed on this.

Retrieval and encoding: getting off to a good start

Since developing my understanding of formative assessment and retrieval practice, I have always attempted to ensure that my lessons begin with a clear recap of prior learning (retrieval), coupled with questions about forthcoming learning which allow me to identify gaps in knowledge to support my delivery. It makes sense to kill two birds with one stone, but alas, I may have been doing this wrong for some time.

Before I continue, it is at this juncture that I feel the need to distinguish between two distinctly separate processes that I will be talking about in this post:

‘Encoding is the process of moving information into your long term memory (LTM) via your working memory (WM) i.e. learning new things

Retrieval is the process of pulling information out of your long term memory and into your working memory. This strengthens the memory in the long term.’

(Definitions courtesy of Adam Boxer)

Last week educational psychologist Daniel Willingham shared a research paper (here) arguing that frequent switching between retrieval practice and encoding impairs new learning – even Willingham himself appeared shocked in his tweet.


It is clear from the paper that whilst retrieval practice strengthens learning, it is reported that ‘interpolated’ testing can sometimes impair new learning. In essence, having a starter quiz with mixed learning (prior and new) may not actually be as useful as one might have thought.

I have been able to make some sense of this thanks to a blog by Adam Boxer (here), who reminds me of some of the key differences between encoding and retrieval:


Now, whilst a quiz at the start of a session isn’t really going to support encoding during the asking of questions, it will create extra cognitive load when the teacher clarifies the responses. Moving between retrieval of old information and encoding of new creates an undesirable extraneous load, due to the effort required to switch between the two distinctly separate processes (encoding and retrieval). Furthermore, it is suggested by Davis et al (2017) that mixing retrieval practice with encoding might bias learners’ attention towards relearning the old information, thus impeding new information being learnt.

As a result of this information, it is suggested that retrieval practice (i.e. recapping prior knowledge) be done separately to the delivery of new learning. Your learners may benefit from a separate recap quiz and initial assessment, or by omitting the initial assessment altogether and just concentrating on retrieval prior to the delivery of new information… I’ll certainly be revisiting my practice.

Intro to Working Memory and Cognitive Load

I don’t get the time to update the blog much these days, but here are some of the slides from a session I did last week with trainee teachers. If they are of any use to you, feel free to use them:


Special thanks to Oliver Caviglioli for his design work that inspired the design of these slides.

Feedback and what good ‘looks like’

I’ve been thinking a lot about feedback lately and reminiscing on my younger days as a sports coach. When introducing a new skill to an individual, it was imperative that I could model, or show an example of what good looks like, otherwise learners would simply not know what they were aiming to achieve.


Learning something new is really challenging, and it becomes more so if we don’t know what good ‘looks like’. I’m not an engineer, but let’s take the example of learning a fillet lap weld. Without seeing what a good fillet lap weld looks like, it would be nigh on impossible for a learner to produce one successfully. Or take the correct use of apostrophes – without seeing the various uses of an apostrophe, one simply wouldn’t know how to use it.


Just knowing what good ‘looks like’ isn’t enough to learn something effectively however. Along the way to mastering a fillet lap weld, or correct apostrophe use, there’ll no doubt be mistakes made. This is where feedback is essential. According to Ramaprasad (1983, p.4) ‘feedback is information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap in some way’. In other words, feedback should identify the strengths and weaknesses of performance in relation to what good ‘looks like’. But is it that simple?


No. In 1996, Kluger and DeNisi explored the effects of feedback on performance. Their meta-analysis revealed that on average, feedback improved performance but bizarrely, in over a third of cases, feedback actually impeded performance. Upon further exploration, their work revealed that the more effective feedback focussed on the quality of the work (task-oriented), rather than the person (ego-oriented). In other words, focus was on the strengths and areas for development of the work, rather than assigning numbers or grades to the work, which allow for comparisons between learners. In addition to this, they found that more effective feedback focussed on what and how the individual could improve their performance (the future), rather than focussing too much on the performance itself (the past). I liken this to the analogy of driving a car. If we focus too much on what we can see in our rear view mirror, we’ll probably crash (image 1). Whereas, if we acknowledge our mirror, but focus our attention on the road in front, we’re more likely to be moving forward positively (image 2).

Similar findings were noted in the work of Hattie and Timperley (2007); they determined that feedback was best served with clear goals for improvement. If we think back to my above mentioned point about knowing what good ‘looks like’, if feedback is provided in relation to a good example of a fillet lap weld and looks at how current work could be developed to achieve a good standard, then it is more likely that the learner will make improvements.


The thing with feedback is that it becomes extremely challenging for a teacher to provide 20-30 learners with regular individual feedback in a session. Here’s the thing, you don’t need to. Once learners are clear with what good ‘looks like’, there are 20-30 other resources at a teacher’s disposal, so why not ask them to provide feedback to one another?


Some common methods to do this are identified in Petty’s (2009) fantastic Evidence Based Teaching book. One of his diamonds is the ‘medal and mission’ approach – very simple, yet also very effective. Firstly, task-centred information is provided to the learner in relation to the goals (what good ‘looks like’) – the medal. Following this, learners are given a clear target for improvement in relation to the goal – the mission. For example:


‘Jamal, you have clearly fit-up the plates accurately and your weld indicates that the distance to the joint was good, as the arc is the correct depth (medal). If you look at the model example, the bead size is slightly larger. To increase the size of the bead, you need to decrease the speed that you move along the joint. In your next attempt, continue in the same manner as before, but with a slightly slower speed’ (mission).


Similar approaches that may be used include:

  • 2 Stars and a Wish – useful for peer assessment, the learners give one another 2 stars (i.e. 2 things they think their peer has done well in relation to what good ‘looks like’) and a wish (i.e. something they wish could be improved upon in relation to what good ‘looks like’).
  • WWW/EBI – as before, this acknowledges the past – What Went Well (in relation to what good ‘looks like’), before looking to the future with clear guidance for improvement, Even Better If…(in relation to what good ‘looks like’).


Whilst peer feedback is really useful, it is worth noting the limitations of the above approaches. Indeed, Nuthall (2007) acknowledges that around 80% of feedback in a typical classroom is between peers, yet around 80% of that feedback is inaccurate. If we can provide suitable structures, such as the above, and ensure that clear success criteria are provided (what good ‘looks like’), then we improve the effectiveness of peer-to-peer feedback.


To summarise, if we really want to maximise feedback in classrooms, we need to ensure the following:

  • Everyone is clear with what good ‘looks like’
  • Feedback looks forward and not back
  • Feedback focuses on the task and not the person
  • Feedback involves everyone



Hattie, J. and Timperley, H. (2007). The power of feedback. Review of Educational Research. 77 (1), p. 81-112.

Kluger, A.N. and DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis and a preliminary feedback intervention theory. Psychological Bulletin, 119 (2), p. 254-284.

Nuthall, G. (2007). The Hidden Lives of Learners. Wellington: NZCER Press.

Petty, G. (2009). Evidence Based Teaching. Cheltenham: Nelson Thornes.

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28, 4–13.

Think about thinking hard

I recently stumbled across this statement in Coe’s excellent ‘Improving Education’ publication and it really hit home:

Some research evidence, along with more anecdotal experience, suggests that students may not necessarily have real learning at the top of their agenda. For example, Nuthall (2005) reports a study in which most students “were thinking about how to get finished quickly or how to get the answer with the least possible effort”. If given the choice between copying out a set of correct answers, with no effort, but no understanding of how to get them, and having to think hard to derive their own answers, check them, correct them and try to develop their own understanding of the underlying logic behind them, how many students would freely choose the latter? And yet, by choosing the former, they are effectively saying, ‘I am not interested in learning.’

Coe goes on to inform us that ‘learning happens when people have to think hard’. But how do we ensure that learners are both thinking hard and putting effort into their learning? Easier said than done, eh?


Here are some ideas for you to think about using with learners at the start of the academic year:

  1. Teach students about the importance of hard work and effort: Now this is no easy feat. Marzano informs us that this can have a high effect on achievement and suggests sharing examples of personal experiences, or those that learners can relate to. He also suggests that learners self-assess their effort in lessons when self-assessing achievement against success criteria – not something I have tried myself, but certainly one to consider.
  2. Establish routines early: For those working in an FE college, most learners are joining your class with no idea as to what to expect. They will be in new surroundings, with new people, and this is a great opportunity to establish high expectations in the classroom – start as you mean to go on! If you have learning activities that require little effort, or if learners are allowed to put little effort in, then guess what? Yes, that will be the routine for the year.
  3. Find out what learners know and use the information: Initial assessment is crucial, but I’m not talking about the whole routine of sticking learners on a computer to complete a maths and English IA to determine… well, not-a-lot. What I’m talking about is finding out what the learners know about your subject. Give them an advance organiser to help them identify current knowledge and how this fits with the information they’re going to learn. Use what they know to help them make sense of new information, to challenge misconceptions and to give a clear direction to the learning that they’re about to embark on.
  4. Organise information: Building on from the above, the more organised the information that learners are dealing with, the better. Provide a range of concrete examples to explain abstract concepts and use both verbal and visual information simultaneously (dual coding) to reduce cognitive load. Cognitive science research also indicates the benefits of revisiting information on several occasions over the term/period of learning (distributed practice) to enhance retention. There are many other strategies that have shown time and time again to be effective – summarised clearly for teachers by the learning scientists (every teacher needs this in their life).
  5. Test learners regularly: As with the above, our memory trace is improved when we have to work hard to retrieve information from long-term memory, thus improving retention. Therefore, we should aim to test learners frequently through mini quizzes and self-testing. This not only supports retrieval practice, but it also allows both teacher and learners to identify strengths and any misconceptions that learners have, thus allowing for appropriate intervention.

All of the above are simple ‘off the shelf’ strategies that may help to increase effort and ensure that learners are working and thinking hard in your classrooms. They are not silver bullets, and may work better in some situations than others, but all are worth considering – particularly as the new term is about to begin.