Feedback and what good ‘looks like’

I’ve been thinking a lot about feedback lately and reminiscing on my younger days as a sports coach. When introducing a new skill to an individual, it was imperative that I could model, or show an example of what good looks like, otherwise learners would simply not know what they were aiming to achieve.


Learning something new is really challenging, and it becomes more so if we don’t know what good ‘looks like’. I’m not an engineer, but let’s take the example of learning a fillet lap weld. Without seeing what a good fillet lap weld looks like, it would be nigh on impossible for a learner to produce one successfully. Or take the correct use of apostrophes – without seeing the various uses of an apostrophe, one simply wouldn’t know how to use it.


Just knowing what good ‘looks like’ isn’t enough to learn something effectively, however. Along the way to mastering a fillet lap weld, or correct apostrophe use, there’ll no doubt be mistakes made. This is where feedback is essential. According to Ramaprasad (1983, p.4), ‘feedback is information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap in some way’. In other words, feedback should identify the strengths and weaknesses of performance in relation to what good ‘looks like’. But is it that simple?


No. In 1996, Kluger and DeNisi explored the effects of feedback on performance. Their meta-analysis revealed that, on average, feedback improved performance, but bizarrely, in over a third of cases, feedback actually impeded performance. Upon further exploration, their work revealed that the more effective feedback focussed on the quality of the work (task-oriented), rather than the person (ego-oriented). In other words, the focus was on the strengths and areas for development of the work, rather than on assigning numbers or grades to the work, which allow for comparisons between learners. In addition to this, they found that more effective feedback focussed on what and how the individual could improve their performance (the future), rather than focussing too much on the performance itself (the past). I liken this to the analogy of driving a car. If we focus too much on what we can see in our rear view mirror, we’ll probably crash. Whereas, if we acknowledge our mirror, but focus our attention on the road in front, we’re more likely to be moving forward positively.

Similar findings were noted in the work of Hattie and Timperley (2007); they determined that feedback was best served with clear goals for improvement. If we think back to my earlier point about knowing what good ‘looks like’, if feedback is provided in relation to a good example of a fillet lap weld and looks at how current work could be developed to achieve a good standard, then it is more likely that the learner will make improvements.


The thing with feedback is that it becomes extremely challenging for a teacher to provide 20-30 learners with regular individual feedback in a session. But here’s the thing: you don’t need to. Once learners are clear about what good ‘looks like’, there are 20-30 other resources at a teacher’s disposal, so why not ask them to provide feedback to one another?


Some common methods to do this are identified in Petty’s (2009) fantastic Evidence Based Teaching book. One of his diamonds is the ‘medal and mission’ approach – very simple, yet also very effective. Firstly, task-centred information is provided to the learner in relation to the goals (what good ‘looks like’) – the medal. Following this, learners are given a clear target for improvement in relation to the goal – the mission. For example:


‘Jamal, you have clearly fit-up the plates accurately and your weld indicates that the distance to the joint was good, as the arc is the correct depth (medal). If you look at the model example, the bead size is slightly larger. To increase the size of the bead, you need to decrease the speed that you move along the joint. In your next attempt, continue in the same manner as before, but with a slightly slower speed’ (mission).


Similar approaches that may be used include:

  • 2 Stars and a Wish – useful for peer assessment, the learners give one another 2 stars (i.e. 2 things they think their peer has done well in relation to what good ‘looks like’) and a wish (i.e. something they wish could be improved upon in relation to what good ‘looks like’).
  • WWW/EBI – as before, this acknowledges the past – What Went Well (in relation to what good ‘looks like’), before looking to the future with clear guidance for improvement, Even Better If…(in relation to what good ‘looks like’).


Whilst peer feedback is really useful, it is worth noting the limitations of the above approaches. Indeed, Nuthall (2007) acknowledges that around 80% of feedback in a typical classroom is between peers, yet around 80% of that feedback is inaccurate. If we can provide suitable structures, such as the above, and ensure that clear success criteria are provided (what good ‘looks like’), then we improve the effectiveness of peer-to-peer feedback.


To summarise, if we really want to maximise feedback in classrooms, we need to ensure the following:

  • Everyone is clear with what good ‘looks like’
  • Feedback looks forward and not back
  • Feedback focuses on the task and not the person
  • Feedback involves everyone



Hattie, J. and Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), pp. 81–112.

Kluger, A.N. and DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis and a preliminary feedback intervention theory. Psychological Bulletin, 119 (2), pp. 254–284.

Nuthall, G. (2007). The Hidden Lives of Learners. NZCER Press.

Petty, G. (2009). Evidence Based Teaching. Cheltenham: Nelson Thornes.

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28, pp. 4–13.


Questioning questioning 

Since Geoff Petty shared his ‘which questioning’ strategy with me around 6 years ago, I have been on a mission to hone my questioning. It is a great little activity that really gets you thinking about making effective use of questions. To this day, I use an adapted version of the activity with my own trainees. Indeed, I often focus observation feedback on the development of questioning as an essential formative assessment approach.


It’s easy to see why this is the focus of many teachers up and down the country. Hattie’s synthesis of classroom experiments (2015) found questioning to have a modest, but positive effect size of 0.48 and the resulting classroom discussion a huge 0.82.

The thing is, I’ve found more and more that trainees are focusing too much on questioning individuals (they do it well), and spending less time on instructing or allowing learners to practise. It seems that ‘the question’ has taken precedence over ‘the answer’.

I observed a session recently where the teacher insisted on working their way around the class with questions, yet many of the learners didn’t have sufficient prior knowledge to allow them to explore understanding through discussion. The strategy carried a greater opportunity cost than one might have thought. Because questioning is a strategy held in high regard, I can understand why they persisted, but it just didn’t help the learners. Instead, the group lost interest rather quickly and low-level disruption ensued.

Were the teacher to use questioning more efficiently (the second time I’ve used this term in as many posts), through a selection of multiple choice questions which can be answered by all in a short time, they might have realised that the learners required some input/guidance to increase knowledge and enable greater participation in discussions.

Arguably a good starting point for thinking about questioning in the classroom is to ask yourself what the purpose is. Is it to assess learner knowledge/understanding, or is it to teach learners something through discussion? Perhaps it is both, but the main reason should influence the type of questions used. Personally, I use questioning as an assessment tool and the quicker I am able to assess ALL learners the better, so that I can identify gaps in knowledge that need filling. I’m not dismissing questioning as a means to generate good class discussion, but appreciate that time is of the essence with our learners and we should aim to maximise every last drop of it.

ResearchEdFE – Oliver and me

Last week (03.12.16), Oliver and I delivered our ‘Choose Science, Not Myths’ presentation at the first ResearchEd devoted to Further Education.

Below are the slides from the presentation and Oliver kindly put together the presentation notes in his blog here and here.

The first part of the presentation explored a range of myths and while it is acknowledged that the jury is still out on some of these, it is important to remember that we were attempting to be contentious in order to spark debate. The second part of the presentation explored a range of effective learning strategies which are supported by both classroom experiments and cognitive science.

Own your room

Not all teachers have the luxury of their own classroom; many have to move from room to room for their lessons, but regardless, one of the biggest pieces of advice that I give to teachers when managing classroom behaviour is to OWN YOUR ROOM.

When I started out teaching, I would often arrive at my classroom 2 minutes before the lesson to find students already in the room, sometimes eating/drinking, on phones, generally treating the place as a common room rather than a place of learning. This put me on the back foot as a teacher. I couldn’t arrange the tables as I saw fit, so would try to involve the learners in moving the room around (mayhem). Then trying to get them to sit where I needed them became even more of an issue. I had to start negatively by enforcing rules that learners should have been following – “put your drink in your bag”, “put your phones away” – so getting learners focussed on the lesson became difficult. Basically, I was taking part in unnecessary battles when I should have been inspiring learners to learn about my subject. So after a terrible first year, here’s what I started to do – I owned my room. Below are some simple strategies that can help you to do the same:

  1. Where possible, arrive at your room before your learners and if they are in the room before you for whatever reason, ask them to leave whilst you set up. Do not work around them in your classroom – even if it means delaying the lesson start by a few minutes until you are ready.
  2. Where possible, set the room layout differently to last time (or at least vary the seating plan a little). Learners get comfortable very quickly, and as soon as they take control of a seat, it’s very difficult for a teacher to regain control of the classroom. In addition to this, research by Smith (1985) has demonstrated the benefits of multiple learning environments on memory. Whilst not a completely different environment, the variation in position in the room may mean learners rely on fewer environmental cues for memory.
  3. Welcome every student at the door. This not only sets a positive tone for the session, but it also allows you to prevent any misdemeanours prior to them entering your classroom. At this point, you can also start to direct them to where you want them to sit. “Morning Kye, please sit there” (Note: I have not asked Kye if he would mind sitting there, but have told him politely).
  4. In most instances, I’d suggest that you begin the class swiftly with an overview of the expectations for the session. That way, there will be no surprises along the way. “Here is what we are doing today and this is what I expect from you”. Further to this, according to Marzano (2003, cited in Petty), the use of ‘reminders’ has a 0.64 effect-size on achievement and is a useful strategy for developing student-teacher relationships in the classroom. This sense of clarity with expectations for learning is supported further by the work of Wiliam on formative assessment. Thus starting most sessions in this way is desirable.
  5. Recap prior learning so that students can draw upon what they already know about the topic. Supported by a wealth of cognitive psychology research, low-stakes testing offers a multitude of benefits. Not only does it allow for initial assessment to take place (if done properly*), but it also allows learners to take part in retrieval practice. This is a low-cost, high-impact strategy to support learner acquisition of knowledge, which can be built upon as the lesson progresses. In terms of behaviour, this will provide a routine for learners, and even the most challenging learners like routine.
  6. Try to avoid large group work. When it comes to group work, anything more than groups of 3 and I start to worry about the benefit to all involved. Slavin, Hurley and Chamberlain offer two key features of effective group work (working towards the same goal and having accountability for one another’s learning), but even so, it becomes very difficult for a teacher to manage large groups. I tend to stick to paired activities in the main, but that’s my preference. If you can be confident that all members are participating fully and are getting the most from their experience in the group (and I’m not talking ‘soft skill’ nonsense), then fine, but larger group sizes do create the conditions for behaviour to go awry. My ‘go-to’ strategy is think, pair, share. A great post on the strategy by HeadGuruTeacher can be found here, and in using it well, the teacher maintains their control, thus their ownership of the room.


There are many more ways of owning your classroom, but I generally offer the above 6 tips for my trainees to enable them to then make decisions based upon their own contexts. I haven’t discussed classroom rules, rewards or punishments, because there’s a whole blog post in that, but these are just simple strategies that can be adopted with relative ease. If you find learner behaviour a struggle, then perhaps try owning your classroom.


* For effective initial assessment, consider using multiple choice questions along with a whole-group answer approach, whereby mini whiteboards, individual handheld devices, or simply fingers up are used to determine each learner’s starting point. Do not resort to the ‘ask an open question and only the most confident shout out’ approach.

Should we spend more time designing multiple choice questions? a) Yes b) No

I am the first to admit that when I plan my lesson resources, I spend far too long making them aesthetically pleasing. Of course, I try to ensure that my instructional design is efficient and the content is challenging, but I enjoy making the resources look great too. There are probably many others, just like me.


In this post I’d like to look at formative assessment design, specifically multiple choice questions (MCQs), and I will argue why I need to spend more time focussing on designing these and probably less time on how ‘funky’ my resources look.


Let me explain the benefits* of multiple choice questions before I go on to how you might approach the design of them:

  1. The ‘testing effect’ – Frequent quizzing has been shown to enhance later exam performance (McDermott et al, 2014) as learners are provided with the opportunity of retrieval practice. There is, however, research indicating that MCQs might not be as effective for retrieval as short answer questions, because the answer is available and learners are not required to think as hard (Kang, McDermott and Roediger, 2007).
  2. Identifying gaps in knowledge – We can quite reliably use multiple choice tests to identify the gaps learners have in their knowledge. We can use this information to close the gaps in knowledge for groups or individuals. Whilst there is the argument that learners can guess answers, if we increase the number of plausible incorrect answers and the number of questions to respond to, we do increase reliability, as illustrated in the work of Burton et al (1991).
  3. Furthermore, when we ask questions to learners in class, each learner is usually asked something completely different and therefore, this results in completely different answers. Whilst I think questioning is useful, it isn’t a reliable measure of learner understanding; plus, we only know the response of the one learner we ask, not the others.
  4. They can be used as a diagnostic tool – If the questions are written such that we can determine why the learner is selecting a particular answer, then we can start to diagnose problems with their cognitive processing (Wylie and Wiliam, 2006). Wiliam advocates using a small number of these questions at a hinge point in the session, whereas I’d argue that they’re useful at any point. The problem is that they are quite challenging and time-consuming to create, as Harry Fletcher-Wood and others have found. I’m still developing my thinking on these, but here is a video by Wiliam which explains it in a little more detail – “Kids cannot get it right for the wrong reason”.
  5. Quick, visible responses  – When the learners answer these questions, we need to be able to see the responses of all individuals. This can be done quickly and with ease through the use of mini whiteboards or holding fingers up. The teacher can view the whole class in a few seconds and determine whether they can move on, or revisit information accordingly. The wealth of formative assessment online tools also allow for MCQs to be administered to all, and automated marking can generate clear analytics quickly. I personally like Google Forms, but there are many other online tools offering similar features.
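The guessing argument in point 2 can be made concrete with a quick simulation. This is not from Burton et al or any of the cited studies – just an illustrative sketch of why a pure guesser scores less, on average, as the number of plausible options per question grows:

```python
import random

def mean_guess_score(num_questions, num_options, trials=20000):
    """Average percentage score of a learner guessing every question at random."""
    total = 0.0
    for _ in range(trials):
        # Treat option 0 as the correct answer; a random pick hits it
        # with probability 1/num_options.
        correct = sum(1 for _ in range(num_questions)
                      if random.randrange(num_options) == 0)
        total += 100 * correct / num_questions
    return total / trials

# With more plausible options per question, guessing pays off less:
for options in (2, 3, 4):
    print(f"{options} options, 10 questions: "
          f"~{mean_guess_score(10, options):.0f}% by guessing")
```

With 2 options a guesser averages around 50%, with 4 options around 25%; adding questions doesn’t change that average, but it does make a lucky high score from guessing far less likely, which is the reliability gain described above.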


Designing the Stem

In designing effective MCQs I have found several research articles and documents (1,2,3,4), each of which offers similar advice when writing questions and answers. Here are some of the key things to consider when writing the stem:

  1. The stem should be meaningful by itself and should include the main idea.

Basically, this means that the main thing you’re trying to find out about should be in the stem. E.g. What chamber does deoxygenated blood enter in the heart? (I am trying to find out if they know about chambers of the heart).


  2. The stem should not contain irrelevant material.

This will just serve to confuse learners and can cause more harm than good. Learners may create ‘false knowledge’ if the information is not relevant.


  3. Avoid a negatively written stem.

Where negatives are used in the stem, this can make the question easier according to Harasym, Price, Brant, Violato, and Lorscheider (1992). Furthermore, negatives can cause ambiguity in what is being asked and just because a learner knows an incorrect answer, this doesn’t mean that they know the correct one. Here’s an example by Burton et al (1991).

Which of the following is not true of George Washington?

  1. He served only two terms as president.
  2. He was an experienced military officer before the Revolutionary War.
  3. He was born in 1732.
  4. He was one of the signers of the Declaration of Independence.


Designing the Answers

In a meta-analysis of MCQ research, Rodriguez (2005) informs us that it is the quality of distractors, not the number, that matters when designing answers for MCQs. Writing more than two plausible distractors becomes challenging and is not significantly more effective than having fewer, so offering options A–C is fine if you are struggling to produce A–D. The key elements that should be addressed when writing the answer options are:

  1. All alternatives should be plausible.

Essentially, each incorrect answer should be plausible. In the example below, there is clearly one implausible response:

In what year was Winston Churchill first chosen as Prime Minister?

  1. 1700
  2. 1940
  3. 1941
  4. 1942


  2. Alternatives should be stated clearly and concisely.

Try to avoid unnecessary ‘waffle’, so that the cognitive burden of interpreting the question is reduced.


  3. Alternatives should be mutually exclusive.

There should not be more than one answer that can be defended as a correct response by using correct reasoning. An example by Burton et al (1991) shows two possible correct answers:

How long does an annual plant generally live?

  *a. It dies after the first year.
  b. It lives for many years.
  c. It lives for more than one year.
  *d. It needs to be replanted each year.


  4. Alternatives should be free from clues about which response is correct.

Avoid including a word from the stem in the answers. This can provide a clue to the answer, and some learners may think of it as a trick question and go with an alternative answer. For example:

What muscle is the agonist on a bicep curl?

  1. Bicep
  2. Deltoid
  3. Hamstring
  4. Tricep


  5. The alternatives “all of the above” and “none of the above” should not be used.

This speaks for itself: more often than not, this option is the correct answer in MCQs.


  6. The alternatives should be presented in a logical order.

The suggested approach is to order alternatives numerically or alphabetically, to avoid any clues as to which is the correct response. When working with City and Guilds on a project a few years ago, I was also advised to avoid having one answer’s starting letter stand out from the rest. For example, in the first set of answers below, Hungary clearly stands out and this might lead learners to respond with that, whereas in the second set, each starting letter is different and leaves no clues:

  a. Germany                 a. Germany
  b. Ghana                   b. Hungary
  c. Greenland               c. Poland
  d. Hungary                 d. Russia


As can be seen, writing MCQs isn’t something you can throw together 5 minutes before a lesson. To make them effective, it requires time and a number of elements need to be addressed. I’d suggest working with your colleagues to build a bank of questions.


* I have stumbled across a couple of problems with MCQs which are worth examining. @surrealyno has written a short piece on the disadvantages of MCQs, and Roediger and Marsh (2005) also found that using MCQs could lead to ‘false knowledge’ in some students, where they believe an incorrect answer to be true. Due consideration of the above-mentioned points will certainly mitigate some of these concerns, and I’d argue that the advantages of using MCQs outweigh the disadvantages, particularly compared to alternative methods of assessment.


Lesson objectives – what are we measuring?

“The lesson objectives could have been written a bit more measurable.”


The questions I want to answer in this blog post are the following:

  • What does that statement mean?
  • Do learning objectives need to be written so that they’re measurable?
  • How should we write lesson intentions to maximise learning?


Or should I set myself an objective? By the end of this post I will:

  • Be able to identify what a measurable lesson objective is.
  • Be able to analyse the impact of measurable objectives.
  • Be able to identify different methods of writing lesson intentions.


The comment in the title was made during a recent joint observation. Whilst on the surface it appeared to make sense, upon reflection, I’m not convinced by it and would like to explore it further.

Current educational ideologies (particularly in vocational education) lead to a primarily product-based curriculum, whereby meeting behavioural objectives forms the basis of our teaching, with teachers accountable for and judged on their ability to produce results, as opposed to the more developmental, process-based curriculum. I don’t necessarily have a problem with this – in fact, I’m broadly in favour of this type of curriculum – but whatever curriculum approach is chosen, I do have a problem with being told that lesson objectives/outcomes/intentions/anything else you want to call them should look a certain way.

Many in the FE and Skills sector (perhaps education more generally) see learning as a linear and singular process of moving learners from A to B in a lesson, and once they’ve achieved a desired outcome, it is assumed that the learners have learnt. This view is wrong. Learning is liminal (particularly in post-16 education), with learners in a continual stage of development towards a longer term outcome over a series of lessons. I’ve said before that if ever a definition of learning could be agreed, it would certainly involve something about knowledge acquisition and probably something to do with long term memory and being able to retrieve information. Therefore, learning does not happen in isolated lessons.

 As David Didau (2015, p.279) notes:

‘all too often our learning intentions are lesson menus: here is what you should know or be able to do by the end of today’s lesson. Unless we have very low aspirations for our students, they are unlikely to do more than merely mimic the understanding or expertise we want them to master.’ David goes on to say that ‘if we were to share our intention for students to learn threshold concepts, then we could tell them that it might take them weeks to wrap their heads around such troublesome knowledge’.

This is also supported by Hussey and Smith, who argue that:

‘learning outcomes cannot be defined with the kind of precision that has been supposed, that they stand in need of interpretation within a context… The idea, currently popular—that first year degree students must describe, second year students must explain and evaluation should characterise their work in the third year—must be replaced with the idea that these activities are visited and revisited as the students progress and in accordance with the requirements of the subject matter.’

Indeed, Hattie (2012) cites a very low (0.12) effect-size for behavioural objectives. With the aforementioned in mind, the whole notion of measurable objectives in a single lesson is beginning to look absurd.


According to Hattie (2012), however, there are five essential components of learning intentions and success criteria to support effective learning: challenge, commitment, confidence, high expectations, and conceptual understanding. I have said before that I’m not convinced about success criteria here, but in Wiliam and Thompson’s (2007) work, they hold the work of Wiggins and McTighe (2000) in high regard. This work advocates a two-stage approach to creating and sharing learning intentions with learners: clarifying the learning goals (what is worthy and requiring of understanding?), and establishing success criteria (what would count as evidence of understanding?). Perhaps it is the success criteria where things become measurable?


I think about my own practice as a teacher trainer. If I were to teach, say, formative assessment and set my objective as:

‘To understand the 5 key strategies of formative assessment’ 

This is certainly worthy of understanding. Then if I were to make the success criteria measurable:

‘learners will be able to identify 5 key strategies for formative assessment’

 ‘learners will be able to explain the 5 key strategies of formative assessment’.

Whilst this is measurable and learners may be able to do both of these by the end of the lesson, this would be merely a performance, not learning. Moreover, there may be learners who can critically analyse 3 of the strategies (beyond the success criteria) and know very little about the other 2. Does this mean that the lesson has failed? Of course not – it all seems rather short-sighted and restrictive.


Fuchs and Fuchs inform us that:

‘teachers may prefer short-term goal measurement because it is easier to understand and it guides instruction more directly by providing information about when to progress from one skill to another [however] short-term goal measurement may be misleading: While students master a series of instructional objectives, progress on more global indices of achievement may be limited, failing to reflect this gain’.

For this reason, I ask myself whether learners would benefit more from exploring a question or testing a hypothesis over a series of lessons:

What makes formative assessment effective? (question)

Formative assessment is only effective when feedback is provided (hypothesis)


With something like the above, the lesson intention is broad in the sense that it allows for a range of outcomes in the lesson and over a series of lessons, but tight enough to focus the learners on the content and be clear about what they’re learning. Clarity is key. I believe that learners should know what they are doing in the lesson and why they are doing it. I mean, you wouldn’t bake a cake without knowing the kind of thing you’re after and you wouldn’t go on a journey without knowing the destination, but does the way you write this on your lesson plan or whiteboard really benefit anyone? It becomes a tick-box approach – something we need to move away from in education.


Oh by the way, did we all meet the lesson objectives?


Unlinked reading:

Didau, D. (2015). What if everything you knew about education was wrong? Carmarthen, Wales: Crown House Publishing Limited.

Hattie, J. (2012). Visible Learning for Teachers: Maximizing Impact on Learning. Oxon, UK: Routledge.

A year in the making…

A year ago I restarted my blog as a way to find my voice again. Blogging really has helped me to develop as a practitioner. Not only does it allow me to reflect on my practice, but it also means I have to read A LOT! I’m not complaining, but boy have I read. Reading more has given me more knowledge about education related topics. More knowledge has allowed me to be more critical of these topics and this has supported me to improve as a ‘research informed practitioner’. Moreover, I’ve been able to support others with accessing the research by writing about it in layman’s terms.


Upon starting my blog again, my intention was to write a blog per week, but I fell short of my target with a mere 49 (including this one). I’ve had nearly 12,000 views (which is a drop in the ocean compared to some, but I’m happy with it for the first year). This coming year will see me exceed my goal, with a weekly post, in addition to a new feature (coming soon). Furthermore, I am branching out to other social media platforms to increase the views. This isn’t about numbers so much, but about creating more dialogue around my posts and supporting others to access crucial information about learning.


In celebration of the 1st year, I’ve chosen a selection of key posts that I feel have had the biggest impact on both myself and others. Some have been popular and well read, others not so, but all very valuable:

  • My first blog postLess haste, less speed‘ – this was the start of my new blog and developed upon a theme that I had written about in a previous blog. The post questions why we are always in a rush to teach learners information and for them to make quick progress with it. This set the scene for the blog and it has been viewed 234 times.
  • A lot of effort went intoSchemes that make a difference‘ – in this post my aim was to support teachers with their planning by drawing upon a sound research base. I wrote this alongside a training session that I was planning. The post and the training session has been cited by many as being really useful to them. This post has been viewed 672 times.
  • My most popular post ‘Formative assessment – is it a silver bullet‘ – I think I almost broke the internet in the first 4 hours of it being published, with over 400 views. It drew upon research to provide a critical analysis of Wiliam’s 5 key strategies to formative assessment. Whilst my views have changed slightly, the post is one of my favourites and an easy read for understanding how to do formative assessment well. It has been viewed 1023 times.
  • My famous post ‘Observations – is the boot on the wrong foot‘ – This was published and within a couple of days, the TES FE editor contacted me to feature it in the paper for the following week. It is an alternative view of observation based upon my experiences with my daughter. Viewed 464 times.
  • The post that generated the most dialogue ‘Action research: A recipe for disaster?‘ – This post generated a lot of discussion on Twitter and WordPress (well, for me anyway). I don’t necessarily agree with everything I say in it, but I intended it to be thought-provoking and a little contentious – which it was. It has 479 views.
  • Most useful post for trainee teachers ‘Applied and simplified – Top 20 principles‘ – In my role as a teacher trainer in FE, I work with individuals who have fantastic subject knowledge, but lack pedagogical knowledge AND the time to separate the wheat from the chaff. Therefore, it is my job to support them in accessing key information, and what better than summarising a key piece of research from cognitive psychology. I have actually got my trainees doing a similar task (summarising key research) and feel that this is essential to their development. Viewed 174 times.


So stay tuned to my blog because it is only going to get better – here’s to another year!

Applied & Simplified Pt 3 – Top 20 Principles

My previous two posts here and here have explored the first 15 principles from the Coalition for Psychology in Schools and Education’s Top 20 Principles from Psychology for Teaching and Learning. Here I move on to the final 5 principles, once again trying to provide simple applications of each.

Principles 16-17 focus on how the classroom can best be managed.

Principles 18-20 focus on how to assess student progress.





When I was a youngster, my nan collected her spare change in a huge glass bottle for me. At the time, I think the bottle was probably my height – it was huge and made of thick, clear glass. Every so often she would allow me to pour the contents of the bottle out and count it. This was the fun part!  Once the money had been counted, the arduous task began. This involved getting the coins back into the bottle; grabbing a handful at a time and slowly releasing them into the bottle neck. The main bottle could hold what seemed like endless amounts, but getting the coins in was no easy task.


The more I did it, the more I realised that if I sorted the coins into small piles of the same type, I could become more efficient: each pile would slide in smoothly, rather than a handful of randomly shaped and sized coins fighting to get in through the bottleneck.


In 1956, George A. Miller asserted that our capacity for processing information is limited to seven, plus or minus two, pieces of information. This later led to Baddeley and Hitch’s working memory model. Essentially, working memory (WM) is the narrow bottleneck to the huge long-term memory we have. Working memory can only handle a limited amount of information at one time (much like the bottleneck can only handle a limited number of coins) and therefore, the more efficient our methods of teaching are, the more likely we are to minimise ‘overload’ in order to aid long-term memory (the endless bottom of the bottle).


Chunking information for learners seems an obvious way to do this, doesn’t it? How many of us do though? I am certainly guilty of trying to cram lots into lessons from time to time, leaving learners bamboozled and actually causing me more work later down the line. Here are some ideas for how you might ‘chunk’ the learning to support learners in processing information more effectively in lessons:


1. Firstly we need to understand what our learners already know. If we can link the new information to this, then we can reduce the burden on WM. Using multiple choice quizzes at the start of lessons can provide you with some information on this. Furthermore, knowing other things about your learners is always useful for analogies and metaphors.

2. Secondly we should try to chunk information so as not to burden the WM of learners (we can do this best following the above). This might include:

  • organising key concepts visually for learners in advance of the teaching (advance organisers). For example, showing how the concepts/components of a topic relate to each other to form the whole.
  • breaking concepts down into their component parts (chunks) for delivery. For example, breaking a skill down into its simplest form before building each part together once mastered.
  • using mnemonics – further information can be found in a previous post here
  • using analogies and metaphors to help learners to link new information to prior knowledge. As mentioned above, the more we know about what our learners know, the more we will be able to link new learning to it. More information can be found here
  • using visual representations of things being explained, so that both the visual (visuo-spatial) and the auditory (phonological) information can ease the burden on the WM. See further information here

3. Finally, we need to be conducting regular formative assessment to ensure that we are monitoring the load on learners’ WM. We can then determine whether further support is required to address misconceptions, or whether we can move forward with additional learning. A post on formative assessment can be found here.


So when attempting to maximise the impact of your teaching, try thinking about getting a load of coins into a bottle*

*The astute of you may have noticed what I’ve done in this post…

Marking musings

The recent ‘marking madness’ article in the Guardian got me thinking yesterday. It’s funny because marking policies are somewhat paradoxical, whereby we (the sensible ones) know it’s nonsense to be using different colours/stamps/stickers etc, whilst also realising how meaningless it is to mark each and every piece of work and evidence every bit of feedback. However, all it takes is that one individual (usually more senior) to say “don’t you think it’s important to give feedback?” or “how will the learner remember it if you don’t write it down?” Then there you have it: you’re made to feel guilty about not going over the top with marking, and a mad marking policy seems well justified.


I try my damnedest not to mark work. Not because I want my learners to fail, or because I don’t value giving them feedback, but because I want a life. I give loads of verbal feedback in class, they get some from one another and have the opportunity to reflect often – this is not documented. I set up automated feedback for any homework (multiple choice questions via Google Forms and Flubaroo). I frequently use the information gathered from the results of this to plan my next lesson (i.e. where there are common misconceptions found, I ensure that these ‘gaps are filled’ in the following lesson). Failing that, I get the learners to give feedback to one another using online forums, but not on my time – that’s homework too. I then have a browse to give me information on their understanding and use it in subsequent lessons. Other than this, there’s the odd draft here and there and the standard summative assessments that have to be marked – and there you have it, my own marking policy. I have autonomy.
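For the technically minded, the automated-feedback idea above – answers collected via a form, scored against a key, and the most-missed questions surfaced for next lesson’s planning – can be sketched in a few lines. This is a hypothetical illustration only (the learner names, questions and answer key are all invented); Flubaroo does the equivalent inside a Google Sheet.

```python
# Hypothetical sketch of the auto-marking workflow: score each learner's
# multiple-choice answers against a key, then flag the question most
# learners got wrong so the next lesson can target that misconception.
from collections import Counter

ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A"}  # invented example key

responses = {  # invented example submissions
    "Alice": {"Q1": "B", "Q2": "C", "Q3": "A"},
    "Ben":   {"Q1": "B", "Q2": "D", "Q3": "C"},
    "Cara":  {"Q1": "A", "Q2": "C", "Q3": "A"},
}

def mark(responses, key):
    """Return each learner's score and a tally of wrong answers per question."""
    scores = {}
    wrong = Counter()
    for learner, answers in responses.items():
        correct = 0
        for question, right_answer in key.items():
            if answers.get(question) == right_answer:
                correct += 1
            else:
                wrong[question] += 1
        scores[learner] = correct
    return scores, wrong

scores, wrong = mark(responses, ANSWER_KEY)
print(scores)                # marks out of 3 for each learner
print(wrong.most_common(1))  # the question to reteach first
```

The point is not the tooling but the loop: the per-question tally tells me where the common misconceptions are, so the ‘gap-filling’ happens in the next lesson rather than in the margins of an exercise book.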


I know that it’s slightly different working at a college to a school, but why should it be? Learners do not need or benefit from excessive marking. Surely there comes a saturation point where marked work, each piece with different actions to be taken in order to progress, is so frequent that learners don’t end up progressing at all.


We know that feedback can be powerful – just look at the work of Hattie and Timperley, Wiliam, and Kluger and DeNisi, to name a few. However, Kluger and DeNisi suggest that feedback given in the absence of prior goals is less likely to have a positive impact. When thinking about outrageous marking strategies, I’m sceptical as to how much of the feedback/marking is linked to learning goals and intentions, simply because it becomes mechanistic – a process whereby teachers feel obliged to put something, regardless of its meaning. Hattie highlights further problems with marking, informing us that:

‘When feedback is given in writing, some students

  • have difficulty understanding the points the teacher is trying to make
  • are unable to read the teacher’s writing
  • can’t process the feedback and understand what to do next.’

So basically, marking is good, but ain’t all that! When you’re made to feel guilty about not marking enough, try to think of the impact on the learner, not the impact on the big box that needs ticking. You’re a professional.


On the topic of coloured pens, I’ve never understood it. I literally pick up the first pen in sight (*note to self: whiteboard markers aren’t good on paper). Don’t even get me started on the purple pen of progress!