By David Petrie
> There is something of a reluctance in printed materials to engage with the idea of the “bad model”. Most books contain model performances of speaking and writing so that the students can look at these and see what their target is, following a product approach to some extent, and perhaps the reluctance lies in the apprehension that the bad model might somehow lead learners astray. However, I have found that learners tend to enjoy working with bad models. It is slightly liberating to see a bad performance and to know that you can do better than that, and in this article I will explore some of the ways we can use bad models in our exam class teaching.
Grammar and Lexis
One of the most common encounters with the bad model is through correction sentences. Typically the coursebook presents us with eight sentences, some or all of which contain errors related to the target language, like this one on countability: “A large number of equipment are needed to camp at the bottom of the Canyon” (Capel & Sharp, 2014). These are generally quite motivating and fun: learners tend to enjoy trying to ferret out the mistakes, and the activity can easily be adapted into a more physical or competitive form, by making it into a board race, for example.
This principle can be extended to vocabulary, though this is less often seen, possibly because it tends to be clumsier in approach – a contextual sentence is needed to make the error apparent, and even then it isn’t always clear what the mistake might be, as the following examples show:
- “She’s such a sensible child”
- “She never goes outside without a hat and scarf. She’s such a sensible child.”
- “She never goes outside without a hat and scarf. She’s such a sensitive child.”
The target error here is the confusion between “sensible” and “sensitive”. However, sentence one gives no context, so effectively there is no mistake. Sentences two and three do give a context, but both are arguably correct, as the context allows for either choice.
One of the ways correction sentences do lend themselves to vocabulary is with co-text; for example with collocations, dependent prepositions, or fixed expressions:
- He was acutely sensitive to criticism.
- She was very sensitive to other people’s problems.
This focus on semantic difference as well as co-text is tested in the multiple-choice cloze tasks in the Cambridge exams. Giving students completed versions of this task, where the wrong answers have been selected, can be a useful way of shifting the focus away from “How many did you get right?” towards “How many did this person get right, and why do you think that?”
Working with Exam Tasks
To an extent giving learners an exam task is an invitation for the learners to focus on the product, or the outcome, namely getting the correct answers. It is more difficult to focus learners on the process of exam tasks and to help them think about how best to approach tasks and why answers are right or wrong.
Using a bad model can help with this. Giving learners a completed example exam task in which some or all of the answers are wrong, and asking them to find the incorrect answers and say why they are wrong, forces learners to examine the process of the task in a much deeper way. This can be done to highlight the task focus, to highlight the task instructions, or to focus on language awareness.
In this example of a Keyword Transformation task from Complete Advanced, the error focuses the learners on the rules of the task – not changing the keyword, keeping the same meaning, and using between three and six words: (Brook-Hart & Haines, 2014)
I would often go cycling with my father when I was a child.
My father didn’t use to enjoy taking me cycling with him when I was a child.
With a full set of keyword transformation questions, the issues could be spread out among the questions, rather than grouped together.
This principle can be used with most other exam tasks, particularly with receptive skills, such as text insertion tasks in reading, notetaking in listening, and cloze tasks. Again, the focus is on making the learners more aware of what the common mistakes are and how to avoid them.
The bad model answer comes into its own with productive skills, and there is a lot of fun to be had in generating these models and having students review them. First Certificate Expert (Bell & Gower, 2008) has a letter of application task for a position as a lifeguard, and the model I ask my students to review begins as follows:
Hi Mr. Lifeguard Manager,
I’ve got a younger sister and a younger brother and I look after them all the time so I’m really good with children you see. Also I can swim quite well, so I think I’d be super for the job as lifeguard assistant because you can need to swim for that, don’t you? 🙂
There are obvious deficiencies here, most notably in terms of register and text structure, and the rest of the model goes on in a similar fashion, but with more language inaccuracies as well. The main focus, though, is organisation and register, and this is what the lesson goes on to look at, using the model as a transformation task and leading learners through the steps to produce a more appropriate formal letter of application.
This approach can be used to focus students on specifics of the assessment criteria, particularly if you notice your class is having issues with one particular area. For example, an otherwise perfectly organised essay might repeat vocabulary and rely only on basic grammatical structures; or an article might have perfect organisation, grammar and lexis – but deal only with the theme of the question and fail to address the key content points.
As a further way of focusing learners on the assessment criteria, one challenge is to give them the task of deliberately generating a sub-standard answer. The Cambridge English handbooks give the criteria for the marking bands and it is quite a challenge for students to try and create a band one response – it is a great way of helping them to think about the difference between “basic linking words and a limited number of cohesive devices” and “using a variety of cohesive devices and organisational patterns to generally good effect” (Cambridge English Language Assessment, 2015).
As with writing, so with speaking. The problem with speaking, however, is that it is ephemeral, which makes it difficult to generate models for students to evaluate. That said, if you have more than one class, or can co-ordinate with colleagues who also have exam classes of a similar level, you could record your students attempting a task. To create models that focus on specific aspects of the assessment criteria, it might be fun to make these with your colleagues. You can then use the videos to demonstrate inappropriate responses and highlight the criteria the models fail to meet, before your class goes on to create an improved version of the video. Students can also record their attempts on their smartphones to review in their own time at home.
This isn’t to say that students should only be exposed to bad models of performance. It is obviously useful for them to see what they are aiming for as well as what not to do. There are, however, plenty of good models out there: most exam coursebooks have a “writing reference” at the back of the book, and Cambridge English speaking videos are available to watch online. The bad model is liberating, though, because it shows students not only what they should specifically avoid doing, but also what they can get away with. Try it and see what you think!
Please see the digital version of the IH Journal for references.