- IH Journal - https://ihjournal.com -

Assessing Speaking

Sari Luoma

Cambridge University Press


Reviewed by Nick Kiley, IH Hanoi

Several years ago, when I had been teaching for only a short time, I was sent to a company to do some placement testing of the spoken language of various soon-to-be-recipients of English classes provided by our school. There were several STBRs, so two teachers were sent to the offices, where we spent a happy afternoon listening to people’s accounts of their families. However, disaster soon struck: it transpired that a representative of the company had snuck into both tests, received two slightly different results, and was demanding to know why. That got me thinking about the most effective ways of assessing speaking, and it is something I pondered even longer as an Academic Manager. Next time, I’ll throw a copy of this book at the company rep… and not in the metaphorical sense…

In my time teaching English, I’ve been asked / told (mostly told) to carry out speaking tests in a variety of ways, almost all different, and almost all unsatisfactory to me. So, as this book winged its way toward me, I was quite hopeful that I might finally have found some answers to my questions. I think I was barking up the wrong tree (albeit a tree that had been pulped and transformed into written matter). I should, of course, have realised I was never going to find a magic answer in my search for a perfect speaking test; what I found instead was a theoretical discussion of all things related to testing speaking ability.

This is not a book for those looking for speaking-test ideas to give to their students. This is a book for those looking into the complexities of testing speaking, the theory behind it and a discussion of the merits of various ways of testing. It’s a very detailed, and heavily referenced, book that does not make for light bedtime reading, and it is certainly not for the Academic Manager to grasp as they fly off to a last-minute speaking test in the hope that in the taxi they’ll be able to find a couple of barnstorming ideas. For those sitting down to begin their DELTA extended assignment (is that still happening?) or to write that paper on speaking assessment for the conference, this could be worth dipping into (and I do mean dipping – I tried to swim the full length in a couple of evenings, and my brain is still protesting…).

We start off with a discussion of applied linguistics and the problem of defining what constitutes ‘good speaking’. What do we focus on… grammar? Pronunciation? Do we tend to look at grammar in its written form, ignoring the differences with spoken grammar? And for me, therein lies the rub. What is good / bad speaking? I know several people, some of them teachers, who interpret this in different ways. So, I soldiered on through the discussion of what speaking involves, for example ‘meaningful interaction’, and began critiquing the various speaking tests I’d used for their varying degrees of artificiality.

Next, a look at the decisions that need to be made when creating tasks to test speaking. Again, we’re in the realms of theory, and the discussion is in danger of wandering off without us. However, we catch up with it and find ourselves at the next decision-making stage. We’ve discussed what good speaking might be, and we’ve asked searching questions about what form the task will take; now we need to think about scoring criteria. Having agreed that it is difficult to define ‘good speaking’, we now concern ourselves with the difficulties of describing spoken language in short descriptors. The author attempts to overcome this problem by looking at several current examples and discussing each in turn, before examining the criteria for developing these scales. To me, this again felt like a discussion of the problems faced without any movement forward in terms of answers.

We are then led through discussions of the models of language we reference test results against, relating these to test design and again looking at examples. We then move deftly on to how we design the specifications for our speaking test. Here I found one of the most useful parts of the book: a list of questions for the consideration of those writing ‘construct specifications’. (Don’t ask – this book doesn’t go light on the jargon.) I felt the book could have benefited from more ‘checklist’-style analysis of test design. I suppose I could make my own checklist from this list of questions, but in terms of putting the theory into practice, more of these would have been useful.

We move on to look at different types of task and the issues involved in speaking-task design, but again I found this overly theoretical and in need of something more. A look at reliability and validity then leads to a brief look at alternatives in assessing speaking – the section I’d been waiting for – which was disappointingly short.

All in all, I found the book a little unsatisfying. There was a great deal of theoretical discussion, with an in-depth treatment of the problems of assessing speaking, but I was hoping, maybe a little ambitiously, for a little more in the way of suggested answers. Several seeds were certainly sown, and some parts of the book got me thinking about the way tests are conducted in my current context, and the potential changes that could be made, but this is not the book for a busy Academic Manager looking for quick fixes. It would be a worthy addition to a DELTA library, or for someone looking in much more depth into the complexities of testing speaking. It’s certainly more academic than I was looking for, and it is punctuated with several diagrams of the kind that have arrows pointing in various directions but are hard to follow. I think I will send a copy to the company rep and ask her to see for herself why two completely accurate and identical test scores were not achieved.
