Humanising Language Teaching
SHORT ARTICLES

How Real is the Language in Grammar Tests?

Ilana Salem, Israel

Ilana Salem is a high-school EFL teacher and teacher educator in Israel. She holds an M.A. in Applied Linguistics and TESOL from the University of Leicester. Her research interests include EFL-teacher language awareness, the use of corpora in lexico-grammatical analysis, and classroom testing.

Menu

Introduction
Co-text compatibility
Real life usability
Mental processing
References

Introduction

Do you teach EFL (English as a foreign language) in a non-English-speaking country? Do you give traditional grammar tests and quizzes? Then you might find this discussion helpful. In some non-English-speaking countries, EFL learners’ encounters with naturally used English are rare. For such students, any exposure to ‘real’ English is a valuable language-learning opportunity. In this article I show examples of test items which resemble ‘real’ language, contrasted with counterparts which represent extreme cases of stilted, artificially created sentences. The examples were collected from tests given to Hebrew-speaking primary and high school students in Israel. They are discrete-point items, which appeared in tests with no additional context or illustrations.

This article aims at equipping the EFL teacher – the classroom-test writer – with tools for evaluating her own test items in terms of their language authenticity (realness). ‘Authenticity’, as used in the testing literature, normally refers to ‘the degree to which test materials and test conditions succeed in replicating those in the target use situation’ (McNamara 2000: 131). I have narrowed this concept by focusing on a few aspects of language authenticity, as introduced below. I will draw your attention to particular sentences by setting them in bold type and preceding them with bracketed numbers. Of interest will be not only the prompts ‘p’ (the input to be manipulated by the test-taker), but also the target responses ‘tr’ (the language which the test-taker is supposed to produce). A smiley (☺) will indicate a positive example of language authenticity. Let us now consider the following test items.

i Change into Past Tense.
p (1) The house can be clean.
tr The house could be clean.

i Write the verbs in the correct tense (Active or Passive).
p __________the cars __________ (buy) already?
tr (2) Have the cars been bought already?

Would you consider sentences 1 and 2 highly frequent in the English language? Presumably, computerized data (O’Keeffe et al. 2007) could provide empirically supported information about their statistical markedness, or uncommonness; but since many teachers lack convenient access to corpus data, they can use their linguistic awareness and intuitions to judge these extreme cases of statistically infrequent language strings. My ‘gut-feeling’ is that it is not easy to imagine any linguistic or situational context in which they would be produced. Now let me suggest two characteristics of (grammatical but) not very useful sentences: insufficient co-text compatibility, and low real-life usability.
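For teachers who do have a computer at hand, the kind of frequency check mentioned above can be approximated with a few lines of code. The sketch below is illustrative only: it counts how often a word string occurs in a tokenized corpus, and the tiny invented word list stands in for a real reference corpus (such as the BNC or a corpus distributed with a toolkit like NLTK).

```python
# A tiny stand-in corpus; in practice one would load a large reference
# corpus here instead of this short invented sample.
corpus = (
    "the house could be clean . "
    "have the cars been bought already ? "
    "of course it is true . i saw it myself . "
    "she has not phoned , has she ?"
).split()

def ngram_count(tokens, phrase):
    """Count how often `phrase` (a list of tokens) occurs in `tokens`."""
    n = len(phrase)
    return sum(1 for i in range(len(tokens) - n + 1)
               if tokens[i:i + n] == phrase)

# How often does each string occur in the sample corpus?
print(ngram_count(corpus, ["could", "be", "clean"]))  # 1
print(ngram_count(corpus, ["the"]))                   # 2
```

A string that scores zero (or near zero) in a large corpus would be a candidate for the ‘statistically marked’ sentences discussed in this article; with a toy corpus like this one, of course, the counts only demonstrate the mechanism.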

Co-text compatibility

This feature refers to the ‘cohesive’ potential of the test item. In discourse analysis, text cohesion is characterized by ‘ties and connections which exist within texts’ (Yule 2006: 125). For example, can items 3 and 4 be conveniently fitted into any wider linguistic co-text? In other words, what could possibly precede or follow such utterances in real language?

i Ask wh-questions about the underlined words.
p (3) Five monkeys were in the zoo last year.
tr Who/What …, Where …, When …

i Choose the correct form.
p Oh, look! Somebody (breaks, breaking, has broken, had broken) the window.
tr (4) Oh, look! Somebody (breaks, breaking, has broken, had broken) the window.

It seems that the cohesive potential of 3 and 4 could be increased by some alterations. One possible way of fixing 3 would be to add an initial ‘the’, which would create a reference to some previous mention of ‘monkeys’. In 4, the second sentence should begin with ‘the window’, which is the focus of the speaker’s attention, rather than ‘somebody’, which does not carry much meaning or reference. Alternatively, this item could be replaced by: ‘Oh look, Tommy has built a tower.’

Items 3 and 4 can be contrasted with 5 and 6 below. Sentence 5, in spite of being a discrete point item, can easily fit into a conversational co-text; it could have been preceded by something like: ‘Is it true (that…)?’. Item 6 is embedded in its own co-text and thereby the co-text feature has been accounted for.

(5) Of course it’s true. I saw it (alone, lonely, in myself, myself).
(6)
What would be said in the following situations? Use modals.
You would like to borrow your friend’s dictionary.
You say: “_____________________________________”

Real life usability

In addition to their co-text harmony, items 5 and 6 have high real-life usability. This means that they are practical communicative tokens. Other examples of highly pragmatic sentences are the multiple-choice items 7 and 8:

(7) My parents (were, wasn’t, had, was) very happy to see us.
(8) She hasn’t phoned, (true, is she, has she)?

On the other hand, some test items present or elicit sentences which are not pragmatically useful. For instance, can you think of any non-pedagogic situation in which sentences 9 and 10 would be said or written? Let us admit that it is not an easy task.

i Write the verbs in the Present Continuous.
p You _____________ (open) the door.
tr (9) You are opening the door.

i Translate from Hebrew into English.
p hadelet chayevet lehina’el ksheta’azvi et habayit. (In the original, Hebrew script was used.)
tr (10) The door must be locked when you leave the house.

So, we have seen that two important features of a test item’s language authenticity are the item’s possible compatibility with adjacent co-text, and its usability potential. These two features often go hand in hand: a sentence high in real-life usability frequently also stands the discourse-analysis test of co-text compatibility.

Mental processing

From here my linguistic speculations take a further step, focusing on the learner. When learners are presented with the prompt, they start interacting with it, attempting to work out the target response. Does the thinking process required to manipulate the test item resemble the one employed by L2 (second language) users in the course of actual speaking or writing? For example, in the process of communication, does one start by producing a sentence and then proceed to look for a time expression, as in 11?

i (11) Add time expression.
p I go to America _______________.
tr (e.g.) every year

i (12) Make Passive questions about the underlined words.
p Gila had to write her homework again because she hadn’t written it clearly.
tr Why did Gila’s homework have to be written again?

Similarly, in their endeavor to produce meaningful utterances, do L2 speakers employ multiple transformations (passivization and question formation), as they are requested to do in 12? I maintain that none of these strategies, which might have some pedagogic justification, would normally be applied in L2 users’ spoken or written production. Conversely, other test items are high on the mental-processing scale. In ☺ (13) ‘The Second World War ended ___ 1945.’ and in ☺ (6) above, the blanks appear in places where an L2 user might be expected to stop and search for an appropriate word or expression. These assumptions are based on the linguistic intuitions of a veteran language learner and teacher, and it would be interesting to verify them empirically.

To sum up: when writing our grammar-test items, let us avoid sentences with low contextualization and usability potential. As far as possible, the task format should not engage the test-taker in mental manipulations which are alien to L2 users’ strategies during communicative language production. My sample of 200 test tasks showed that some task types, such as transformations (1 and 12) or the co-occurrence of a verb tense with a particular time expression (2 and 11), tend to produce unreal sentences and engage the test-taker in mental processing which is non-existent in real-life L2 output. Some grammar topics, such as modals (6) or prepositions (13), on the other hand, conveniently form authentic tokens.

As teachers, we put a lot of effort into constructing classroom tests and quizzes, not to mention the time spent on marking. We want our tests to be clear and doable, to provide a clear picture of our students’ achievement, to motivate further learning and to be administered in a positive and peaceful classroom climate. Tests based on these fundamental requirements (Hughes 2002) can be further improved if additional attention is paid to the language authenticity of test items.

References

McNamara, T. (2000) Language Testing. Oxford: OUP.

O’Keeffe, A. et al. (2007) From Corpus to Classroom. Cambridge: CUP.

Hughes, A. (2002) Testing for Language Teachers. Cambridge: CUP.

Yule, G. (2006) The Study of Language. Cambridge: CUP.

--- 

    © HLT Magazine and Pilgrims