I Made Sujana [Universitas Mataram]
Abstract. Following the development of language teaching principles from grammar-based to communicative, language testing has also moved from testing language elements separately to testing language skills contextually. This movement is reflected in language testing procedures. In testing listening, the focus has shifted from testing sound discrimination and separate grammatical and lexical aspects of a language to testing broader communication skills. Listening tests now try to capture the naturalness and authenticity of speech, in addition to including various text types and testing techniques.
The development of language teaching approaches from the traditional to the communicative has had an impact on the development of language testing. Language testing has moved from grammar-based language testing (GBLT), which tends to focus on testing one's ability to understand language elements separately, to communicative language testing (CLT), which tries to focus on testing one's ability to use the language contextually. In the testing tradition, there are also continuum movements: from discrete-point to integrative testing; from indirect to direct testing; from norm-referenced to criterion-referenced testing; and from objective to subjective testing.
In assessing listening abilities, the tradition of testing has moved from testing sound discrimination, suprasegmental features, and separate linguistic components to testing the understanding of broader communication skills. Since the purpose of teaching listening is to enable students to understand native speech of English, the test should be designed to relate closely to real-life communication, and the selection of tasks and question types should reflect the tasks performed in daily-life situations. In designing listening tests, the designer should consider as much as possible the nature of spoken language (spontaneity, redundancy, hesitation, false starts, etc.). The testing of separate linguistic aspects, however, is still used for remedial purposes.
This paper mainly focuses on the principles of constructing listening comprehension tests for undergraduate students by discussing the nature of spoken language and the aspects to consider in developing listening tests. However, to provide a more comprehensive understanding of the construction of listening tests, this paper first reviews the movement in language testing from GBLT to CLT.
- Development in Language Testing: From Grammar-Based to Communicative Language Testing
The GBLT was adopted from traditional approaches such as the Grammar Translation, Audiolingual, Cognitive, and Direct Methods, which focus on grammar as content and expose learners to input in the target language that concentrates on one aspect of the grammar system at a time. The GBLT claims that since language is built of sounds, morphemes, sentences, and rules, the specific elements of the language should be taught and tested separately. The followers of the grammar-based theory assumed that grammatical or structural aspects of the language are the most useful to be imparted and that the ability to use language (functional ability) will automatically arise from grammatical knowledge (Krahnke, 1987). This idea is in accordance with Widdowson's (1988) comment on the importance of teaching separate language aspects. He argues that in language teaching what the learner needs is a basic knowledge of the language system and of the lexical and grammatical forms capable of constituting a core linguistic competence. This competence will later provide learners with the essential basis for communicative behavior when they find themselves in a situation that requires them to use the language to communicate. This belief implies that what needs to be taught in language teaching is knowledge of the language system; its exploitation for communicative purposes can be left to the learner.
The belief that the learner has to be taught knowledge of the language system had an impact on what knowledge needs to be assessed in language testing. Language testing in this era tended to focus on testing separate language aspects, that is, testing one element of the language at a time, and to use indirect testing (testing the abilities that underlie language performance), such as testing writing through grammar or testing listening through minimal pairs and sound discrimination. Objective testing was preferred in GBLT because it made it possible to cover representative samples and thus maintain the validity (especially content validity) and reliability of the test. In addition, such tests are quantifiable, so it is easy to calculate their validity and reliability. These two issues (validity and reliability) were the main concerns in this era.
However, along with the development of language teaching principles, the GBLT was criticized. The results of discrete-point testing (testing one language element at a time), indirect testing (testing abilities underlying language skills), and objective testing did not reflect one's ability to use language and had negative washback effects on both teaching and learning. This criticism has led to the application of more communicative testing, adopted from communicative language teaching approaches.
As noted earlier, the communicative approach was born as a reaction to the failure of traditional language approaches, which did not bring learners to acquire the target language; teaching language aspects separately does not guarantee the ability to use language in real-life contexts. Along with the adoption of the communicative approach in language teaching, the method of testing has also changed: it has moved from testing separate language aspects to testing the ability to use language for communication. Communicative language teaching emphasizes the assessment of a learner's proficiency, that is, potential success in the use of language in some general sense (Morrow cited in Weir, 1990). It is based on the theories underlying current language teaching: it is not enough to impart just grammatical knowledge to students. What needs to be developed in language teaching using the communicative approach is the ability to use the language for communication. The learner needs to learn appropriate behavior during the course, since one cannot count on the learner acquiring it later simply by reference to linguistic knowledge. The belief here is that communicative competence needs to be expressly taught: the learner cannot be left to his own devices in developing an ability to communicate (Widdowson, 1988).
Savignon, in line with Chaplan (cited in Weir, 1990), criticized isolated language tests, such as tests of language elements, which cannot serve as a sufficient predictor of communicative skills. Performing the knowledge of the components of language in real-life situations requires new and appropriate testing methods (Morrow cited in Weir, 1990). Bachman (1991) argues that communicative ability can be assessed by constructing tests that fulfill at least four criteria: (1) tests should create an information gap; (2) the tasks of the test should be dependent on one another; (3) there should be an integration between the test tasks and the test content; and (4) the tests should measure a broader range of language abilities (such as cohesion, function, and sociolinguistic appropriateness).
This movement also has consequences for the development of language testing procedures. There are continuum movements in the language testing tradition: from discrete-point testing (testing a single independent skill, decontextualized) to integrative testing (testing learners' capacity to use many bits of the language at the same time, contextualized); from indirect testing (testing the language aspects that underlie language skills) to direct testing (testing directly what needs to be measured); from norm-referenced assessment (testing how well a student achieves in relation to others) to criterion-referenced assessment (testing what students can do or achieve without being compared to other students); and from objective testing (tests providing options to be chosen) to subjective testing (tests of production, in which students produce something).
To be communicative, language tests should replicate real-life situations and apply integrative, direct, criterion-referenced, and subjective testing procedures. This expectation, however, raises a number of problems in both constructing and scoring the tests. As we can observe, the application of communicative language testing lags far behind that of communicative language teaching.
- The Nature of Spoken Text and Listening Tests
Along with the movement in language testing traditions, the tradition of testing listening has also changed. Discrete-point and indirect testing are less preferred in testing one's ability to understand spoken language since they focus on the assessment of the linguistic components of a language; such testing procedures are now used only for remedial work on certain linguistic aspects. Nowadays, listening tests tend to assess broader communication skills (comprehension) by applying integrative and direct testing procedures. In this case, broader communication skill involves more than the understanding of bits and pieces of the language; it is concerned with the exchange of facts and ideas as well as interpreting the speaker's intention (Madsen, 1983). The only way of assessing communicative performance is through procedures that replicate as closely as possible the actual circumstances under which the students will use the language. In testing communication, there should be an integration of the linguistic, situational, cultural, and affective constraints which interact in the process of communication (Brindley, 1986; Rea, 1985).
The construction of listening tests is in some ways similar to the preparation of listening teaching materials. Therefore, an effective way of developing a listening test is through carefully selected teaching materials: the task of the test constructor is to adapt existing teaching materials into listening tests that meet the criteria of a good listening test. In selecting the listening material, the test developer should be aware of the aspects involved in testing listening; in other words, s/he should know the nature of spoken language so that s/he can produce authentic and communicative listening tests. Heaton (1988) characterizes the aspects involved in understanding spoken language as follows: (1) Repeating information. Spoken language contains a lot of redundancy, and meaning is usually reinforced or repeated in several ways. (2) Pausing. Speakers often hesitate and pause when they speak, filling the gaps with sounds such as er, uh, and em. Sometimes they start a sentence, change their mind, and start it again. Such false starts and hesitation features are an important part of spoken language, helping listeners to follow what is being said. (3) Differences between speaking and writing. The structure of spoken language differs from that of written language: written language is organized in sentences, while spoken language is organized in clauses, often connected with conjunctions like and, but, and so. In spontaneous speech, complex sentences are rarely used. (4) What we remember. People remember the general meaning of a sentence rather than the actual words. (5) The importance of context. We use language in a certain situation for a particular purpose, and the meaning of an utterance depends entirely on the context in which it is spoken. (6) Using recorded materials. Non-verbal behavior such as facial movements and gestures is very important in helping listeners to understand the messages speakers are giving, but it is absent from recordings. In addition, in testing listening with recorded materials, listeners cannot move backwards and forwards through the text as they can in taking other tests (Hughes, 1993).
Awareness of the nature of spoken language closely relates to what needs to be considered in constructing or developing listening tests. The listening test should be designed to be as natural as possible. Important information should be reinforced in several ways, as it normally is in real-life situations. To help listeners follow what is being said, the speech or conversation should include hesitation features. Be aware that the structures of spoken and written language are different, and avoid using complex sentences in the speech. In constructing comprehension tasks, avoid questions that expect students to write the actual words, because listeners tend to remember the general meaning of a sentence. In addition, it is important to set up the context of situation in designing the listening test, to help students make predictions and become active listeners.
- Basic Considerations in Developing Listening Test
The first consideration to be taken into account is defining the instructional objectives. One purpose of a test is to measure how successful individual students, groups of students, or courses have been in achieving instructional objectives. These instructional objectives must be used as a guideline in constructing the listening tests.
The second point to consider is the expected learning outcomes, which should be in harmony with the instructional objectives. Based on the instructional objectives above, the listening test designer needs to define the learning outcomes the listeners are expected to demonstrate in the tests. The listening skill taxonomy adapted from Weir (1993) and Rost (1990) can be used as a guideline in considering the expected learning outcomes. The taxonomy is as follows:
- Orienting oneself to a spoken text: (a) Identifying the purpose/genre of a spoken text; (b) Identifying the topic; (c) Identifying the broad roles and relationships of the participants.
- Identifying the main ideas of a spoken text: (a) Distinguishing main ideas from supporting ideas; (b) Distinguishing facts from examples; (c) Distinguishing facts from opinions when explicitly stated in the text.
- Extracting specific information from a spoken text: (a) Extracting key details explicitly stated in the text; (b) Identifying key vocabulary items.
- Understanding discourse structure and organization: (a) Following discourse structure; (b) Identifying key discourse/cohesive markers; (c) Tracing the development of an argument.
- Understanding meaning not explicitly stated: (a) Relating utterances to the social/situational context; (b) Identifying the speaker's attitude/emotional state; (c) Recognizing the communicative functions of stress/intonation patterns; (d) Recognizing the speaker's illocutionary intent; (e) Deducing the meaning of unfamiliar words; (f) Evaluating the adequacy of the information provided; (g) Using information from the discourse to make reasonable predictions (adapted from Weir, 1993; Rost, 1990).
In constructing the listening tests, it is necessary to consider representativeness of these learning outcomes in a listening test to improve the (content) validity of the test and the beneficial washback effects.
The next step is the selection of samples of speech and text types. At this stage the test designer should decide whether to use recordings or live presentation, after considering the advantages and disadvantages of each. Ready-made samples of authentic speech can easily be found; possible sources include radio broadcasts and published listening teaching materials. Suitable passages may be of various lengths, depending on what is being tested. The texts may take the form of monologues, dialogues, multi-participant conversations, announcements, talks or lectures, instructions, directions, etc.
The fourth stage is choosing the most appropriate task and question types to assess the desired learning outcomes. Various types of tasks can be used in testing listening abilities, such as picture-based, form-filling, information-matching, open-question, and gap-filling tasks. These tasks can be packaged in various techniques such as multiple choice, short answer, note-taking, summary, flowchart completion, table completion, yes/no/no-information statements, matching, and sentence completion. All of these must be carefully considered in order to fulfill the criteria of a good test and to produce beneficial washback effects on teaching and learning.
- Possible Techniques for Testing Listening for Undergraduate Students
The general objective of the series of listening courses at English Education Department FKIP University of Mataram is to enable students, without obvious difficulty, to understand native speech of English delivered at normal speed either orally or through media. The courses are divided into four consecutive, graded sections, each with its own objectives and emphasis. The teaching materials consist of various types of English discourse such as dialogues, narratives, public speeches, radio/TV broadcasts, etc.
From the objective of the listening courses, it is clear that the focus of the teaching is on comprehension, that is to say, how listeners understand spoken language. The assessment of the students' achievement should, therefore, consist of items that measure students' ability to comprehend spoken discourse. This can be achieved through the inclusion of various text types and techniques.
The listening tests used to measure students' achievement at UNRAM are mostly selected from published materials. The task of the lecturers in this case is to select the published materials and then adapt them to meet the criteria for constructing listening tests. The following are samples of text types and techniques commonly used for testing students' achievement at the English Department FKIP UNRAM. Samples of the tests can be seen in the appendices.
- Understanding Statements, Questions, and Short Conversations
Since listening to long conversations and talks can be very demanding on students, it is generally more appropriate for students in the early stages (e.g. Listening I & II) to listen to simple statements, questions, and short conversations, on which the questions or pictures are based.
Short statement and question items are designed to measure students' ability to understand short samples of speech and to deal with a variety of lexical, grammatical, and phonological signals. Since they are short, they do not resemble natural discourse: the characteristics of real-life speech, such as spontaneity, redundancy, hesitation and false starts, context, and ungrammatical forms, are absent (Heaton, 1988; 1991).
Spoken: “She used to take two tablets a day, but recently the doctor doubled the dosage.”
Written: A. Now she takes one tablet.
B. Now she takes two tablets.
C. Now she takes three tablets.
D. Now she takes four tablets.
Spoken: “How did you go to Surabaya?”
Written: A. No, I didn’t.
B. I am fine, thanks.
C. I went last night.
D. By plane.
These short statement items were used in the old format of Section A of the TOEFL Listening Comprehension test. However, along with the development of language testing principles, since 1995 these short statement items have been omitted and replaced with short conversations, or conversational exchanges. In a conversational exchange, two people (usually a man and a woman) converse, and a third person then asks a question. The students answer by choosing the printed option (A, B, C, or D) that best answers the question they have heard.
- Short Conversation:
Man: Let’s go to the dance at the Student Center on Friday.
Woman: I’d like to, but I’m going to a lecture. Thanks for asking me.
Third Voice: What is the woman going to do on Friday?
A. Go to a dance
B. Go to the Student Center
C. Go to a lecture
D. Stay at home
These short conversations can be used for testing students' ability at various levels, depending on the complexity of the grammatical and lexical items and the kinds of tasks (listening for direct information, drawing conclusions, making inferences, making predictions, etc.).
- Understanding Extended Conversations
In extended conversations, two or more people interact to discuss a particular topic. Along with the development of language testing, which expects tests to consist of materials closely related to real-life situations, these extended conversations are preferred for testing students at higher stages (e.g. Listening III & IV) since they can provide authentic samples of real-life speech. Because they are long, they give students more opportunity to listen to natural discourse as they find it in real-life situations: it is spontaneous, the important information is reinforced in several ways, there are hesitations and false starts, and it is contextualized.
From these kinds of texts, various forms of tasks can be developed, such as picture-based tasks; information-transfer tasks such as filling in a form, completing a diagram, or following directions; information-matching tasks (multiple choice, T/F/NI, matching); and open-question tasks (see Appendix 2 for the use of extended conversations in testing listening).
- Understanding Talks and Lectures
Understanding informal and formal talks and lectures is an important skill for students at higher levels. In daily (social and academic) life, students are expected to understand talks and lectures: they attend lectures, listen to the radio, and watch TV programs and movies. Talks and lectures can be used to test various kinds of expected learning outcomes, such as orienting oneself to a spoken text, identifying the main ideas, and extracting specific information. The sources of the talks and lectures can be ready-made teaching materials or radio and TV programs (news, announcements, speeches, etc.).
The task types can take various forms: picture-based, information transfer, information matching, open question, and gap-filling. These task types can be packaged in various question types such as multiple choice, T/F/NI, short answer, summary, labeling diagrams or flow charts, matching, and sentence completion (see Appendix 3 for samples of listening tests for undergraduate students).
- CONCLUDING REMARKS
- Compared to the development of language teaching principles, which has moved very fast, the development of language testing has been very slow. Theoretically, it is possible to set up criteria for language testing and to design tests that fulfill communicative language teaching principles. At the level of application, however, it is impractical and time-consuming, and it is difficult to always and fully relate the tests to real-life situations, due to the complexity of the aspects involved in communicative language testing and the limited time and resources available for conducting tests. Therefore, language test designers and practitioners still need to work hard to formulate the "communicativeness" of language testing, in order to provide language testing principles that are in accordance with (or at least closely fulfill) communicative language teaching principles, so that the existing gap between communicative language testing and communicative language teaching can be minimized. In other words, the application of communicative language testing should not be left too far behind the application of communicative language teaching.
- In designing listening tests, the test designer should first be aware of the characteristics of spoken language. This awareness will help the designer construct listening tests closely related to real-life situations. The authenticity and naturalness of a listening test can be achieved through the selection of various types of spoken texts and the inclusion of various testing techniques. These will increase the beneficial washback effects of the listening tests on teaching and learning.
- In addition to the awareness of spoken language above, in designing achievement tests for undergraduate students the test designers should follow procedures for test construction: defining the instructional objectives, defining the learning outcomes, selecting the samples of speech and text types, and choosing the types of test items most appropriate to the desired learning outcomes.
- Since the objective of the listening courses is to enable students to understand native speech of English delivered at normal speed, the listening tests should be directed at the understanding of spoken language for communication. The listening tests should be direct and integrative, drawing on various text types packaged in various testing techniques. Since the listening courses are graded, the tests must be carefully constructed, paying attention to the levels of the students.
- References
Bachman, L.F., 1990. Fundamental Considerations in Language Testing. Oxford: Oxford University Press.
Brindley, G., 1986. The Assessment of Second Language Proficiency: Issues and Approaches. Adelaide: NCRC.
Brindley. G., 1991. “Defining Language Ability: the Criteria for Criteria”, in S. Anivan (ed.). Current Development in Language Testing. Singapore: RELC, pp. 139 – 165.
Brindley, G., 1994. Assessing Achievement in the Learner-Centered Curriculum. Sydney: NCELTR.
Brown, J.D., 1996. Testing in Language Programs. New Jersey: Prentice Hall Regents.
Heaton, J.B., 1988. Classroom Testing. London: Longman.
Heaton, J.B., 1991. Writing English Language Tests. New edition. London: Longman.
Hughes, A., 1992. Testing for Language Teachers. Cambridge: Cambridge University Press.
Krahnke, K., 1987. Approaches to Syllabus Design for Foreign Language Teaching. Englewood Cliffs, NJ: Prentice Hall Regents.
Madsen, H.S., 1983. Techniques in Testing. New York: Oxford University Press.
Oller, J.W. Jr., 1979. Language Tests at School. London: Longman Group Limited.
Oller, J.W. Jr., 1988. “Discrete-Point, Integrative or Pragmatic Test”, in Patricia A. Richard-Amanto (ed.). Making it Happen: Interaction in the Second Language Classroom. London: Longman Group Limited, pp. 372-381.
Rea, P.M., 1985. “Language Testing and Communicative Language Teaching Curriculum”, in Y.P. Lee, et al. (eds.). New Directions in Language Testing. Sydney: Pergamon Press, pp. 15-32.
Rost, M., 1990. Listening in Language Learning. London: Longman.
Skehan, Peter, 1991. “Progress in Testing: the 1990s”, in Alderson and North (eds.). Language Testing in the 1990s. London: Modern English Publication, pp. 4-21.
Shohamy, E., 1985. A Practical Handbook in Language Testing for Second Language Teachers. Tel Aviv: Tel Aviv University Press.
Weir, C., 1990. Communicative Language Testing. London: Prentice Hall.
Weir, C.J., 1993. Understanding and Developing Language Tests. Prentice Hall.
Widdowson, H.G., 1988. “The Notional Syllabus: Does it Lead to Communicative Competence?”, in Patricia A. Richard-Amanto (ed.) Making it Happen: Interaction in the Second Language Classroom. London: Longman Group Limited, pp. 291-293.
Appendix 2: Extended Conversations
Type of Text : Extended Conversation
Type of Task : Information Matching
Question Type : Multiple Choice
Source of Materials: Barron’s TOEFL
Instruction: In this test you will listen to an extended conversation between two people. After the conversation you will hear questions about it. Choose the best answer to each question.
Type of Text : Extended Conversation
Type of Task : Information Transfer and Information Matching (Picture-Based Tasks)
Question Types : Multiple Choice and Fill in a form
Source of Materials: Passport to IELTS
Appendix 3: Samples of Talks and Lectures
Type of Text : Talks and Lectures
Type of Task : Information Transfer
Question Type : Fill in a form
Source of Materials: Streamline English Connection
Type of Text : Talks & Lectures
Type of Task : Information Transfer
Question Type : Fill in a Form & Questions and Answer
Source of Materials: Streamline English Connection
Type of Text : Talks & Lectures
Type of Task : Information Transfer
Question Type : Gap-filling
Source of Materials: Passport to IELTS
 Teaching staff at UPT Pusat Bahasa and English Education Department FKIP University of Mataram