
FORUM ON ASSESSMENT ISSUES V

“LISTENING ASSESSMENT: ISSUES RELATED TO SETTING THE LEVEL OF LISTENING TEXTS AND TASKS”

Review by Berna Akpınar

FOAI 5 was hosted by İzmir University of Economics on 17th April 2015. The theme of the event was “Listening Assessment: Issues Related to Setting the Level of Listening Texts and Tasks”. The primary aim of the forum was to examine how listening assessment is carried out in various university preparatory programs, focusing mainly on the rationale for the selection of texts and tasks and on how their levels are set in each institution.

The universities represented at the forum included Anadolu, Bahçeşehir, Bilgi, Bilkent, Ege, Eastern Mediterranean, ITU, İzmir University of Economics, Kadir Has, Özyeğin, Pamukkale, Sabancı, Şehir, TOBB, and Yaşar. Prior to the forum, there was an informative presentation on the “Pearson Test of English Academic and Versant English Placement Test” by İpek Sarı, the assessment solutions manager at Pearson. After the presentation, the representative of each institution gave a brief presentation on their practices. This was followed by in-depth focus group discussions. Next, the focus groups prepared and delivered presentations based on their discussions, and finally, the forum ended with a whole-group discussion.

A closer look at the different parts of the forum day:

“Pearson Test of English Academic and Versant English Placement Test” by İpek Sarı.

İpek Sarı’s presentation promoted the idea that the power of technology could be used to improve the placement assessment process and focused on the benefits of automated scoring for testing English language proficiency. The presentation started with a brief comparison of the Pearson Test of English Academic and the Versant English Placement Test and expanded on automated scoring, which was claimed to be “sustainable, scalable, and reliable”.

The highlights of the presentation included the following:

• Automated delivery ensures consistent item presentation across time, people, and locations.

• Automated scoring enables efficiency, objectivity, fairness, and consistency.

• Standardized results allow a dependable decision-making process.

If you are interested in these tests and how they are scored, you may visit Pearson’s website.


The forum

I. What each institution does regarding listening assessment:

The presentations delivered by the representatives were organised under the same headings to ensure a common ground for discussion. I would like to summarise the main points and interesting practices under these subheadings.

➢ Type of listening tests/tasks

o Sabancı University seems to be one of the most courageous institutions in terms of task selection, namely in its use of open-ended and short-answer listening tasks.

o The majority of the institutions prefer multiple-choice questions and matching tasks.

o Note-taking is mostly done at higher levels, i.e. B1 and B2.

o While-listening seems to be the most prevalent task type.

o In some institutions, there seems to be a tendency to have students listen to the texts twice regardless of their level.

o Listening is a part of quizzes, midterms, and end-of-course tests.

➢ Type of listening texts

o Dialogues, interviews, and mini-talks are preferred for while-listening tasks, and lectures are the common text type for note-taking.

➢ Text selection criteria

o Text selection criteria generally include level of difficulty, topic, length, appropriacy, syllabus objectives, materials in course books, readability scores, etc.

o Sabancı University School of Languages has the longest listening texts, i.e. up to 18 minutes, whereas the maximum length is 10-12 minutes in most institutions.

➢ Medium of testing

o Pre-recorded audio is the most common medium of listening assessment.

o Most of the institutions use laptops during the test.

o A few institutions have a central audio system, which enables all students to take the test simultaneously.

 

II. In-depth focus group discussions:

The focus groups discussed a variety of issues in detail. Some major ones are as follows:

• It is not easy to use a fully authentic text for assessment, so most listening texts are semi-authentic.

• If the test is in the morning, it is a good idea to administer the listening section later, namely as the second task of the overall exam, so that the problem of latecomers can be avoided.

• Micro-skills seem to be underrepresented or ignored in exams.

In a nutshell, it was a fruitful forum that enabled the participants to see what is done in other institutions and to reflect on their own practices. The overall feeling I had was that listening assessment is an essential part of each program and that how it is approached does not seem to vary much across institutions.