AI Developments in Language Education: What now? 

 


Eaquals Online Conference Writeup 

by Kassandra Robertson


Recently I attended the annual Eaquals Online conference, held Oct 11-12, which brought together 30 speakers from at least 7 different countries (half the lineup from Turkey) and ELT professionals from across the network. The program featured an illuminating two-part plenary from Dr. Hakan Tarhan of TOBB University of Economics and Technology on the responsible and effective integration of AI into language education, along with 24 other sessions on a diverse array of topics, facilitating a dynamic exchange of insights, experiences, and best practices. 


Having also participated in the Eaquals conference last April, I found it interesting to observe how the discourse around the ‘AI issue’ has evolved now that these tools have been around for two years. The AI-focused sessions this time offered a more in-depth look at all aspects of its integration into language education, exploring the limitations rooted in the mechanics of how LLMs process input (through ‘tokenization’) and emphasizing both the growing need for AI literacy and, crucially, the primacy of the uniquely human capacities no machine can ever replicate. Most significantly, the event revealed an emerging clarity for navigating the ambiguities of AI integration in instruction and assessment, marking a definite shift toward a more practical set of principles and possibilities for an effective AI-empowered pedagogy.  
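
For readers curious what ‘tokenization’ actually looks like, the short Python sketch below is my own illustration rather than anything demonstrated at the conference; it uses OpenAI’s openly available tiktoken library to show how an LLM breaks a sentence into sub-word tokens rather than whole words, which is one source of the limitations discussed.

    # Illustrative sketch only (not from the conference); requires 'pip install tiktoken'
    import tiktoken

    # Load the tokenizer used by GPT-4-class models
    enc = tiktoken.get_encoding("cl100k_base")

    sentence = "Unbelievably, the students outperformed expectations."
    token_ids = enc.encode(sentence)

    # Decode each token id back to its text fragment to see where the splits fall
    fragments = [enc.decode([tid]) for tid in token_ids]

    print(f"{len(sentence.split())} words -> {len(token_ids)} tokens")
    print(fragments)  # longer words may appear split into several sub-word pieces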


The plenary opened with a recounting of education’s reaction to AI since November 2022, acknowledging valid uncertainties and concerns but suggesting a gradual evolution toward ‘cautious optimism’ and a growing recognition of AI proficiency as an essential skill for the modern workforce. Addressing challenges to academic integrity, Dr. Tarhan pointed out the inefficacy of bans and overly punitive responses to AI use, and delivered an insightful examination of the issues with detection tools, referencing Perkins et al. (2024b), which compared the accuracy of various detectors and found not only an abysmally low rate for Turnitin (8%) but also considerable inconsistencies depending on the particular AI tool and version used (e.g., Claude 2, GPT-4), with paid models proving more impervious to detection than free ones (pp. 15-17). This, he explained, together with the risk of false positives, the genuine prospect that the development of such detectors will lag behind that of AI technology, and the mounting challenge to the notion that AI use has no place in student-written work, ultimately makes detection tools an unreliable determiner of actual academic integrity in the age of AI. 

Please click here to read the full review by Kassandra Robertson