Eaquals Conference 2024 – Lisbon: Unlocking the Potential of AI for Empowered Learning

By Kassandra Robertson

This year I had the privilege of attending the annual Eaquals Conference, held April 12th-13th in Lisbon, as a speaker, together with SL colleagues Ilkem Kayıcan, Tuğba Kumbaşar, Serpil Öz, Nazan Gelbal, and Birgül Köktürk. The event brought together over 270 participants from 33 countries, representing organizations from across the language education landscape and creating a rich environment for an invigorating exchange of experiences, best practices, and ideas at the forefront of learning science. Given the dramatic advent of AI, with all its transformative potential and implications, a defining theme of the conference was ‘AI developments in language education’, featuring 15 sessions exploring different aspects of this vast yet nebulous dimension of our brave new reality.

The opening plenary by learning technology expert Nik Peachey provided a fitting introduction to the opportunities and concerns regarding the impact of AI on various aspects of language education including assessment, materials design, and even the relevance of human teachers in an increasingly tech-powered world. In his session, Peachey addressed the limitations of these new tools but also highlighted their potential as a collaborator in the language learning process – as a customizable (and increasingly lifelike) conversation avatar, an on-demand tutor and feedback provider, and a powerful tool for delivering personalized, meaningful learning experiences. A particularly creative activity demoed during the session was an exercise in which ChatGPT 3.5 was prompted to respond like the ‘Sorting Hat’ in the Harry Potter series, bringing extensive reading to life in a virtual role-play. Another relevant concept raised in the session was the ‘SPACE’ framework, which maps the process of writing with the support of AI onto five main steps: Set directions (for the goals, content, and audience), Prompt, Assess the output (for accuracy, completeness, bias, and writing quality), Curate (select and organize), and Edit.
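For readers who want to experiment with something similar, here is a minimal sketch of how such a role-play might be wired up with the OpenAI Python SDK; the prompt wording and model identifier below are my own assumptions, not material from Peachey's session.

```python
# A minimal sketch (not from the session) of a 'Sorting Hat' role-play
# using the OpenAI Python SDK; prompt wording and model ID are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The system message casts the model in the role-play persona.
messages = [
    {"role": "system", "content": (
        "You are the Sorting Hat from the Harry Potter series. "
        "Interview the student about their values and habits, then "
        "sort them into a Hogwarts house, explaining your choice."
    )},
    {"role": "user", "content": "Hello, Sorting Hat! I'm ready to be sorted."},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the session demoed ChatGPT 3.5; exact model assumed
    messages=messages,
)
print(response.choices[0].message.content)
```

In a classroom version, the teacher would typically iterate on the system message until the persona stays in character and keeps turning the conversation back to the learner.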
    
This endeavor to redefine and map target learning outcomes now achieved with varying degrees of AI collaboration was a resonant theme in conference discussions. Although there have been some efforts to formulate ‘ethical AI use statements’, institutional policies remain rather generic and conflicting, seeming, for instance, to give some leeway for use of AI ‘as a supplemental resource’ but then warning that “all forms of counterfeiting, assisted by AI-based technologies [are considered] a breach of academic integrity”. The ambiguity of such statements in a given course, with the added qualification that students are “only allowed to use AI when explicitly stated”, suggests the need for a more nuanced spectrum of policies that promote constructive exploration within a set of clearly defined and communicated boundaries and expectations, depending on the particularities of the context. Our challenge, then, in this massive and emergent realm is to break such lofty statements down into practical principles and strategies to be applied within a particular course or learning domain.

One such framework, relevant to the issue of assessment in a university writing skills context, was raised by Jeremy Simms in his session “AI in education: Encouraging cheats or unlocking potential?”, which addressed the intrinsic limitations of AI-assisted writing (e.g., biases, hallucinations, the generic, uninspiring quality of output) while emphasizing the need to train students in a skillset compatible with the demands of a society in which transcribing screenshots constitutes ‘research’. Accordingly, Simms proposed a reimagining of Bloom’s Taxonomy, again mapping steps in the writing process performed with varying degrees of AI collaboration – from prompting and drafting to maintaining a distinct writer’s voice – onto the broader higher-order thinking skills of create, analyze, and evaluate. This framework corresponded to one in my session (modelled after Leon Furze’s AI Assessment Scale) analogously denoting the degree of AI use acceptable across three broad phases: a green light for ideation on the one end (pre-writing techniques), a strict AI-free policy in the research and writing phase on the other, and, in between, a word of caution on technologically assisted language support, with the clearly communicated expectation that, as Simms likewise emphasized, the submitted output not surpass the student’s ‘known capabilities’.
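To make the traffic-light idea concrete, here is a hypothetical sketch of how such a per-phase policy might be encoded for a syllabus or a course management script; the phase names and wording are illustrative, not a reproduction of Furze's published scale or my session materials.

```python
# A hypothetical encoding of a three-phase AI-use policy of the kind
# described above; phase names and wording are illustrative only.
AI_USE_POLICY = {
    "ideation": {
        "level": "green",
        "guidance": "AI brainstorming and pre-writing techniques encouraged.",
    },
    "research_and_writing": {
        "level": "red",
        "guidance": "No AI assistance; all drafting must be the student's own.",
    },
    "language_support": {
        "level": "amber",
        "guidance": "Limited AI polishing allowed; the output must not "
                    "surpass the student's known capabilities.",
    },
}

def policy_for(phase: str) -> str:
    """Return a one-line policy statement for a given phase of an assignment."""
    p = AI_USE_POLICY[phase]
    return f"[{p['level'].upper()}] {p['guidance']}"

for phase in AI_USE_POLICY:
    print(policy_for(phase))
```

The point of writing the policy down this explicitly is precisely the nuance argued for above: each phase carries its own boundary, communicated in advance, rather than one blanket statement for the whole course.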
 
Another particularly illuminating session was David Byrne’s “Incorporating AI into a writing skills course”, which presented an actionable approach to integrating explicit AI instruction into an English class for business purposes. Byrne opened with a compelling discussion of what constitutes ‘the line’ when evaluating the authenticity of work produced using AI, pointing out that it would be madness not to use the tools at our disposal (e.g., Google Translate to write an email in another language) but emphasizing the extra effort involved in modifying the output such that the recipient wouldn’t know the difference. In other words, even if AI use is a given, there is still perceptible variation in the degree to which a high-quality and appropriate output is successfully achieved. This insight brought to mind, as a cautionary case in point, the recent Willy Wonka disaster in Glasgow, promoted as a dazzling ‘immersive experience’ but delivered as a disturbing mess of ‘AI-generated gibberish’ featuring unsettling ‘unknown’ characters and an Oompa Loompa scene likened to… a questionable science experiment. The point being that when there is overreliance on AI, it is not always so hard to tell; and by factoring this reality into the ways we train students and evaluate their work, it is possible to uphold the role of human intellect in the creative process.

Byrne’s session further detailed a breakdown of the specific skills involved in producing AI-supported writing, with particular emphasis on prompting (applying the acronym RISEN as a reminder of the major elements in defining the output: Role, Instructions, Steps, End goal, and Narrowing), response analysis, and maintaining a distinct writer’s voice, complemented by an awareness-raising discussion of relevant ethical issues and challenges. His approach to ‘inserting your voice’ was especially insightful: students request AI feedback on how they ‘come across’ in their (analogue) emails and then compare their own writing to AI-generated replies to identify the components of their ‘signature style’ (e.g., tone, lexical idiosyncrasies, specific and personal information related to purpose and audience, and other hallmarks such as a particular use of emojis or punctuation). The feedback on this course revealed another fascinating insight: whereas the instructors seemed more fixated on the role of AI, reporting “not [being] very familiar with these types of tools”, the students largely focused on how their writing had genuinely improved, suggesting a perception of AI collaboration as increasingly inextricable from the target skills themselves in language learning.
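To make the RISEN structure concrete, the sketch below shows how its five elements might be assembled into a single prompt; the helper and its example values are my own illustration, not material from Byrne's course.

```python
# An illustrative helper (not from Byrne's session) that assembles a
# prompt from the five RISEN elements described above.
from dataclasses import dataclass

@dataclass
class RisenPrompt:
    role: str          # who the model should act as
    instructions: str  # what it should do
    steps: str         # how it should go about it
    end_goal: str      # what the finished output should achieve
    narrowing: str     # constraints that narrow the output

    def build(self) -> str:
        return (
            f"Role: {self.role}\n"
            f"Instructions: {self.instructions}\n"
            f"Steps: {self.steps}\n"
            f"End goal: {self.end_goal}\n"
            f"Narrowing: {self.narrowing}"
        )

prompt = RisenPrompt(
    role="You are a business-writing coach.",
    instructions="Give feedback on the tone of the email I paste below.",
    steps="Identify the intended audience, then flag phrases that miss it.",
    end_goal="The writer can revise the email while keeping their own voice.",
    narrowing="Limit feedback to five bullet points; do not rewrite the email.",
).build()
print(prompt)
```

Note how the Narrowing element does the pedagogical work here: by forbidding a rewrite, the student is pushed to revise in their own voice rather than accept the model's.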

Among practitioners in the international community, there seems to be a widening acknowledgement of the inevitable impacts of AI on language education, demanding an upgrade not only in terms of ensuring fair and quality assessments but also in terms of learning outcomes that retain a lasting return in a shifting market. My own session, “Less but better: Streamlining technology-enhanced learning,” explored the potential of AI to empower the acquisition of research writing skills by boosting the cognitive capacity available for the generative processing involved in learning. Given AI’s critical role in the evolution of knowledge work (and its requisite skills), the session emphasized the necessity of providing clear guidance in the ethics and methods of AI collaboration in an increasingly tech-forward society. It offered a framework for this in the context of a university writing-intensive humanities course and demonstrated a specific classroom AI application for streamlining the higher-order processing involved in navigating the ideation phase of the research process through the strategic generation of content reference points – in effect, self-scaffolding needed language and content with a built-in (though crucially fallible) ‘knowledgeable other’.
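As a rough, hypothetical sketch of the kind of ideation support described here (the prompt wording and topic are invented for illustration and are not the session's actual tool), one might ask a model for reference points rather than prose:

```python
# A hypothetical sketch of prompting for 'content reference points' on a
# research topic; wording and topic are illustrative, not session materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

topic = "the ethics of facial recognition in public spaces"
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model choice is an assumption
    messages=[{
        "role": "user",
        "content": (
            f"I am beginning a research paper on {topic}. List the key "
            "subtopics, debates, and terminology I should know, as "
            "reference points to guide my own reading. Do not write any "
            "prose I could paste into the paper itself."
        ),
    }],
)
print(response.choices[0].message.content)
```

The final constraint in the prompt reflects the green-light/red-light distinction drawn earlier: the model scaffolds ideation, while the research and writing remain the student's own.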

The diverse perspectives and experiences represented at the conference brought to light many significant insights regarding the role of AI in the future of language education. Amid valid broader concerns regarding such matters as privacy, quality of information, and the specter of job displacement through automation, the one certainty is that AI is here to stay and will only become a more integral aspect of our experience as time goes on. In every domain, it is already transforming how we live, work, and learn, becoming an ever more powerful influence on our thinking and culture. But to my mind, especially after the Eaquals Conference, the inevitable challenges in navigating this ‘jagged frontier’ also present an exciting potential in prompting us, human educators, to formulate new solutions: to redefine learning objectives so they better align with the demands of a more tech-integrated age, to develop and communicate clear and specific policies for ethical AI use tailored to the learning objectives of a given course, to deliver explicit instruction on effective AI collaboration through scaffolding and guidance, and to devise assessments designed to more precisely target the individual performance behind the machine.

As the ‘superhuman’ capabilities of AI continue to advance, that distinctly human quality of the mind will only become more valuable; and it will be our continued responsibility to ensure students are equipped with the technological proficiency, critical thinking capacity, and empathy to forge ahead with confidence into the digital age. 

 
