Takeaways from AIEOU Collaborator Conference at Oxford University

by Özlem Zengin & Züleyha Tülay

On 15–16 September, we had the privilege of attending and presenting at the AIEOU Collaborator Conference at Oxford University. The event gathered educators, researchers, policymakers, and edtech innovators to explore how artificial intelligence is transforming education. Across two days, the sessions revealed both opportunities and challenges: AI as a driver of personalized learning, ethical dilemmas, professional development needs, cultural implications, and institutional readiness.

 



  
Our Contributions

Özlem Zengin

1. Boosting English with AI: Vocabulary and Reading Gains

This presentation reported findings from my 9-week mixed-methods study with 30 B2-level English learners.
●    Design: The experimental group used Quizalize and Newsela, while the control group relied on traditional materials.
●    Results: Pre- and post-tests showed significant vocabulary and reading comprehension gains for the experimental group, supported by large effect sizes.
●    Learner voices: Focus groups highlighted increased motivation, stronger vocabulary retention, improved reading confidence, greater autonomy, and appreciation of personalized learning.
●    Implications: The study demonstrates how AI can be responsibly embedded into existing curricula to amplify outcomes, and it offers teachers and institutions a replicable framework.

 



Özlem Zengin & Züleyha Tülay
 

2. Building AI Readiness Among Educators: Designing and Evaluating a Teacher-Led INSET Model for Higher Education

This presentation introduced From Awareness to Action, a year-long AI INSET program at Sabancı University School of Languages.
●    Theoretical basis: The program is grounded in the i-TPACK model (integrating content, pedagogy, and AI-related knowledge).
●    Scope: 20–30 instructors are expected to participate in cycles of workshops, collaborative planning, and reflective practice.
●    Findings: Teachers may develop AI literacy and tool confidence, while also grappling with ethical questions and institutional barriers.
●    Takeaway: Teacher-led, context-driven design fosters deeper engagement and ensures that AI integration remains sustainable and ethically grounded.

Sessions That Stood Out

The conference featured a rich variety of parallel sessions. Below are some highlights that left a strong impression on us:

- The conference opened with Anne Trefethen's presentation on leading digital transformation at the University of Oxford. She discussed the expansion of generative AI use since 2022, along with the design and delivery principles at the heart of Oxford's digital transformation. She also described the AI Governance Group, which is made up of academics, teaching and research support staff, and cybersecurity and AI experts.

- Elizabeth Wonnacott from Oxford University focused on AI and language learning. She discussed how AI can support language learning: with learning analytics, practice can be personalized around what language an individual child does or doesn't know. She stressed the importance of high-quality, user-informed research and noted that there are unprecedented opportunities to support teachers, enhance pedagogy, and bridge educational gaps. She concluded that researchers, educators, and like-minded developers must play a central role: let's not hand the future of education over to Big Tech.

- In his presentation, Mike Sharples from the Institute of Educational Technology at the Open University argued that AI is not a thing but a series of systemic disruptions. To manage and innovate education for an AI-infused world, he said, we need to take a holistic systems approach, and he shared a 10-step systems approach to educational innovation with AI, including steps such as building awareness, identifying opportunities, and fostering collaborative cultures. A systems approach will not by itself solve the problem of adapting education to an AI-infused world, he noted, but it will help us ask the right questions, reduce blockers, build resilience, and design future policy.

- Dr. Aia Shilibekova and Dr. Noosha Mehdian from University Canada West, Vancouver, presented their work titled "Framework Fever: A Typology and Critical Cartography of AI-in-Education Models". They focused on the unprecedented surge of frameworks and on the recurring weaknesses in current ones: theoretical thinness, geographical imbalance, equity gaps, and practice disconnect.

- Mel Sellick focused on "Human Readiness". She argued that what is missing is the psychological and relational capacity to interact with systems that simulate care but can't reciprocate.

- Clare Jarmy focused on epistemic co-responsibility and the classroom. She discussed cultivating meaningful knowledge and understanding as well as fostering intellectual virtues, and asked whether AI tutors can have epistemic responsibility, which requires the capacity to hold beliefs, agency, the capacity to be truth-seeking, and responsiveness to others.

- Natasha Banks from "Day of AI Australia" demonstrated a scalable AI literacy model for schools. She pointed out that AI offers the opportunity to either close or further widen the digital divide for at-risk groups. The solution was a ready-to-go AI literacy program for Australian students and their teachers, adapting global content for local impact and reach. Over 200,000 students and more than 800 teachers have participated in the program since 2022.

- Dr. Rachel Toncelli from Northeastern University talked about a book club that brought educators together. The simple idea of a book club was offered as a solution to some AI literacy challenges: participants were assigned the book "Artificial Intelligence, Real Teaching", which helped them expand their knowledge of GenAI. The benefits of the book club were collaborative learning, mutual empathy, and a safe and supportive environment.

- Professor Mark Bennett from Charles Sturt University introduced the SECURE GenAI framework. The goal was to enable staff to assess whether their planned GenAI use case can be safely employed without explicit university approval, and to provide a structured, transparent decision-making process so that staff feel empowered to explore GenAI tools within safe boundaries, regardless of their prior knowledge of GenAI.

- Noosha Mehdian discussed "Holding Fear, Honoring Hope: From Professional Anxiety to Agency in the AI Era". She asked whether AI makes us replaceable now that it can do so many things 24/7, and argued that AI cannot replace the human capacity to seek truth rather than merely process patterns, nor can it replace critical evaluation, collective meaning-making, and moral reasoning. Her central question was: how do we design human-AI collaboration that preserves education's essential purposes? The answer was to design learning experiences that amplify rather than replace human capacities. Educators, she argued, are not redundant but reimagined: they are curators of learning experiences that matter, critics of algorithmic bias and technological determinism, and cultivators of classroom cultures where students feel brave enough to ask transformative questions.

- Dr. Robert Farrow from the Open University focused on the positive and negative aspects of AI and discussed some domains such as data, research, governance, ethics/safety. He also mentioned some AI ideologies such as utopian AI and catastrophic AI.

- Dr. Manish Malik and Dr. Julie-Ann Sime focused on a GenAI-assisted scoping review and the lessons learnt. They investigated how reliably AI can be embedded in the process of carrying out a scoping review. They concluded that each subject domain is different, so researchers should share the evaluation scores comparing human and AI classification, data extraction, and so on, as they did, for transparency and trust.

- Tina Austin discussed the Unblooms model, a problem-centered approach to learning design in the AI era. She asked whether Bloom's taxonomy needs a reboot in the age of AI. The core principles of Unblooms are that learning is non-linear and recursive, that problem-solving drives engagement, and that AI serves as a cognitive amplifier.

- Faye Palmer shared NPEP, the National Professional Enquiry Project, a Welsh Government-funded project initiated to develop teachers as enquirers in support of the new Curriculum for Wales. The project focused on the reading outcomes of learners using AI tools.

- In his presentation titled "Generative AI to augment and accelerate educators", Bert Verhoeven focused on why AI is a game-changer and the limitations of AI. He discussed the human-centric AI-first pedagogical framework (HCAIF) and highlighted the importance of improving scaffolding, building confidence and ensuring balance between human-AI interaction.

- Irene Picton from the National Literacy Trust focused on young people's and their teachers' use of generative AI to support literacy in 2025. The 2025 Annual Literacy Survey showed that 45.6% of students aged 13 to 18 used GenAI weekly or more often, including to support writing and reading. One in four young people said they simply copied what they got from AI. The survey also showed that the percentage of teachers using AI has almost doubled since 2023, with more teachers using AI in 2025 to create lesson resources, generate model answers, and adapt or differentiate content. Findings showed that more young people are using GenAI in interactive, creative, and critical ways to enrich practices they already enjoy.

- Luisa Baum, Ewelina Lacey, Lori Robbings and Casandra Silva Sibilin shared the results of an international survey. The top five ways students report using GenAI were summarizing articles or papers, summarizing lecture notes, generating study guides and flashcards, understanding complex topics, and translating and improving writing clarity. Students also use it for resume/job application assistance, health and fitness coaching, stress management tips, therapy and life coaching, and companionship.

- Erdinç Saçan from Fontys Venlo University of Applied Sciences highlighted that there are thousands of GenAI tools, but what actually matters is improving AI literacy and not teaching as if it were 2010. When we focus on the process, not just the product, and on real-world projects, coaching, and growth-based assessment, he argued, we can raise the bar. He identified the essential elements of AI integration in education: time for teachers, time for AI training, process monitoring, and personal feedback. A flexible curriculum and ethical considerations must also be rethought.

- Tatjana Titareva from Umeå University, Sweden, discussed whether AI makes learning less stressful. She argued that responsible AI solutions need to be social rather than technical, and that when using AI, teachers should think about which tasks are suitable for learners to complete with AI assistance, pay attention to stimulating learners' intrinsic motivation, and develop scaffolding to support active learning.

- Dr. Samantha Curle from the University of Bath presented a comparative analysis of guided and unguided AI-assisted writing in English as a foreign language education. The research examined how EFL students' satisfaction differs between guided and unguided AI writing tasks, as well as the impact of each on academic integrity. One group used a structured writing guide alongside ChatGPT; the other used ChatGPT without guidance. The guided group expressed high satisfaction with the experience, while the unguided group said they felt lost, did not know how to use ChatGPT effectively, and ended up copying and pasting outputs. The takeaway was that AI can either enhance or undermine writing, and that guidance is the critical factor.

Key Reflections

A few unifying lessons emerged across these sessions:

●    AI is systemic, not isolated. Its impact spans assessment, pedagogy, policy, and professional development.
●    Human readiness matters as much as system readiness. AI fluency without ethical or developmental grounding can create fragility.
●    Frameworks provide pathways. From SECURE GenAI to CO-STAR, practical models are emerging to guide responsible adoption.
●    Culture and ethics must come first. Without grounding in traditions and values, AI risks reinforcing cultural erasure and inequity.

The AIEOU Collaborator Conference not only provided space to share our own research but also challenged us to reflect more deeply on how AI should be integrated into education: responsibly, ethically, and in ways that empower both learners and teachers.



 
Özlem Zengin & Züleyha Tülay

Please click here for the infographic summary.