Generative AI and the language classroom

AI is turning the education world upside down. What do we need to know about generative AI and what are the challenges and benefits for language teaching?

Author: Nik Peachey


Artificial intelligence (AI) seems to be turning the education world upside down. We are hearing many claims – some of them exaggerated – both for and against its use in education. To help us better understand and use AI as language teachers, I'd like to take a balanced look at some of the challenges and potential benefits it can offer us and our learners.  

Before I start, I should emphasise that there isn't just one AI. Many companies are developing their own AI tools, bots and applications. Not all of these are based on identical language models, and they can work in different ways.

What is generative AI? 

Although it seems as if AI has come out of nowhere, the first AI chatbot, ELIZA, was actually designed as a therapist back in 1966. This was a simple computer program that could respond to questions and statements and help people explore their problems. Although many people were convinced that they were interacting with a real person, the program was designed to give a limited number of pre-programmed responses based around keywords. If it didn't recognise any of the keywords, it would simply echo back the user's statements.

Generative AI is different. It is powered by a large language model (LLM) built from an analysis of a vast amount of text. The texts have been atomised (broken into very small pieces), and the relationships between all the words in the text have been analysed in order to help the model 'understand' context and make predictions about syntax. An average-sized LLM contains around 100 billion parameters, and the model behind OpenAI's ChatGPT is said to have around 1.7 trillion, so these are huge collections of atomised and analysed word relationships.

So the LLM is a collection of analysed word relationships rather than a collection of documents. When you ask generative AI a question, it isn't going to a collection of documents and pulling out part of an article that someone has written; it's producing something new, based on its predictions about the relationships between words in context.

What is AI not? 

When thinking about generative AI, it's worth also considering what it's not. AI doesn't have 'agency'. It responds to commands and will do nothing without human prompting or programming, so, at least at present, we don't need to fear it taking over the world or taking our jobs. It also can't think. When we ask it a question or ask it to do something, it isn't making decisions or responding based on knowledge or experience; it's merely going through a computational process that involves sorting through data and producing text.

How do we interact with AI? 

If we want to interact with AI, we can do so using prompts. These are commands, usually delivered using text or speech-to-text. Prompts can be short and very much like a conversation, but they can also be quite long, like small computer programs with a string of instructions and rules for the AI to execute. 

When I first used AI with my students back in 2001, they were able to ask it questions and get what looked like intelligent answers, but when they tried to follow up on those answers, the AI soon became confused: it couldn't understand references back to earlier parts of the conversation. One of the remarkable things about generative AI is that it can follow conversational threads, so you can interact with it in a very similar way to talking to a real person. This makes it much more natural and easy to use, and it opens up many more possibilities for designing language practice tasks.

One of the most important aspects of learning how to use AI is learning how to prompt it effectively. We can't assume that it has the same shared knowledge that we have when we talk to another human. It's also very literal in its understanding of language, so you need to choose your words carefully. It's important that your prompt includes information about what you want, why you want it, the context, who the text is for and something about the level of the text; this could be academic level, level of maturity (e.g. suitable for a child of 5) or language level. You could also include information about the style, genre or format of what you want it to produce. Be prepared to experiment with your prompts and try different ways of expressing what you want, because they don't always work the way you want the first time.
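To see how these elements fit together, here is a hypothetical example of the kind of prompt a teacher might write (the details are invented purely for illustration):

'Create a short dialogue between a customer and a shop assistant in a clothes shop. The dialogue is for A2-level teenage learners of English and will be used for role-play practice in class. Use the simple present and present continuous, keep it under 120 words, and use an informal but polite register.'

Notice how the prompt states what is wanted, who it is for, the language level, the length and the register, which leaves the AI far less room to misinterpret the request.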

What can generative AI be used for? 

Generative AI is most commonly used to produce text, but it can respond to prompts by producing a range of other media types too.

  • We can use it to produce images by describing what we want to see and the style of the image we want it to produce.  

  • It can be used to produce audio monologues and dialogues, using a wide range of authentic-sounding voices and accents, based on a script we provide.  

  • It can be used to produce music and songs based on a description, style, and theme.  

  • It can even be used to create lifelike video based only on text descriptions.  

It can produce all these media types remarkably quickly. This offers teachers a huge range of potential uses that can save us a lot of time and effort.  

  • We can use it to plan lessons for us and to create the worksheets, images and media content for the lessons. 

  • We can use it to produce questions and quiz-type activities based around any media we can find or produce. 

  • We can use it to produce a syllabus, to create assignments and assignment rubrics, and to mark, grade and give feedback on the assignments based on those rubrics. 

  • We can use it as a mentor and adviser to help us develop our knowledge and reflect on our teaching experiences. One way of doing this is to ask it to guide us through a specific reflection framework such as Gibbs' Reflective Cycle. 

Of course, anything we produce should be carefully checked and edited to make sure that it's accurate and appropriate for our learners. 

  • Learners can use it as a role-play partner to practise their speaking and listening in a range of typical situations. 

  • AI can be programmed to immerse learners in a text and make them part of the story by giving them various options as they progress through the narrative. This can make extensive reading far more individualised and engaging. 

  • They can use it to get feedback on their writing and advice on how to improve it. 

  • They can even use it to practise their speaking and get feedback on their pronunciation. 

These are just a few examples from a resource book containing over 100 suggestions that I produced for the British Council, AI activities and resources for English language teachers.

How will AI impact writing skills? 

One of the most difficult questions to answer about AI is its potential to impact the way people create text and consequently how we teach writing skills.  

The use of AI in the workplace is already commonplace: it's being used to produce documentation, business communications, marketing text and journalistic articles, among a wide range of other things. Given this, we need to provide our learners with the skills and literacies they will need to use AI to produce these different kinds of texts, but we also need to be sure we still develop their understanding of what constitutes a well-written and well-structured text.

What about using AI for cheating? 

One of the greatest fears of many teachers is that their learners will use, or are already using, AI to do their work for them and, in doing so, bypass the learning process. This is a particular fear for teachers in tertiary-level education, where assessment depends much more on longer written assignments. It is a legitimate fear, but we must remember that cheating isn't a new development; students have been trying to cheat for generations. Recent research from Stanford Graduate School of Education, based on an anonymous questionnaire, found that the percentage of students who cheated before the emergence of generative AI in 2022 was much the same as the percentage cheating afterwards: around 60–70 per cent. That figure is obviously still too high, but what it tells us is that the only thing that has changed is the tools students use to cheat, and it's high time we started to rethink assessment and how we evaluate our students' learning. 

There are, of course, companies producing tools that claim to detect whether AI has been used in the production of a text or image, but, particularly when it comes to text, these can be very inaccurate, with accuracy sometimes reported at only around 40 per cent. They are also easily fooled (there are other tools designed specifically to do this), and there are instances of learners having their work flagged as AI-produced when they had only used a grammar checker. 

One solution to the problem would be to ban the use of AI in schools, but first, this would be very difficult to enforce and second, we must ask ourselves what kind of institution bans the use of a tool that has such huge potential to support learning and teaching. Surely a better way to approach this is to look again at how we assess learning and how we ensure that learning is taking place.  

What about AI and plagiarism? 

An issue related to cheating is plagiarism. There are different kinds of plagiarism, but generally it can be defined as 'presenting work or ideas from another source as your own, with or without consent of the original author, by incorporating it into your work without full acknowledgement' (University of Oxford).

This is a complex issue if we acknowledge that the future of writing includes learners using AI tools collaboratively to help produce text. Glenn Kleiman, a senior adviser at the Stanford Graduate School of Education, suggests a framework for the use of AI in academic writing (the SPACE framework) that includes learners 'consulting' with AI, brainstorming with it, getting it to produce multiple parts of a text and then editing and curating the output into their own version. The problem here, though, is that the human and AI contributions become so deeply merged that it is hard to say how, and at what point, learners should acknowledge which parts of the text were produced by the AI tool and which by the human writer. Is it enough for learners to acknowledge at the beginning that AI has assisted in the creation of the text? 

What about AI and copyright? 

We and our learners need to be aware of copyright and check the terms of use on whichever platform we are using to create content. When you use an AI tool to create a text, it isn't stealing that text from another source; it's generating a new text based on its analysis of all the similar texts it was trained on, so the content it creates doesn't, in itself, breach copyright. However, we need to be sure that in using that content in materials we produce, we aren't taking it from the platform we used to create it without permission. Most commercial AI platforms designed for content creation will allow you to use what you create in any way you want, but free sites may retain ownership of the materials, so be sure to check the terms of service and confirm that you have the right to use the content before you build it into your courses. 

What about AI and bias? 

There has been a lot of heated discussion about bias in AI, and there are two sides to this debate.  

First, AI will reflect any bias in the data it is trained on. As, in most cases, AI language models are trained on data found openly on the internet, any overall biases in that data will be reflected in what the AI 'knows' and can produce. Since the majority of information on the internet was created in English and with a Western cultural bias, those biases are likely to be projected onto the AI's output. This is something learners need to be made aware of, and they should be encouraged to question any content that AI produces for them. 

Second, and on a more positive note, when AI interacts with a user, it does so without the personal prejudices a human might bring. An AI chatbot will give each user the same level of service and attention regardless of their gender, age, race or culture, although there is some research suggesting that some LLMs give better responses to users who phrase their requests politely. 

What about AI hallucinations? 

Generative AI has developed something of a reputation for making things up. This has become known as hallucinating, and it can be caused by several things: a misunderstanding of the prompt, errors or gaps in the training data, or simply the creative nature of the AI. 

This can be seen as both a positive and a negative attribute. On the one hand, it means that information you get from AI may be incorrect; on the other, it means that learners and teachers must check the information AI produces for them, and this checking can become a valuable skill and part of the learning process. 

The other side of this tendency to hallucinate is that it relates to the AI's capacity for creativity and invention, capabilities that can be valuable when producing more imaginative and fictional materials. 

Will AI teachers replace us? 

Certainly, in the short term, generative AI won't replace teachers. Unlike teachers, AI has no agency or motivation; it needs to be prompted by humans in order to do anything. However, motivated and well-trained learners can certainly use AI to improve their language skills, and this may well be a good option where trained teachers are scarce or hard to recruit. What AI can do, though, is become a useful assistant to the teacher and help with some of the administrative, planning, materials creation and evaluation work.  

In the end, whether we use generative AI in our teaching or not should come down to one single question: Do we believe it can aid and support our students' learning? If the answer to that is yes, then we need to start engaging with these tools and working out the 'hows' and 'whens' of using them, however uncomfortable that may be for us. 

References 

Kleiman, G. 2023. Teaching Students to Write with AI: The SPACE Framework. https://medium.com/the-generator/teaching-students-to-write-with-ai-the-space-framework-f10003ec48bc 

Spector, C. 2023. What do AI chatbots really mean for students and cheating? https://ed.stanford.edu/news/what-do-ai-chatbots-really-mean-students-and-cheating 

 

Nik Peachey is Director of Pedagogy at PeacheyPublications, an independent digital publishing and consultancy company that specialises in the design of digital learning materials for teachers and trainers. He has been involved in education since 1990 as a teacher, trainer, educational consultant and project manager. He has more than 25 years' experience of working specifically with online, remote and blended learning environments. He has worked all over the world, training teachers and developing innovative and creative products. He is editor of the Edtech & ELT Newsletter.  

You can find out more about what he shares at: Twitter: @NikPeachey   LinkedIn: Nik Peachey
