Large language models (LLMs) are based on deep neural networks, typically engineered with transformer architectures. They comprise hundreds of millions or even billions of parameters and are pre-trained on large quantities of language data. LLMs have made significant strides in recent years on a wide range of natural language processing (NLP) tasks, including language generation, summarization, comprehension, and classification (Brown et al., 2020). Recent LLMs, such as GPT-4 (OpenAI, 2023) and LLaMA (Touvron et al., 2023), have demonstrated a remarkable ability to understand and generate human-like text, whether through proprietary API services or open-source releases, making them valuable tools for a variety of applications, including education. Because LLMs inherit semantic and contextual understanding from pretraining and thus transfer across tasks, they fit well in the context of learning engineering and learning analytics (Baker, 2023), providing reusable and scalable technical architectures across subjects (e.g., math, Scarlatos & Lan, 2023, Shen et al., 2021; science, Cooper, 2023; medicine, Luo et al., 2022). Early integrations of LLMs into educational settings have demonstrated promising results in augmenting learning through item response and student knowledge tracing models for open-ended questions (Liu et al., 2022), socio-emotional support (Li & Xing, 2021), automatic generation of educational content (Sarsa et al., 2022), especially questions (Wang et al., 2021), and automatic contextual feedback (see the review by Hahn et al., 2021). The potential extension of LLMs to process multimodal data further empowers researchers and practitioners to support students' learning across diverse data sources and formats.
Despite the promise of LLMs in education, there is still a need to explore their potential impact, limitations, and ethical considerations. For example, little is known empirically about the learning experience design of LLM-enabled educational applications and their effects on students' motivation, engagement, self-efficacy, and learning outcomes. Additionally, LLMs have been trained predominantly on English and adult-oriented texts, with relatively little non-English and K-12 data involved in their development, potentially leading to equality and equity issues in education (Abid et al., 2021; Ariely et al., 2022; Kasneci et al., 2023). Finally, ethical concerns surrounding LLMs (e.g., factuality, safety, fairness, and transparency) create uncertainty about building sustainable and trustworthy AI systems in education (Kasneci et al., 2023; Li et al., 2022). This special issue aims to collect, review, and publish research that investigates the use of LLMs in educational contexts, addresses the challenges and opportunities associated with their deployment, and furthers our understanding of how LLMs might change the nature of teaching and learning (e.g., forms of assessment, computing education).
To advance our understanding of the role, technical underpinnings, and ethics of LLMs in education, IJAIED is pleased to announce a special issue on "Use of Large Language Models in Education." The rationale for this special issue is to bring together cutting-edge research that explores technical extensions of LLMs in AIED, investigates the design and development of LLM-powered implementations in educational settings, highlights the challenges and opportunities associated with their use, and provides insights into how LLMs can be effectively integrated into educational practices, as well as how and under which conditions they might change those practices, perhaps fundamentally so. We welcome contributions that align with the aims and scope of IJAIED, focus on the use of LLMs in education, and provide evidence of their impact on teaching and learning.