Adaptive large language models for pronunciation training: a cognitive load perspective
Keywords:
Adaptive Large Language Models (LLMs), AI-based Feedback, Cognitive Load Theory (CLT), Pronunciation Training, Second Language Acquisition (SLA)
Abstract
Background: Pronunciation remains a persistent challenge in second language acquisition, often linked to high cognitive load during perception and production. The emergence of Adaptive Large Language Models (LLMs) offers new opportunities for individualized pronunciation training.
Aim: This study evaluates whether adaptive LLM-based feedback, grounded in Cognitive Load Theory, can improve pronunciation accuracy and efficiency while reducing learners’ cognitive load compared to conventional audio-lingual methods.
Method: The training integrated LLM-driven adaptive feedback with principles from Cognitive Load Theory (CLT). A quasi-experimental design compared two groups: one trained with adaptive LLM-based pronunciation support, the other with conventional audio-lingual methods. Pronunciation accuracy, reaction time, and cognitive load (measured via NASA-TLX and pupillometry) were assessed over eight weeks.
Results: Findings indicate that adaptive LLM training significantly improved pronunciation accuracy (+15%) and reduced extraneous cognitive load relative to the control group. Reaction times also decreased, suggesting more efficient speech processing.
Conclusion: Adaptive LLMs can serve as effective pronunciation tutors, balancing instructional input with learners’ cognitive capacity. This integration contributes theoretically, by linking AI-based learning with cognitive load research, and practically, by offering scalable, adaptive, low-load pronunciation training tools.