The potential of Large Language Models in language education

Authors

V. A. Hamaniuk

DOI:

https://doi.org/10.31812/ed.650

Keywords:

Large Language Models, language education, machine translation, prompt programming, abstract textual reasoning, creative writing, paraphrasing

Abstract

This editorial explores the potential of Large Language Models (LLMs) in language education. It discusses the role of LLMs in machine translation, the concept of ‘prompt programming’, and the inductive bias of LLMs for abstract textual reasoning. The editorial also highlights the use of LLMs as creative writing tools and their effectiveness in paraphrasing tasks. It concludes by emphasizing the need for responsible and ethical use of these tools in language education.
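
As a brief illustration of the ‘prompt programming’ idea mentioned in the abstract (a hypothetical Python sketch, not taken from the editorial), a few-shot paraphrasing exercise can be expressed as ordinary text that an LLM is asked to continue; the function name and example sentences below are illustrative assumptions.

# Hypothetical sketch of 'prompt programming' for a paraphrasing exercise.
# The prompt is plain text: a task description plus a few worked examples,
# which an LLM would be asked to continue.

def build_paraphrase_prompt(sentence: str) -> str:
    """Compose a few-shot paraphrasing prompt for a language-learning task."""
    examples = [
        ("The weather is very cold today.", "It is freezing outside today."),
        ("She enjoys reading books in the evening.", "In the evening she likes to read."),
    ]
    lines = ["Paraphrase each sentence without changing its meaning:"]
    for source, target in examples:
        lines.append(f"Sentence: {source}")
        lines.append(f"Paraphrase: {target}")
    lines.append(f"Sentence: {sentence}")
    lines.append("Paraphrase:")
    return "\n".join(lines)

print(build_paraphrase_prompt("Learning a new language takes time and practice."))

Running the function only prints the assembled prompt; sending it to any particular model or API is outside the scope of this sketch.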


References

Bondarenko, O.V., Nechypurenko, P.P., Hamaniuk, V.A., Semerikov, S.O.: Educational Dimension: a new journal for research on education, learning and training. Educational Dimension 1, 1–4 (Dec 2019), doi:10.31812/ed.620

Brants, T., Popat, A.C., Xu, P., Och, F.J., Dean, J.: Large Language Models in Machine Translation. In: Eisner, J. (ed.) EMNLP-CoNLL 2007, Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, June 28-30, 2007, Prague, Czech Republic, pp. 858–867, ACL (2007), URL https://aclanthology.org/D07-1090/

Luitse, D., Denkena, W.: The great Transformer: Examining the role of large language models in the political economy of AI. Big Data & Society 8(2), 20539517211047734 (2021), doi:10.1177/20539517211047734

Reynolds, L., McDonell, K.: Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, CHI EA ’21, Association for Computing Machinery, New York, NY, USA (2021), ISBN 9781450380959, doi:10.1145/3411763.3451760

Rytting, C.M., Wingate, D.: Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning. In: Ranzato, M., Beygelzimer, A., Dauphin, Y.N., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 17111–17122 (2021), URL https://proceedings.neurips.cc/paper/2021/hash/8e08227323cd829e449559bb381484b7-Abstract.html

Swanson, B., Mathewson, K.W., Pietrzak, B., Chen, S., Dinalescu, M.: Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool. In: Gkatzia, D., Seddah, D. (eds.) Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, EACL 2021, Online, April 19-23, 2021, pp. 244–256, Association for Computational Linguistics (2021), doi:10.18653/v1/2021.eacl-demos.29

Witteveen, S., Andrews, M.: Paraphrasing with Large Language Models. In: Birch, A., Finch, A.M., Hayashi, H., Konstas, I., Luong, T., Neubig, G., Oda, Y., Sudoh, K. (eds.) Proceedings of the 3rd Workshop on Neural Generation and Translation@EMNLP-IJCNLP 2019, Hong Kong, November 4, 2019, pp. 215–220, Association for Computational Linguistics (2019), doi:10.18653/v1/D19-5623

Published

2021-12-09

Issue

Vol. 5 (2021)

Section

Articles

How to Cite

Hamaniuk, V.A., 2021. The potential of Large Language Models in language education. Educational Dimension [Online], 5, pp.208–210. Available from: https://doi.org/10.31812/ed.650 [Accessed 8 October 2025].
Received 2021-12-04
Accepted 2021-12-07
Published 2021-12-09
