Estudios e Investigaciones

The new reality of education in the face of advances in generative artificial intelligence

La nueva realidad de la educación ante los avances de la inteligencia artificial generativa

Francisco José García-Peñalvo
Universidad de Salamanca, Spain
Faraón Llorens-Largo
Universidad de Alicante, Spain
Javier Vidal
Universidad de León, Spain


RIED-Revista Iberoamericana de Educación a Distancia, vol. 27, núm. 1, 2024

Asociación Iberoamericana de Educación Superior a Distancia

How to cite: García Peñalvo, F. J., Llorens-Largo, F., & Vidal, J. (2024). The new reality of education in the face of advances in generative artificial intelligence. [La nueva realidad de la educación ante los avances de la inteligencia artificial generativa]. RIED-Revista Iberoamericana de Educación a Distancia, 27(1), 9-39. https://doi.org/10.5944/ried.27.1.37716

Abstract: It is increasingly common to interact with products that seem “intelligent”, although the label “artificial intelligence” may have been replaced by other euphemisms. Since November 2022, with the emergence of the ChatGPT tool, there has been an exponential increase in the use of artificial intelligence in all areas. Although ChatGPT is just one of many generative artificial intelligence technologies, its impact on teaching and learning processes has been significant. This article reflects on the advantages, disadvantages, potentials, limits, and challenges of generative artificial intelligence technologies in education to avoid the biases inherent in extremist positions. To this end, we conducted a systematic review of both the tools and the scientific production that have emerged in the six months since the appearance of ChatGPT. Generative artificial intelligence is extremely powerful and improving at an accelerated pace, but it is based on large language models with a probabilistic basis, which means that they have no capacity for reasoning or comprehension and are therefore prone to errors that need to be checked. On the other hand, many of the problems associated with these technologies in educational contexts already existed before their appearance, but now, given their power, we can no longer ignore them, and we must decide how quickly we will respond in analysing and incorporating these tools into our teaching practice.

Keywords: artificial intelligence, generative artificial intelligence, ChatGPT, education.

Resumen: Cada vez es más común interactuar con productos que parecen “inteligentes”, aunque quizás la etiqueta “inteligencia artificial” haya sido sustituida por otros eufemismos. Desde noviembre de 2022, con la aparición de la herramienta ChatGPT, ha habido un aumento exponencial en el uso de la inteligencia artificial en todos los ámbitos. Aunque ChatGPT es solo una de las muchas tecnologías generativas de inteligencia artificial, su impacto en los procesos de enseñanza y aprendizaje ha sido notable. Este artículo reflexiona sobre las ventajas, inconvenientes, potencialidades, límites y retos de las tecnologías generativas de inteligencia artificial en educación, con el objetivo de evitar los sesgos propios de las posiciones extremistas. Para ello, se ha llevado a cabo una revisión sistemática tanto de las herramientas como de la producción científica que ha surgido en los seis primeros meses desde la aparición de ChatGPT. La inteligencia artificial generativa es extremadamente potente y mejora a un ritmo acelerado, pero se basa en modelos de lenguaje de gran tamaño con una base probabilística, lo que significa que no tienen capacidad de razonamiento ni de comprensión y, por tanto, son susceptibles de contener fallos que necesitan ser contrastados. Por otro lado, muchos de los problemas asociados con estas tecnologías en contextos educativos ya existían antes de su aparición, pero ahora, debido a su potencia, no podemos ignorarlos; solo queda asumir cuál será nuestra velocidad de respuesta para analizar e incorporar estas herramientas a nuestra práctica docente.

Palabras clave: inteligencia artificial, inteligencia artificial generativa, ChatGPT, educación.

INTRODUCTION

Research on artificial intelligence (AI) has been steadily growing for years and shows no signs of slowing down, providing increasingly complex, larger, and faster-responding models. These models are currently trained with billions of data units, making them much more powerful than their counterparts from just a few years ago, which has led to significant advancements in language models within a short span of time. This power has practically limitless applications, including ethically questionable uses, leading to legal loopholes and extreme reactions that go as far as banning their use. However, innovation based on intelligent technologies is experiencing significant growth, as the data reflect: in the past 10 years, the number of new AI companies has tripled, and of the 273 companies in the latest Y Combinator (YC) batch, 57 are involved in generative AI, pointing to an increase in job positions that require or are related to AI competencies (Maslej et al., 2023).

To analyse the topic of AI and education, it is useful to set aside aspects related to education policies and management and focus on the relationship between AI and learning. In this vein, Wang and Cheng (2021) identified three main research directions in AI in education: learning with AI, learning about AI, and using AI to learn how to learn. To simplify further and aid the reading of this article, we will differentiate between two aspects: the use of AI in teaching and learning processes and its impact on them (AI in education), and the role of education in fulfilling its purpose in society in the presence of AI (education in the age of AI).

AI in education

The Beijing Consensus on Artificial Intelligence (UNESCO, 2019) aims to address the opportunities and challenges presented by AI concerning education and provides 44 recommendations grouped into different aspects that help understand the magnitude of the task. These aspects include planning AI in educational policies; AI for the management and delivery of education; AI to support teaching and teachers; AI for learning and assessment of learning; developing values and skills for life and work in the AI era; AI for providing lifelong learning opportunities for all; promoting equitable and inclusive use of AI in education; gender-responsive AI and AI for gender equality; and ensuring ethical, transparent, and verifiable use of educational data and algorithms. These aspects are discussed in more detail in AI and education: guidance for policy-makers (UNESCO, 2021). As is evident, the relationship between AI and education is complex and multifaceted.

Setting aside education policies and management and focusing on the relationship between AI and learning, discussions of AI in education most often centre on personalised learning (Zhang et al., 2020). They also encompass various other topics, such as intelligent tutors (Yilmaz et al., 2022), virtual assistants (Gubareva & Lopes, 2020), immersive and interactive learning experiences (Chng et al., 2023), and the use of data to enhance students’ performance (Vázquez-Ingelmo et al., 2021). Analysing these data makes it possible to detect patterns and trends, helping teachers spot failures early and identify areas for improvement, which in turn facilitates the design of more effective teaching strategies (Gašević et al., 2015).

Education in the age of AI

Discussing education in the age of AI entails reflecting on the role of education in preparing individuals for a rapidly changing world where this technology will be present in all aspects of life: work, studies, leisure, personal relationships, and more. Hence, it is crucial to understand how AI works and the benefits and risks of its use. New knowledge, skills, competencies, and values are needed for life and work in the era of AI (Bozkurt et al., 2023; Ng et al., 2022).

AI as an emerging technology that can be disruptive

Universities have been concerned about emerging technologies and their application in higher education. For example, in Spain, according to the UNIVERSITIC 2022 report (Crespo Artiaga et al., 2023), 71% of universities have designed a strategy to promote innovative teaching initiatives, and 86% analyse IT trends applicable to teaching innovation; one in three universities has evolved its learning management system into a digital learning ecosystem that facilitates personalised education. However, only 17% currently utilise adaptive learning solutions to enable experiences with a higher degree of personalisation; 30% of universities have a laboratory to analyse emerging technologies and their application to their environment; 16% have a catalogue to understand better the potential of emerging technologies to promote pilot projects for digital transformation; and 25% utilise AI to prevent cybersecurity threats, thereby improving security management. AI will substantially impact society and is considered a disruptive technology (Alier-Forment & Llorens-Largo, 2023). It is as dangerous to embrace it naively as to reject it outright (Llorens-Largo, 2019).

Social implications of AI (beyond what it seems)

One should avoid falling into the naive belief that technology is neutral and that everything depends on the humans who develop and use it. Technology is not merely a means to an end but also shapes that end (Coeckelbergh, 2023).

Therefore, topics such as algorithmic decision-making, including its capacity for influence and manipulation (Flores-Vivar & García-Peñalvo, 2023a), biases, unfair discrimination, inequality (Holmes et al., 2022), surveillance, technical competencies, information bubbles, and exclusion (Nemorin et al., 2023), as well as the substitution of humans in posthumanism and transhumanism (Neubauer, 2021), and all their interrelationships, should be included in the debate.

All these aspects are highly significant in education because they impact human behaviour. Considering their double implication, they affect the educational realm regarding how AI can influence these aspects, just as it does in society. Additionally, education is crucial in preparing individuals to navigate a future world heavily influenced by technology. AI is one of the influential forces (Flores-Vivar & García-Peñalvo, 2023b).

Integral and strategic vision of AI

Therefore, a systematic vision is required, with a comprehensive global and multicultural approach. On November 23, 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2022). One of the areas of action addressed by the recommendation is education, recommending, among other things, “to provide adequate AI literacy education to the public on all levels in all countries in order to empower people and reduce the digital divides and digital access inequalities resulting from the wide adoption of AI systems” (p. 33). In the case of Spain, this was already reflected in strategic axis 2 (promoting the development of digital skills, enhancing national talent, and attracting global talent in artificial intelligence) of the National Strategy on Artificial Intelligence (from the Spanish Estrategia Nacional de Inteligencia Artificial) (Gobierno de España, 2020). It can therefore be asserted that the concern already exists, at least at the political level. So, what has transpired to prompt global discourse and analysis of the subject? The answer lies in the public release, in November 2022, of a tool called ChatGPT by OpenAI (https://openai.com/blog/chatgpt/). This tool enables conversational interactions in which individuals can ask questions or make requests, and the system provides responses within seconds that, at first glance, are indistinguishable from those a human expert would have offered. This development has sparked global attention and analysis due to its potential implications for various domains and its ability to simulate human-like conversation and knowledge dissemination at scale. ChatGPT is based on a large language model (LLM) of the Generative Pre-trained Transformer (GPT) family (Brown et al., 2020).

However, ChatGPT, as an example of a tool, and GPT-3, as an LLM, are just two specific cases (albeit ones that have caused significant technological disruption) of a broader trend in AI known as generative AI (García-Peñalvo & Vázquez-Ingelmo, 2023; van der Zant et al., 2013). Generative AI refers to the ability of AI systems to generate content such as text, images, videos, audio, and more in response to a given prompt, typically expressed in natural language text. Whereas traditionally, these systems have accepted text-based inputs, some applications can now process multimodal inputs, incorporating various forms of media to generate content.

Language models and ChatGPT (and similar): Are they really intelligent?

The initial human response is often to underestimate any artefact that claims to be intelligent. As a result, ChatGPT has been labelled, among many other things, a charlatan or a sophisticated parrot that does not truly understand what it is talking about (Alonso, 2023). Individuals may perceive themselves as more intelligent than ChatGPT if they can demonstrate its lack of competence in logic or mathematics, its tendency to fabricate information it does not know (referred to as hallucinations), and other similar weaknesses. All of this may have been true at some point, but it may since have been addressed by the continuous improvements being released, be resolved in future versions, or be mitigated through integration with plug-ins that enhance ChatGPT’s versatility and capabilities. Nevertheless, it should be remembered that ChatGPT is based on a language model that was not initially designed for many of the tasks it is being asked to perform.

The strongest attacks on generative AI are not new and can be traced back to John Searle’s counterargument to the Turing test. In Turing’s (1950) imitation game, a machine could be considered intelligent if it could pass as a human in a blind test. Searle (1980) countered with his Chinese room: a human who understands only Spanish sits in a room with a rule book, receives a piece of paper with indecipherable symbols (in Chinese), and follows the book’s instructions to produce another piece of paper with symbols he or she does not understand (also in Chinese). From the outside, it may appear to be a conversation, yet no one inside the room understands Chinese. Although Dennett (2017) remarked that “nature makes extensive use of the principle of minimal knowledge and designs highly capable and expert creatures, although even cunning ones who have no idea whatsoever of what they are doing or why they are doing it” (p. 299), the reality is that our understanding of how the brain works and of the relationship between the human mind and consciousness is still in its infancy.

Institutional and decision-makers’ concerns

As an indication of the interest generated in university governance, in February 2023 the European University Association published its position on the responsible use of these tools in university teaching, which was subsequently endorsed in Spain by Crue Universidades Españolas and the Ministry of Universities. One point stands out from its approach: the immediate consequences for the teaching and learning process, particularly for assessment, since students are already using these tools.

EDUCAUSE has also expressed its concern and conducted a QuickPoll survey, whose main conclusion was that whereas various stakeholders in higher education are still forming opinions about generative AI, students, as well as faculty members, have already started using it in their assignments. However, most institutions do not have policies regarding its use (Muscanell & Robert, 2023).

In the same vein, the Institute for Higher Education in Latin America and the Caribbean (from the Spanish Instituto para la Educación Superior en América Latina y el Caribe) of UNESCO recommends using ChatGPT and AI with care and creativity, building capacity for understanding and management, and conducting AI audits (Sabzalieva & Valentini, 2023).

According to Informatics Europe (2023), this intense and rapid emergence in the academic world is both concerning and exciting. It raises concerns due to its short-term adverse effects on established conventions of trust and authenticity. At the same time, it generates enthusiasm for its potential to enhance human capabilities. Specifically, coding is a worrisome topic in informatics studies (Hazzan, 2023; Meyer, 2022).

Concern in academia

The topic has also sparked interest among the educational research community. There is a reasonably unanimous position that the way forward is not to ignore or prohibit ChatGPT or similar applications but rather to train teachers and students in their proper and ethical use. Furthermore, curriculum revisions are necessary to prioritise critical thinking and maximise the benefits of these tools (García-Peñalvo, 2023). An approach is proposed that builds trusting relationships with students, with a pedagogical design centred around people, where assessment becomes an integral part of the learning process rather than solely controlling activities (Rudolph et al., 2023).

“The emperor has no clothes”

Whenever a promisingly disruptive technology emerges, it is accompanied by extreme discourses and positions, both technophilic and technophobic. Chomsky et al. (2023) argued that generative AI “will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge”, whereas Bill Gates (2023) asserted that “the development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone”.

In this crossfire of positions and blame allocation, when the focus is placed on the consequences of applying generative AI in education, it becomes evident that ChatGPT is being held responsible for weaknesses in current educational practices that already existed but that few were willing to acknowledge. The education system, particularly the current university system, is designed for a world with a scarcity of information, where individuals attend educational institutions during their formative years to acquire and store knowledge for future use. Now, however, we live in a society with immediate, on-demand access to abundant information, including truths or tautologies, half-truths or indeterminations, and falsehoods or contradictions. Educational institutions, particularly universities, continue to uphold their commitment to society regarding knowledge creation, transmission, and preservation. However, the question remains whether they can respond effectively to the challenge posed by the arrival of “intelligent” applications that, while still in their early stages, have already caused a significant informational earthquake.

This article aims to confront the naked emperor with the mirror of reality, a reality that has many facets and opportunities but cannot be denied or prohibited. Every effort should be made to understand its possibilities and limitations to educate all stakeholders in the educational system and incorporate these applications that use generative AI into teaching and learning processes as efficiently and effectively as possible. To achieve this, first, a mapping will be carried out, including both the initial tools that have emerged, dominating the market and with potential use in education, and the reactions within academia during the first six months since the emergence of ChatGPT, which has popularised generative AI.

IMPLICATIONS AND USES OF GENERATIVE AI IN EDUCATION

To use a specific technology in teaching and learning processes with informed decision-making, it is crucial to understand its possibilities and limitations without being swayed by extreme positions, which tend to be particularly biased when a potentially disruptive trend emerges, as has been the case with generative AI and its rapid penetration. Therefore, before discussing the implications of this technology in the educational context, a prospective study will be conducted focusing on the tools already available with potential educational uses and on the contributions that have emerged from academia in the first six months since the tsunami caused by ChatGPT.

Educational tools based on generative AI technologies

Generative AI aims at content generation. The language models used for this purpose are trained to determine which elements are more likely to appear near others. To generate their responses, they evaluate large data corpora, allowing them to provide answers that fall within a certain probability range based on the training corpus. This means the responses are generated without explicit reasoning, so although they may be coherent, they are not always correct. This characteristic should be considered in any context, but especially in the educational uses of these tools.
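The probabilistic mechanism described above can be sketched in a few lines: given the scores (logits) a model assigns to candidate next tokens, the continuation is drawn from a softmax distribution rather than derived by reasoning. The tokens and scores below are toy values invented for illustration, not the output of any real language model.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Draw the next token from the softmax of the model's scores.

    The continuation is chosen at random in proportion to learned
    probabilities, not by reasoning about the content, which is why
    a coherent answer is not necessarily a correct one.
    """
    rng = random.Random(seed)
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())
    exps = {tok: math.exp(v - top) for tok, v in scaled.items()}  # stable softmax
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    r = rng.random()
    cumulative = 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        cumulative += p
        if r < cumulative:
            return tok, probs
    return tok, probs  # numerical fallback

# Toy scores for continuations of "The capital of France is ..."
toy_logits = {"Paris": 6.0, "Lyon": 2.0, "Madrid": 1.0}
token, probs = sample_next_token(toy_logits, seed=0)
```

Raising the `temperature` flattens the distribution, so less likely (and more often wrong) continuations are sampled more frequently; this is the sense in which fluency and correctness are decoupled.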

The number of software tools incorporating some form of intelligent features increased in 2022 and, exponentially, in 2023. This is primarily due to the development of LLMs (Gruetzemacher & Paradice, 2022), where the notion of “large” keeps expanding alongside advances in AI. These models are trained on extensive knowledge bases using significant computing power.

LLMs have taken the forefront due to the popularity of the Generative Pre-trained Transformer (GPT) (Brown et al., 2020; OpenAI, 2023), which, whether in version 3.5 (2022) or 4 (2023), powers ChatGPT. However, GPT is just one of the many existing LLMs based on the Transformer architecture (Vaswani et al., 2017). There are various language model proposals, ranging from earlier models that may not be considered large scale by today’s standards, such as BERT (Bidirectional Encoder Representations from Transformers, 2018) (Devlin & Chang, 2018; Devlin et al., 2019) or T5 (Text-To-Text Transfer Transformer, 2019) (Raffel et al., 2020), to more recent ones, such as LaMDA (Language Model for Dialogue Applications, 2021) (Adiwardana, 2020; Collins & Ghahramani, 2021; Thoppilan et al., 2022), Chinchilla (2022) (Hoffmann et al., 2022), Bard (2023) (Pichai, 2023), LLaMA (Large Language Model Meta AI, 2023) (Touvron et al., 2023), Titan (2023) (Sivasubramanian, 2023), or Lima (Zhou et al., 2023), among many others (Yang et al., 2023; Zhao et al., 2023).

These language models enable natural language conversation by generating high-quality responses. However, closed-source code and the high costs associated with development and training pose significant barriers, even for the most influential companies in the industry.

To address these challenges, an increasing number of solutions are emerging from the open-source software community. For instance, Alpaca (2023) (Taori et al., 2023) refines LLaMA (Touvron et al., 2023), making it a more accessible and replicable model. By starting with 175 human-written instruction–output pairs, Alpaca leverages GPT-3.5 to expand the training data to 52K through self-supervision, resulting in a performance similar to that of GPT-3.5. Despite Alpaca’s effectiveness, the full-scale fine-tuning of LLaMA is time consuming and computationally expensive, does not support multimodality, and is challenging to transfer to different subsequent scenarios. However, pathways have been opened for methods to design lightweight and customised models that can be trained on lower-end computing devices quickly. One example is LLaMA-Adapter (Zhang et al., 2023), which introduces 1.2 million parameters to the frozen LLaMA 7B model and can be fine-tuned in less than an hour using 8 A100 GPUs.
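The economics behind such adapters can be illustrated schematically. The sketch below shows the general parameter-efficient idea, a small trainable low-rank update on top of frozen base weights (the mechanism popularised by LoRA), rather than LLaMA-Adapter's exact design, which inserts learnable prompts with zero-initialised attention. All dimensions and values are toy choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "base model" layer: a large pretrained weight matrix.
d = 64
W_frozen = rng.standard_normal((d, d)) / np.sqrt(d)

# Small trainable adapter: a low-rank update B @ A (rank r << d).
# B starts at zero, so at initialisation the adapter changes nothing
# and training departs smoothly from the base model's behaviour.
r = 4
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))

def layer(x, A, B):
    """Base projection plus the adapter's low-rank correction."""
    return W_frozen @ x + B @ (A @ x)

x = rng.standard_normal(d)
assert np.allclose(layer(x, A, B), W_frozen @ x)  # adapter is inert at init

# Only the adapter is trained: 2*r*d parameters vs d*d in the base layer.
adapter_params = A.size + B.size   # 512
base_params = W_frozen.size        # 4096
```

At these toy dimensions the adapter is an eighth the size of the frozen layer; at LLaMA 7B scale the same ratio logic is what brings fine-tuning within reach of modest hardware.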

The intention is not to provide an exhaustive list of these tools, as there are already interesting resources that are frequently updated (Agarwal, 2023; Ebrahimi, 2023), as well as AI tool directories such as Futurepedia (https://www.futurepedia.io/) or All Things AI (https://allthingsai.com/), which aim to reflect this overwhelming evolution. The objective is instead to categorise the tools that have potential educational use and to select some representatives of this generative approach that are starting to stand out, due to either their acceptance in production environments or their potential for future advancements, even though they may still be in an early prototyping phase.

Regardless of the increasing number of multimodal tools that transform various types of inputs into different types of outputs, to understand the current landscape concerning the educational context, they can be classified into tools that primarily generate text, images, videos, 3D objects, audio, source code, and tools for detecting AI-generated text. Within each category, different functional subcategories have been established (see Table 1).

Table 1
Classification of generative AI tools with potential educational use

Text generation
  Chatbot: ChatGPT, ChatSonic, Claude
  Content creation: Jasper, Notion
  Exam generator: Conker, Monic
  Language teaching: Twee
  Office tools: Google Workspace, Microsoft 365 Copilot
  Paraphrasing text: Quillbot
  Personal curriculum builder: Resume Builder
  Search engine: Microsoft Bing, Perplexity, You
  Research support: ChatPDF, Consensus, Elicit, Humata, Klavier, SciSpace Copilot, Scite Assistant, Trinka

Image generation
  Graph generator: GraphGPT
  Image generator: Adobe Firefly, Bing Image Creator, Craiyon, DALL·E 2, Deep Dream Generator, Dream by Wombo, Leap, Midjourney, NightCafe, Stable Diffusion Online, Starryai, Stockimg, Visual ChatGPT
  Presentation generator: ChatBA, Decktopus, GPT for Slides, SlidesAI

Video generation
  Video generator: Fliki, Gencraft, Imagen Video, Make-A-Video
  Video-to-text converter: YoutubeDigest

3D object generation
  3D object generator: AICommand, DreamFusion, GET3D, Imagine 3D

Audio generation
  Audio generator: AudioLM, Lovo, Murf.ai, Voicemaker
  Voice modulator: Voicemod
  Voice-to-text converter: Otter, Transkriptor

Source code generation
  Code debugging: Adrenaline, Code GPT
  Code generator: Amazon CodeWhisperer, Codeium, Ghostwriter, GitHub Copilot, Text2SQL

AI-generated text detection
  AI-generated text detector: AI Text Classifier, GPTZero
  Anti-plagiarism: Turnitin
Source: own elaboration.

After this process of reviewing and classifying some of the existing tools that make use of generative AI techniques, several reflections can be presented:

  1. There is a vast array of tools that utilise generative AI. The spectrum of their use is broad, and many of them can be used for educational and/or learning purposes.
  2. Many of these tools follow freemium models, although the free versions often come with limited features and capabilities.
  3. Text generation and writing assistants occupy a significant niche, and new tools in this area emerge frequently.
  4. Support for automatic translation of texts is highly developed, with significant implications for scientific writing as well as for language teaching and learning.
  5. A revolution in the concept of search engines, as we know them today, is anticipated. Syntax-based search services will be complemented by searches and report generation through the integration of generative AI techniques into search engines, as has already happened with Microsoft Bing, and Google’s response is expected to become accessible in all geographic areas. Major tech companies are redirecting their business models towards this sector. ChatGPT itself, through plug-ins, can base its responses on content obtained from the web, not just on its knowledge base.
  6. Many services are available for transforming text into images, and high-quality results are already being achieved. However, the transformation of text into other media, such as video or 3D objects, has promising proposals but is still embryonic and has not yet materialised into reliable applications accessible to end users.
  7. The advancement in language models is resulting in the emergence of applications that can transform text into multimodal elements.
  8. Integrating language models into everyday applications is a significant focus of office ecosystems aiming to enhance productivity. The integration of PaLM into Google Docs (Google Workspace) (Kurian, 2023) and of GPT into Microsoft Office (Microsoft 365 Copilot) are notable examples of this trend.
  9. Despite the increasing number of promising products, detecting AI-generated texts is one area where success has so far been limited. Text detection tools exhibit significant limitations, as acknowledged by OpenAI regarding its AI Text Classifier, which is notably unreliable for texts shorter than 1,000 characters (written in English) (Kirchner et al., 2023). Efforts have been made to address this issue by incorporating model-specific signatures into generated texts or by applying watermarking techniques that imprint specific patterns on them. However, empirical and theoretical evidence has shown that detectors are unreliable in practical scenarios (Sadasivan et al., 2023): for a sufficiently advanced language model, even the best possible detector can only slightly outperform a random classifier. Therefore, caution must be exercised when using these tools to make decisions, such as evaluating academic work, as false positives can cause significant reputational damage to authors.

Rapid review: Generative AI in education

After gaining a comprehensive understanding of the tools, it is important to examine how the arrival of these technologies is being perceived within the educational community, specifically from an academic perspective. To achieve this, a rapid review (Grant & Booth, 2009) of the literature was conducted to assess what has been published on this particular topic, using systematic review methods to search for and critically evaluate existing research.

ChatGPT has also significantly impacted scientific production in the early stages of 2023. For example, as of April 1, 2023, there were a total of 194 articles mentioning ChatGPT on arXiv, with a primary focus on natural language processing but with research potential in other fields, including education (Liu et al., 2023). In the specific case of the conducted review, articles published and collected in Web of Science and Scopus were used as references (as of April 6, 2023). These databases are considered the most widely used and internationally accepted, as they include articles that have undergone rigorous review processes, which are not present in other databases, such as the aforementioned arXiv.

Definition of the review protocol

Concisely, the review protocol (García-Peñalvo, 2022) followed in the study is as follows:

  1. 1. Research questions:
    1. RQ1. What are the characteristics of the items in the final review corpus (year of publication, type of article, sources, authors, geographical distribution)?
    2. RQ2. In which disciplinary domains were the studies developed?
    3. RQ3. What are the contributions of generative AI to education?
    4. RQ4. What educational interventions are reflected or proposed in the studies?
  2. 2. Inclusion and exclusion criteria:
    1. IC1. The article must be published or accepted for publication and accessible in early access.
    2. IC2. The article should not be a note or a letter.
    3. IC3. The article must be written in English.
    4. IC4. The full text of the article must be accessible.
    5. IC5. The article is related to the use of generative AI in education.
    6. IC6. The article provides an educational experience or a reflection on the educational use of generative AI.
    7. IC7. The article is not a version of an article already included in the corpus.
    8. EC1. The article is neither published nor accepted for publication and accessible in early access.
    9. EC2. The article is a note or a letter.
    10. EC3. The article is not written in English.
    11. EC4. The full text of the article is not accessible.
    12. EC5. The article is not related to the use of generative AI in education.
    13. EC6. The article does not provide an educational experience or a reflection on the educational use of generative AI.
    14. EC7. The article is a version of another article already included in the corpus and therefore does not add anything new to the article already included.
  3. Data sources:
    1. Web of Science and Scopus.
  4. Canonical search equation:
    1. Education AND (“generative artificial intelligence” OR “generative AI” OR “chatgpt”).
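For illustration, the screening stage implied by this protocol can be sketched in code. This is a minimal sketch of our own, not the authors' actual tooling; the record fields (`published`, `type`, `language`, `full_text_available`, `title`, `abstract`) and the sample records are hypothetical:

```python
# Hypothetical sketch of the screening stage: apply the canonical search
# equation and the inclusion criteria IC1-IC5 to candidate records.
# Field names and sample records are illustrative, not the authors' data.

QUERY_TERMS = ("generative artificial intelligence", "generative ai", "chatgpt")

def matches_search_equation(text: str) -> bool:
    """Education AND ("generative artificial intelligence" OR "generative AI" OR "chatgpt")."""
    t = text.lower()
    return "education" in t and any(term in t for term in QUERY_TERMS)

def passes_screening(record: dict) -> bool:
    """A record failing any inclusion criterion is excluded (EC1-EC5 mirror IC1-IC5)."""
    return (
        record.get("published", False)                    # IC1: published or in early access
        and record.get("type") not in {"note", "letter"}  # IC2: not a note or letter
        and record.get("language") == "en"                # IC3: written in English
        and record.get("full_text_available", False)      # IC4: full text accessible
        and matches_search_equation(                      # IC5: topic matches the query
            record.get("title", "") + " " + record.get("abstract", "")
        )
    )

records = [
    {"published": True, "type": "article", "language": "en",
     "full_text_available": True,
     "title": "ChatGPT in higher education",
     "abstract": "Effects of generative AI on education."},
    {"published": True, "type": "letter", "language": "en",
     "full_text_available": True,
     "title": "A letter on ChatGPT and education", "abstract": ""},
]

corpus = [r for r in records if passes_screening(r)]  # only the first record survives
```

IC6 and IC7 (educational substance and deduplication) require human judgement and are deliberately left out of this automated filter.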

Results of the review

The PRISMA diagram (Page et al., 2021) depicted in Figure 1 summarises the search and filtering process of the obtained results, and Table 2 presents the studies included in the final corpus of the review.

Figure 1
Workflow of the rapid review
Source: own elaboration.

RQ1. What are the characteristics of the items in the final review corpus (year of publication, type of article, sources, authors, geographical distribution)?

The selected articles are predominantly concentrated in 2023 (approximately 84%), which clearly indicates the novelty of applying generative AI to education. Journal articles predominate, at 71% (87% if editorial papers published in journals are also considered). Editorial articles provide a reflective and critical perspective from the editorial teams of scientific journals on the emerging nature of generative AI. There is a wide dispersion in the sources where the articles were published: among the 26 represented sources, only one has three articles (Journal of University Teaching and Learning Practice), followed by three sources with two selected articles each (AAAI Conference on Artificial Intelligence, Computers and Education: Artificial Intelligence, and PLOS Digital Health). Likewise, there is no significant concentration of articles among a few authors: of the 169 authors who contributed to the selected articles, only two participated in three articles (Safinah Ali and Cynthia Breazeal), and one contributed to two (Daniella DiPaola). Regarding geographical distribution, the articles come predominantly from the United States, represented in 15 articles, followed by Australia, represented in eight.

RQ2. In which disciplinary domains were the studies developed?

The field of education, understood as a cross-disciplinary field, is the most represented in the corpus of the review, with nine articles (29%), followed by medicine, with eight articles (25.8%). The organisation into disciplinary fields can be seen in Table 3.

Table 3
Disciplines on which the studies included in the final review corpus focus

Discipline Num. of papers Papers
Crafts 1 [31]
Biology 1 [27]
Science 1 [5]
Education 9 [6] [7] [8] [10] [11] [19] [23] [28] [30]
Nursing 1 [4]
Writing 2 [14] [25]
Informatics 1 [9]
Mathematics 1 [15]
Medicine 8 [3] [12] [16] [17] [18] [21] [22] [26]
Dentistry 1 [29]
Journalism 1 [24]
Pre-university 3 [1] [2] [20]
Tourism 1 [13]
Source: own elaboration.

RQ3. What are the contributions of generative AI to education?

The most widespread perception about generative AI-based technologies in education is a mix of enthusiasm and apprehension. This stance is well reflected in the four paradoxes expressed by Lim et al. (2023) regarding generative AI:

  1. Generative AI is a “friend” yet a “foe”.
  2. Generative AI is “capable” yet “dependent”.
  3. Generative AI is “accessible” yet “restrictive”.
  4. Generative AI gets even more “popular” when “banned”.

The benefits, risks, and challenges identified in the papers comprising the review corpus are presented below.

Benefits and potential uses of generative AI in education:

  1. B1. Access to a large amount of relevant information in real time to later process, summarise, and present as if it were a human [4] [6] [16] [24].
  2. B2. Generation of extensive sets of educational content (cases, units, rubrics, questionnaires, etc.) that can preserve privacy in critical cases, such as in the domain of medical education [3] [5] [16] [17] [24] [26].
  3. B3. Supportive tools for learning new concepts compared to traditional media, including the ability to summarise or explain complex concepts [5] [16].
  4. B4. Understanding context, enabling interaction (dialogue) with these tools, which can help obtain self-directed answers to questions and learn more effectively about various topics [4] [12].
  5. B5. Enhancing critical thinking and creativity by allowing students to receive feedback on their assignments and question their beliefs [7] [26] [31].
  6. B6. Supporting students in repetitive tasks, allowing them to focus on the essence of the tasks and be more critical in their learning [7] [13].
  7. B7. Facilitating the initial development of ideas and reflection upon them [7] [13].
  8. B8. Providing an asynchronous communication platform, which increases engagement and facilitates student collaboration [6].
  9. B9. Allowing for personalised learning [6] [12] [16] [18] [26].
  10. B10. Helping students with writing difficulties and, in general, anyone to have more control over their writing skills [5] [7].
  11. B11. Becoming virtual learning assistants [7] [10] [18].
  12. B12. Serving as tools for continuous and informal learning [4].
  13. B13. Facilitating the development of language skills [6] [16].
  14. B14. Improving teachers’ productivity by reducing the time spent answering the same student questions, grading written assignments, etc., allowing them to focus on higher-level tasks, such as providing feedback and support to students [4] [6] [8] [13] [16] [18].
  15. B15. Supporting automated assessment and other innovations in evaluation [6] [16].

Risks of generative AI in education:

  1. R1. Rapid and superficial learning [8].
  2. R2. Hindering students from developing critical and independent thinking skills, which could have long-term repercussions [4] [8] [9] [13] [18].
  3. R3. Potential hindrance to the development of creativity [8] [13].
  4. R4. Providing incomplete information, leading to the misinterpretation of a concept [4] [7] [10] [24] [26].
  5. R5. Offering seemingly plausible but incoherent answers, often producing “fabricated” results known as hallucinations [14] [16] [18] [26] [27] [31].
  6. R6. Limitations for interpreting quantitative information embedded in a text [15].
  7. R7. In many cases, no information being provided about the authorship or the source of evidence supporting the obtained results, which also constitutes a violation of copyright [5] [6] [10] [11] [14] [26] [31].
  8. R8. Possible adverse effects on developing interpersonal skills, such as communication and interaction between students and teachers and among peers being compromised [4] [18].
  9. R9. Dishonest use of these tools, which occurs when the output generated is used without proper attribution, which can be considered plagiarism [4] [6] [8] [10] [11] [13] [16] [18] [25] [26] [29].
  10. R10. The differential access and usage of these tools, particularly premium paid versions, between individuals who can afford them and those who cannot, which is a potential cause of equity issues [4] [6] [13] [19].
  11. R11. The invasion of data privacy and confidentiality [13] [18].
  12. R12. An increase in racial and socio-economic prejudices due to data biases in the training of these applications [13] [18] [29] [31].
  13. R13. Potential negative environmental impact due to the high processing power required to obtain the results [5].
  14. R14. Cybersecurity problems [26].

Challenges that generative AI opens up for education systems:

  1. C1. Adaptation of all actors involved to the digital ecosystem derived from generative AI, which is continuously evolving [8] [19].
  2. C2. Teacher training in generative AI competencies [4] [5] [6] [10] [11] [30].
  3. C3. Generation of communities of practice to share experiences on the educational use of AI [4].
  4. C4. Development of students’ competencies in generative AI, with an emphasis on fostering critical thinking skills to understand its potential and limitations and to make ethical use of these technologies [4] [10] [18] [30].
  5. C5. Reviewing, updating, and innovating curriculum content and teaching methods that may have become outdated, along with addressing the resistance to change, opening up more opportunities for students’ reflection [10] [11] [13] [14] [18] [22] [24] [26] [28] [29] [30].
  6. C6. Exploration of alternatives and/or complementarities in assessment methods, such as incorporating oral assessments as a complement to written assignments, utilising open-ended evaluations to encourage originality and creativity, providing visual diagrams or graphics, and emphasising the importance of the learning process rather than solely focusing on the final product [4] [6] [7] [8] [10] [18].
  7. C7. Development of ethical codes and the establishment of general guidelines regarding generative AI, ensuring responsible and ethical practices in its implementation [8] [19] [21] [26].

RQ4. What educational interventions are reflected or proposed in the studies?

The selected works encompass various educational interventions implemented or proposed as measures to address the widespread adoption of generative AI technologies. These interventions are grouped into work in the classroom setting and the formulation of educational strategies and policies.

Work in the classroom

The initial works (from a temporal perspective) that apply generative AI techniques in the classroom refer to the design of activities to bring generative adversarial networks (GANs) (Goodfellow et al., 2020; Karras et al., 2021) into pre-university studies. Ali, DiPaola, and Breazeal (2021) designed educational activities to help students understand the concepts of generators and discriminators concerning GANs. The generator aims to create something new, and the discriminator must classify it as real or fake. These activities aimed to provide students with a better understanding of how GANs function and their role in generative AI. Ali, DiPaola, Lee, et al. (2021) employed educational activities with high school students to raise awareness about the ethical aspects of AI using tools for deepfake generation. The purpose was to engage students in critical discussions and reflection regarding the ethical implications of using AI technology, specifically in the context of deepfakes. These activities aimed to foster a deeper understanding of AI generative tools’ potential risks and ethical considerations. Pataranutaporn et al. (2022) investigated the effects of artificially generated virtual instructors on learning. In their study, they explored the correlation between students’ affinity for the virtual instructor and their motivation levels, although no significant impact on test scores was observed. Lyu et al. (2022) introduced AI at the pre-university level through interactive tools, Plato’s allegory of the cave, and artistic explorations. However, instead of GANs, the authors utilised variational autoencoders (Kingma & Welling, 2022) as the underlying technique.
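To make the generator/discriminator interplay described above concrete, the following toy sketch (our own illustration, not code from the reviewed studies; all names and constants are hypothetical) pits a one-dimensional generator against a simple threshold discriminator:

```python
# Toy sketch of the GAN idea: the "real" data are numbers near 5, the
# discriminator accepts values close to that centre, and the generator
# hill-climbs its output distribution towards whatever currently fools
# the discriminator. All names and numbers are hypothetical.

import random

random.seed(0)

REAL_MEAN = 5.0  # centre of the "real" data distribution

def discriminator(x: float, threshold: float = 1.5) -> bool:
    """Classify a sample as real (True) or fake (False)."""
    return abs(x - REAL_MEAN) < threshold

def generator(mean: float, spread: float = 0.5) -> float:
    """Produce a new sample from the generator's current distribution."""
    return random.gauss(mean, spread)

def fool_rate(mean: float, n: int = 30) -> float:
    """Fraction of generated samples the discriminator labels as real."""
    return sum(discriminator(generator(mean)) for _ in range(n)) / n

# Crude adversarial loop: the generator keeps a candidate mean whenever it
# fools the discriminator at least as often as its current mean does.
gen_mean = 0.0
for _ in range(1000):
    candidate = gen_mean + random.gauss(0.0, 0.3)
    if fool_rate(candidate) >= fool_rate(gen_mean):
        gen_mean = candidate
```

In an actual GAN both components are neural networks trained jointly by gradient descent (Goodfellow et al., 2020); the hill-climbing loop here only mimics the adversarial pressure that the classroom activities aim to convey.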

GANs also have practical applications in training future medical professionals by quickly generating training materials and simulations that can be used as educational resources (Arora & Arora, 2022). These synthetic cases would increase the number of extreme cases with which to create learning scenarios while protecting patient identities. An additional benefit is facilitating resource sharing (cases) between institutions.

Dwivedi et al. (2023) proposed that educators and students explore together the applications and limits of generative AI, both ethical and capacity-wise, thus enabling this technology in unimaginable ways. To do so, they suggested using the IT Mindfulness framework (Thatcher et al., 2018), which includes four elements: 1) alertness to distinction, 2) awareness of multiple perspectives, 3) openness to novelty, and 4) orientation in the present.

Šlapeta (2023) reflected on how to start using generative AI tools in the classroom. He begins by establishing their limits, with the idea that they are models, not sources of wisdom or absolute truth. Any assistant helps to organise thoughts and tasks into meaningful chunks. However, anyone who wants to incorporate this technology into a daily routine must learn to use it so that the user becomes the expert and, as such, is responsible for verifying the results provided by the assistant.

Defining education strategies and policies

The use of generative AI tools in educational institutions should not be prohibited (Choi et al., 2023; García-Peñalvo, 2023; Iskender, 2023; Lim et al., 2023; Perkins, 2023; Tlili et al., 2023). Prohibition is the best indication that educational institutions are not yet prepared for the natural incorporation of these technologies. Moreover, these tools are already available to students outside educational institutions and will be commonplace in their workplaces after they have completed their studies (Masters, 2023). Therefore, teachers need to feel supported by their institutions’ administrations and clearly understand the expectations associated with these tools throughout the teaching and learning process, with particular attention to assessment practices (Cooper, 2023).

To curb fraudulent or unethical use, which is not a new issue but has become more prevalent with the advent of these technologies, teachers must enhance their role in raising student awareness about the importance of academic honesty, the value of critical thinking, and the consequences of dishonest practices (Choi et al., 2023; Dwivedi et al., 2023). They should assume a more significant leadership role (Crawford et al., 2023) and shift the narrative towards distributed responsibility when addressing academic misconduct. In other words, the government, teachers, and students must share responsibility (Lim et al., 2023). However, promoting good practices should not mean relinquishing the need for academic fraud detection, as undetected cheating represents a form of inequality in the present and a lack of preparedness for the future. Additionally, given the global nature of these tools, international coordination is necessary to maximise their benefits (Dwivedi et al., 2023).

REFLECTION

AI is a set of information processing tools representing a further step in the advancements made in this field over the past century. AI enables the processing of information in a helpful way for humans due to its speed and alignment with objectives. What is most remarkable about recent developments, and likely has the most significant impact on education, is a subset of AI known as generative models. The commercial and collaborative strategies surrounding these models have allowed millions worldwide to interact with them, making people globally aware of the possibilities they offer and the potential risks involved.

Although AI systems have been dedicated to various tasks, such as image or video creation and manipulation, those offering natural language processing capabilities have had the greatest impact. This is the first time we have witnessed such scale and quality in this area. We are no longer impressed by machines’ ability to handle numbers, which was controversial in the past. We have become accustomed to robots or drones that many now have in their homes. However, when it comes to language processing, something considered part of human beings’ essence, concerns arise. It is important to note that a machine’s handling of language like a human only makes it human if we perceive it as such.

A similar line of thought has emerged regarding the suffering and feelings of love in other living beings. Qualities exclusively attributed to human beings for centuries are now acknowledged to exist in other living beings as well. In a further display of human inconsistency, we may wonder whether language-processing machines have feelings, as has been suggested in some cases, while we continue to doubt whether other living beings possess them. The difference between these two cases is that machines handle the same language as humans, allowing people to ask them questions directly and receive responses. In contrast, other living beings use different forms of communication with which most humans cannot interact. It does not seem this should be the criterion that makes the difference.

Not everything imaginable with AI is currently a reality. Each emerging option must be carefully analysed and placed within the appropriate utility framework. There is not a single AI; many different models are trained in different ways to perform specific tasks. Furthermore, their outputs are used by other types of software that generate functions no longer exclusive to AI. If resistance can jeopardise the use of tools as powerful as those we are discussing, the questions must align with the facts so that the answers can provide real options.

There is no relevant example of access to a technology being successfully prohibited. Access to the Internet or smart mobile devices has also sparked numerous debates, with clear proposals to limit access by specific individuals based on age or condition. However, none of these initiatives has been successful. This is mainly because the focus is more on limiting access than on users’ ability to filter information and utilise the available tools. This is a phenomenon that is not new either, as unrestricted access to information has always generated debates.

In this article, we do not intend to answer the question of whether tools such as ChatGPT are intelligent. What seems to be beyond doubt is that they perform functions that, when performed by a human being, reflect the intelligence of the subject using them. We have no problem saying that similar models designed to play chess are extremely intelligent (to the point that humans do not compete against them in tournaments). Why do we now have these reservations? Perhaps it is only because the other tasks are ones that not all humans perform (playing chess) or ones that are of little interest. However, language use concerns everyone; we all believe we understand what we are talking about (never better said).

While organisations and institutions debate the stance they should take or recommend regarding using these AI technologies in education, technology will advance enough to render those resolutions meaningless once approved. It is crucial to remember this to avoid dedicating time to rules or recommendations that are impossible to enforce.

The problem for the academic world is that machines can easily and quite accurately (perhaps totally correctly in a few months) perform tasks that have been assigned to students as a mechanism to determine if they have achieved the objectives of a subject (i.e., for their assessment). The initial reaction to this situation is to prevent them from using these tools. However, the question now becomes more interesting: If a machine can already perform a task, what other things could humans begin to do with the assistance of these machines? We faced similar questions during the 18th and 19th centuries with the Industrial Revolution, when machines replaced manual labour in numerous factories. Now, we can do the same things much faster, and even better, which enables us to consider doing different things. Whether this leads to the betterment of our societies or not will be our responsibility.

The problems we have presented here are not unique to the field of education. They also affect other sectors, such as law, medicine, engineering, and programming. They will impact any task that requires the rapid handling of large amounts of information in databases or, as a great novelty, in texts. Education should not be exempt from these debates, and we must pay attention to the options these tools give us to maximise the learning possibilities for both teachers and students. Perhaps we need to make certain changes in the curricula of our degree programs by incorporating the learning of these tools’ usage in each field. However, what is certain is that we will have to make substantial changes in how we teach and what we expect our students to do.

Finally, in writing this article, we have realised our main challenge: the speed at which we will have to analyse and incorporate these innovations. In the time it took us to write this and address the reviewers’ suggestions, more and more options have emerged. This will be the true challenge from now on.

REFERENCES

Adiwardana, D. (2020, January 28). Towards a Conversational Agent that Can Chat About…Anything. Google. http://bit.ly/3YAYGpm

Agarwal, G. (2023). AI Tool Master List. https://doi.org/10.1007/s43681-022-00147-7

Ali, S., DiPaola, D., & Breazeal, C. (2021). What are GANs?: Introducing Generative Adversarial Networks to Middle School Students. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (pp. 15472-15479). https://doi.org/10.1609/aaai.v35i17.17821

Ali, S., DiPaola, D., Lee, I., Sindato, V., Kim, G., Blumofe, R., & Breazeal, C. (2021). Children as creators, thinkers and citizens in an AI-driven future. Computers and Education: Artificial Intelligence, 2, Article 100040. https://doi.org/10.1016/j.caeai.2021.100040

Alier-Forment, M., & Llorens-Largo, F. (2023). EP-31 Las Alucinaciones de ChatGPT con Faraón Llorens. In Cabalga el Cometa. https://bit.ly/3ZCNBVT

Alonso, C. (2023, April 19). ¡Ojo con ChatGPT, que es un charlatán mentirosillo! El futuro está por hackear. https://bit.ly/44dEbCk

Arora, A., & Arora, A. (2022). Generative adversarial networks and synthetic patient data: current challenges and future perspectives. Future Healthcare Journal, 9(2), 190-193. https://doi.org/10.7861/fhj.2022-0013

Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M., Nerantzi, C., Honeychurch, S., Bali, M., Dron, J., Mir, K., Stewart, B., Costello, E., Mason, J., Stracke, C. M., Romero-Hall, E., Koutropoulos, A., … Jandrić, P. (2023). Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance Education, 18(1), 53-130. https://doi.org/10.5281/zenodo.7636568

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. arXiv. https://doi.org/10.48550/arXiv.2005.14165

Chng, E., Tan, A. L., & Tan, S. C. (2023). Examining the Use of Emerging Technologies in Schools: a Review of Artificial Intelligence and Immersive Technologies in STEM Education. Journal for STEM Education Research, In Press. https://doi.org/10.1007/s41979-023-00092-y

Choi, E. P. H., Lee, J. J., Ho, M. H., Kwok, J. Y. Y., & Lok, K. Y. W. (2023). Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Education Today, 125, Article 105796. https://doi.org/10.1016/j.nedt.2023.105796

Chomsky, N., Roberts, I., & Watumull, J. (2023, March 8). The False Promise of ChatGPT. The New York Times. http://bit.ly/3GycXfx

Coeckelbergh, M. (2023). La filosofía política de la inteligencia artificial. Una introducción. Cátedra.

Collins, E., & Ghahramani, Z. (2021, May 18). LaMDA: our breakthrough conversation technology. Google. http://bit.ly/3I5udIZ

Cooper, G. (2023). Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence. Journal of Science Education and Technology, 32, 444-452. https://doi.org/10.1007/s10956-023-10039-y

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, In Press. https://doi.org/10.1080/14703297.2023.2190148

Crawford, J., Cowling, M., & Allen, K. A. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching and Learning Practice, 20(3). https://doi.org/10.53761/1.20.3.02

Crespo Artiaga, D., Ruiz Martínez, P. M., Claver Iborra, J. M., Fernández Martínez, A., & Llorens Largo, F. (Eds.). (2023). UNIVERSITIC 2022. Análisis de la madurez digital de las universidades españolas en 2022. Crue Universidades Españolas. https://bit.ly/3n60tp3

Dennett, D. (2017). De las bacterias a Bach. La evolución de la mente. Pasado & Presente.

Devlin, J., & Chang, M.-W. (2018, November 2). Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing. Google. http://bit.ly/3Ebwrpi

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint, Article 1810.04805. https://doi.org/10.48550/arXiv.1810.04805

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, Article 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Ebrahimi, Y. (2023). 1000 AI collection tools. http://bit.ly/3YOjkSK

European University Association (2023). Artificial intelligence tools and their responsible use in higher education learning and teaching. European University Association. https://bit.ly/3Hq2ROf

Finnie-Ansley, J., Denny, P., Luxton-Reilly, A., Santos, E. A., Prather, J., & Becker, B. A. (2023). My AI Wants to Know if This Will Be on the Exam: Testing OpenAI's Codex on CS2 Programming Exercises. In ACE '23: Proceedings of the 25th Australasian Computing Education Conference (pp. 97-104). ACM. https://doi.org/10.1145/3576123.3576134

Flores-Vivar, J. M., & García-Peñalvo, F. J. (2023a). La vida algorítmica de la educación: Herramientas y sistemas de inteligencia artificial para el aprendizaje en línea. In G. Bonales Daimiel & J. Sierra Sánchez (Eds.), Desafíos y retos de las redes sociales en el ecosistema de la comunicación, (Vol. 1, pp. 109-121). McGraw-Hill.

Flores-Vivar, J. M., & García-Peñalvo, F. J. (2023b). Reflections on the ethics, potential, and challenges of artificial intelligence in the framework of quality education (SDG4). Comunicar, 31(74), 35-44. https://doi.org/10.3916/C74-2023-03

García-Peñalvo, F. J. (2022). Developing robust state-of-the-art reports: Systematic Literature Reviews. Education in the Knowledge Society, 23, Article e28600. https://doi.org/10.14201/eks.28600

García-Peñalvo, F. J. (2023). The perception of Artificial Intelligence in educational contexts after the launch of ChatGPT: Disruption or Panic? Education in the Knowledge Society, 24, Article e31279. https://doi.org/10.14201/eks.31279

García-Peñalvo, F. J., & Vázquez-Ingelmo, A. (2023). What do we mean by GenAI? A systematic mapping of the evolution, trends, and techniques involved in Generative AI. International Journal of Interactive Multimedia and Artificial Intelligence, In Press.

Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64-71. https://doi.org/10.1007/s11528-014-0822-x

Gašević, D., Siemens, G., & Sadiq, S. (2023). Empowering learners for the age of artificial intelligence. Computers and Education: Artificial Intelligence, 4, Article 100130. https://doi.org/10.1016/j.caeai.2023.100130

Gates, B. (2023, March 21). The Age of AI has begun. GatesNotes. http://bit.ly/3nZjFF4

Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R. A., & Chartash, D. (2023). How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Medical Education, 9, Article e45312. https://doi.org/10.2196/45312

Gobierno de España (2020). ENIA: Estrategia Nacional de Inteligencia Artificial. Ministerio de Asuntos Económicos y Transformación Digital. https://bit.ly/3oHHUb0

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2020). Generative adversarial networks. Commun. ACM, 63(11), 139-144. https://doi.org/10.1145/3422622

Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91-108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Gruetzemacher, R., & Paradice, D. (2022). Deep Transfer Learning & Beyond: Transformer Language Models in Information Systems Research. ACM Computing Surveys, 54(10s). https://doi.org/10.1145/3505245

Gubareva, R., & Lopes, R. P. (2020). Virtual Assistants for Learning: A Systematic Literature Review. In H. Chad Lane, S. Zvacek, & J. Uhomoibhi (Eds.), Proceedings of the 12th International Conference on Computer Supported Education (CSEDU 2020) (Online, May 2 - 4, 2020) (Vol. 1, pp. 97-103). SCITEPRESS. https://doi.org/10.5220/0009417600970103

Hazzan, O. (2023, January 23). ChatGPT in Computer Science Education. BLOG@ACM. http://bit.ly/3WYTxpv

Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., de las Casas, D., Hendricks, L. A., Welbl, J., Clark, A., Hennigan, T., Noland, E., Millican, K., Driessche, G. v. d., Damoc, B., Guy, A., Osindero, S., Simonyan, K., Elsen, E., … Sifre, L. (2022). Training Compute-Optimal Large Language Models. arXiv, Article arXiv:2203.15556v1. https://doi.org/10.48550/arXiv.2203.15556

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in Education: Towards a Community-Wide Framework. International Journal of Artificial Intelligence in Education, 32, 504-526. https://doi.org/10.1007/s40593-021-00239-1

Informatics Europe (2023). AI in Informatics Education (Position paper by Informatics Europe and the National Informatics Associations). Informatics Europe.

Iskender, A. (2023). Holy or Unholy? Interview with Open AI’s ChatGPT. European Journal of Tourism Research, 34, Article 3414. https://doi.org/10.54055/ejtr.v34i.3169

Johinke, R., Cummings, R., & Di Lauro, F. (2023). Reclaiming the technology of higher education for teaching digital writing in a post-pandemic world. Journal of University Teaching and Learning Practice, 20(2), Article 01. https://doi.org/10.53761/1.20.02.01

Karaali, G. (2023). Artificial Intelligence, Basic Skills, and Quantitative Literacy. Numeracy, 16(1), Article 9. https://doi.org/10.5038/1936-4660.16.1.1438

Karras, T., Laine, S., & Aila, T. (2021). A Style-Based Generator Architecture for Generative Adversarial Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(12), 4217-4228. https://doi.org/10.1109/TPAMI.2020.2970919

Khan, R. A., Jawaid, M., Khan, A. R., & Sajjad, M. (2023). ChatGPT-Reshaping medical education and clinical management. Pakistan Journal of Medical Sciences, 39(2), 605-607. https://doi.org/10.12669/pjms.39.2.7653

Kingma, D. P., & Welling, M. (2022). Auto-Encoding Variational Bayes. arXiv, Article arXiv:1312.6114v11. https://doi.org/10.48550/arXiv.1312.6114

Kirchner, J. H., Ahmad, L., Aaronson, S., & Leike, J. (2023, January 31). New AI classifier for indicating AI-written text. OpenAI. https://bit.ly/3rbXJYI

Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepano, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), Article e0000198. https://doi.org/10.1371/journal.pdig.0000198

Kurian, T. (2023, March 14). The next generation of AI for developers and Google Workspace. The Keyword. http://bit.ly/3mUo0sx

Lee, H. (2023). The rise of ChatGPT: Exploring its potential in medical education. Anatomical Sciences Education, In Press. https://doi.org/10.1002/ase.2270

Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. International Journal of Management Education, 21(2), Article 100790. https://doi.org/10.1016/j.ijme.2023.100790

Liu, Y., Han, T., Ma, S., Zhang, J., Yang, Y., Tian, J., He, H., Li, A., He, M., Liu, Z., Wu, Z., Zhu, D., Li, X., Qiang, N., Shen, D., Liu, T., & Ge, B. (2023). Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models. arXiv, Article arXiv:2304.01852v3. https://doi.org/10.48550/arXiv.2304.01852

Llorens-Largo, F. (2019, February 13). Las tecnologías en la educación: características deseables, efectos perversos [Technologies in education: Desirable characteristics, perverse effects]. Universídad. https://bit.ly/3SxO72D

Lyu, Z., Ali, S., & Breazeal, C. (2022). Introducing Variational Autoencoders to High School Students. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (pp. 12801-12809). https://doi.org/10.1609/aaai.v36i11.21559

Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Ngo, H., Niebles, J. C., Parli, V., Shoham, Y., Wald, R., Clark, J., & Perrault, R. (2023). The AI Index 2023 Annual Report. http://bit.ly/3KBVCFa

Masters, K. (2023). Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Medical Teacher, 45(6), 574-584. https://doi.org/10.1080/0142159X.2023.2186203

Mbakwe, A. B., Lourentzou, I., Celi, L. A., Mechanic, O. J., & Dagan, A. (2023). ChatGPT passing USMLE shines a spotlight on the flaws of medical education. PLOS digital health, 2(2), Article e0000205. https://doi.org/10.1371/journal.pdig.0000205

Meyer, B. (2022, December 23). What Do ChatGPT and AI-based Automatic Program Generation Mean for the Future of Software. BLOG@CACM. https://bit.ly/3LyAJLj

Muscanell, N., & Robert, J. (2023). EDUCAUSE QuickPoll Results: Did ChatGPT Write This Report? EDUCAUSE Review. https://bit.ly/44o0uWj

Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38-51. https://doi.org/10.1080/17439884.2022.2095568

Neubauer, A. C. (2021). The future of intelligence research in the coming age of artificial intelligence – With a special consideration of the philosophical movements of trans- and posthumanism. Intelligence, 87, Article 101563. https://doi.org/10.1016/j.intell.2021.101563

Ng, D. T. K., Lee, M., Tan, R. J. Y., Hu, X., Downie, J. S., & Chu, S. K. W. (2022). A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies, In Press. https://doi.org/10.1007/s10639-022-11491-w

OpenAI. (2023). GPT-4 Technical Report. arXiv, Article arXiv:2303.08774v3. https://doi.org/10.48550/arXiv.2303.08774

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ, 372, Article n71. https://doi.org/10.1136/bmj.n71

Pataranutaporn, P., Leong, J., Danry, V., Lawson, A. P., Maes, P., & Sra, M. (2022). AI-Generated Virtual Instructors Based on Liked or Admired People Can Improve Motivation and Foster Positive Emotions for Learning. In Proceedings of 2022 IEEE Frontiers in Education Conference (FIE) (Uppsala, Sweden, 08-11 October 2022). IEEE. https://doi.org/10.1109/FIE56618.2022.9962478

Pavlik, J. V. (2023). Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education. Journalism and Mass Communication Educator, 78(1), 84-93. https://doi.org/10.1177/10776958221149577

Perkins, M. (2023). Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching and Learning Practice, 20(2), Article 07. https://doi.org/10.53761/1.20.02.07

Pichai, S. (2023, February 6). An important next step on our AI journey. Google. http://bit.ly/3YZj9E2

Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140), 1-67.

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 1-22. https://doi.org/10.37074/jalt.2023.6.1.9

Sabzalieva, E., & Valentini, A. (2023). ChatGPT e inteligencia artificial en la educación superior: Guía de inicio rápido [ChatGPT and artificial intelligence in higher education: Quick start guide] (ED/HE/IESALC/IP/2023/12). UNESCO and UNESCO International Institute for Higher Education in Latin America and the Caribbean. https://bit.ly/3oeYm2f

Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-Generated Text be Reliably Detected? arXiv, Article arXiv:2303.11156v1. https://doi.org/10.48550/arXiv.2303.11156

Sallam, M. (2023). ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare, 11(6), Article 887. https://doi.org/10.3390/healthcare11060887

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424. https://doi.org/10.1017/S0140525X00005756

Sivasubramanian, S. (2023, April 13). Announcing New Tools for Building with Generative AI on AWS. AWS. https://bit.ly/3mziFXM

Šlapeta, J. (2023). Are ChatGPT and other pretrained language models good parasitologists? Trends in parasitology, 39(5), 314-316. https://doi.org/10.1016/j.pt.2023.02.006

Taori, R., Gulrajani, I., Zhang, T., Dubois, Y., Li, X., Guestrin, C., Liang, P., & Hashimoto, T. B. (2023). Alpaca: A Strong, Replicable Instruction-Following Model. Stanford University. https://bit.ly/444TrRx

Thatcher, J., Wright, R. T., Sun, H., Zagenczyk, T. J., & Klein, R. (2018). Mindfulness in Information Technology Use: Definitions, Distinctions, and a New Measure. MIS Quarterly, 42(3), 831-847. https://doi.org/10.25300/MISQ/2018/11881

Thoppilan, R., Freitas, D. D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H.-T., Jin, A., Bos, T., Baker, L., Du, Y., Li, Y., Lee, H., Zheng, H. S., Ghafouri, A., Menegali, M., Huang, Y., Krikun, M., Lepikhin, D., Qin, J., … Le, Q. (2022). LaMDA: Language Models for Dialog Applications. arXiv, Article arXiv:2201.08239v3. https://doi.org/10.48550/arXiv.2201.08239

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879

Thurzo, A., Strunga, M., Urban, R., Surovková, J., & Afrashtehfar, K. I. (2023). Impact of Artificial Intelligence on Dental Education: A Review and Guide for Curriculum Update. Education Sciences, 13(2), Article 150. https://doi.org/10.3390/educsci13020150

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), Article 15. https://doi.org/10.1186/s40561-023-00237-x

Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., & Lample, G. (2023). LLaMA: Open and Efficient Foundation Language Models. arXiv, Article arXiv:2302.13971v1. https://doi.org/10.48550/arXiv.2302.13971

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. https://doi.org/10.1093/mind/LIX.236.433

UNESCO. (2019). Beijing Consensus on Artificial Intelligence and Education. International Conference on Artificial Intelligence and Education, Planning Education in the AI Era: Lead the Leap, Beijing, China. https://bit.ly/3n7wBIK

UNESCO. (2021). Inteligencia artificial y educación: Guía para las personas a cargo de formular políticas [Artificial intelligence and education: Guidance for policy-makers]. UNESCO. https://bit.ly/3Hl93Hj

UNESCO. (2022). Recomendación sobre la ética de la inteligencia artificial [Recommendation on the ethics of artificial intelligence]. UNESCO. https://bit.ly/3nc3Yu1

van der Zant, T., Kouw, M., & Schomaker, L. (2013). Generative artificial intelligence. In V. C. Müller (Ed.), Philosophy and Theory of Artificial Intelligence (pp. 107-120). Springer-Verlag. https://doi.org/10.1007/978-3-642-31674-6_8

Vartiainen, H., & Tedre, M. (2023). Using artificial intelligence in craft education: crafting with text-to-image generative models. Digital Creativity, 34(1), 1-21. https://doi.org/10.1080/14626268.2023.2174557

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA (pp. 5998-6008).

Vázquez-Ingelmo, A., García-Peñalvo, F. J., & Therón, R. (2021). Towards a Technological Ecosystem to Provide Information Dashboards as a Service: A Dynamic Proposal for Supplying Dashboards Adapted to Specific Scenarios. Applied Sciences, 11(7), Article 3249. https://doi.org/10.3390/app11073249

Wang, T., & Cheng, E. C. K. (2021). An investigation of barriers to Hong Kong K-12 schools incorporating Artificial Intelligence in education. Computers and Education: Artificial Intelligence, 2, Article 100031. https://doi.org/10.1016/j.caeai.2021.100031

Yang, J., Jin, H., Tang, R., Han, X., Feng, Q., Jiang, H., Yin, B., & Hu, X. (2023). Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond. arXiv, Article arXiv:2304.13712v2. https://doi.org/10.48550/arXiv.2304.13712

Yilmaz, R., Yurdugül, H., Karaoğlan Yilmaz, F. G., Şahi̇n, M., Sulak, S., Aydin, F., Tepgeç, M., Müftüoğlu, C. T., & Ömer, O. (2022). Smart MOOC integrated with intelligent tutoring: A system architecture and framework model proposal. Computers and Education: Artificial Intelligence, 3, Article 100092. https://doi.org/10.1016/j.caeai.2022.100092

Zhang, L., Basham, J. D., & Yang, S. (2020). Understanding the implementation of personalized learning: A research synthesis. Educational Research Review, 31, Article 100339. https://doi.org/10.1016/j.edurev.2020.100339

Zhang, R., Han, J., Liu, C., Gao, P., Zhou, A., Hu, X., Yan, S., Lu, P., Li, H., & Qiao, Y. (2023). LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention. arXiv, Article arXiv:2303.16199v2. https://doi.org/10.48550/arXiv.2303.16199

Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., … Wen, J.-R. (2023). A Survey of Large Language Models. arXiv, Article arXiv:2303.18223v10. https://doi.org/10.48550/arXiv.2303.18223

Zhou, C., Liu, P., Xu, P., Iyer, S., Sun, J., Mao, Y., Ma, X., Efrat, A., Yu, P., Yu, L., Zhang, S., Ghosh, G., Lewis, M., Zettlemoyer, L., & Levy, O. (2023). LIMA: Less Is More for Alignment. arXiv, Article arXiv:2305.11206v1. https://doi.org/10.48550/arXiv.2305.11206


Date of reception: 01 June 2023
Date of acceptance: 01 July 2023
Date of publication in OnlineFirst: 20 July 2023
Date of publication: 01 January 2024