Generative AI Literature Review

 

 

This paper presents a critical analysis of generative Artificial Intelligence (AI) detection tools in higher education assessments. The rapid advancement and widespread adoption of generative AI, particularly in education, necessitate a reevaluation of traditional academic integrity mechanisms. We explore the effectiveness, vulnerabilities, and ethical implications of AI detection tools in the context of preserving academic integrity. Our study synthesises insights from various case studies, newspaper articles, and student testimonies to scrutinise the practical and philosophical challenges associated with AI detection. We argue that reliance on detection mechanisms is misaligned with an educational landscape in which AI plays an increasingly widespread role. This paper advocates for a strategic shift towards robust assessment methods and educational policies that embrace generative AI usage while ensuring academic integrity and authenticity in assessments.

LINK TO ARTICLE

In the rapidly evolving landscape of education, the pivotal axis around which transformation revolves is human-AI interaction. To that end, this paper adopts a data mining and analytic approach to understand what the related literature tells us about the trends and patterns of generative AI research in educational praxis. This systematic exploration spotlights the following research themes: interaction and communication with generative AI-powered chatbots; the impact of LLMs and generative AI on teaching and learning; conversational educational agents and their opportunities, challenges, and implications; leveraging generative AI to enhance social and cognitive learning processes; promoting AI literacy to unleash future opportunities; harnessing generative AI to expand academic capabilities; and, lastly, augmenting educational experiences through human-AI interaction. Beyond the identified research themes and patterns, this paper argues that emotional intelligence, AI literacy, and prompt engineering are trending research topics that require further exploration. It is in this praxis that emotional intelligence emerges as a pivotal attribute, as AI technologies often struggle to comprehend and respond to nuanced emotional cues. Generative AI literacy then takes center stage, becoming an indispensable asset in an era permeated with AI technologies and equipping students with the tools to critically engage with AI systems, thereby ensuring they become active, discerning users of these powerful tools. Concurrently, prompt engineering, the art of crafting queries that yield precise and valuable responses from AI systems, empowers both educators and students to maximize the utility of AI-driven educational resources.

LINK TO ARTICLE

The software development industry is in the midst of another disruptive paradigm change: the adoption of generative AI (GAI) assistants for programming. Whilst AI is already used in various areas of software engineering [1], GAI technologies, such as GitHub Copilot and ChatGPT, have ignited people's imaginations (and fears [2]). It is unclear how the industry will adapt, but the moves by large software companies, such as Microsoft (GitHub, Bing) and Google (Bard), to integrate these technologies are a clear indication of intent and direction. We performed exploratory interviews with industry professionals to understand current practice and challenges, which we incorporate into our vision of the future of software development education, and we make some pedagogical recommendations.

LINK TO ARTICLE

This study aimed to explore the experiences, perceptions, knowledge, concerns, and intentions of Gen Z students and their Gen X and Gen Y teachers regarding the use of generative AI (GenAI) in higher education. A sample of students and teachers was recruited to investigate these questions using a survey consisting of both open and closed questions. The findings showed that Gen Z participants were generally optimistic about the potential benefits of GenAI, including enhanced productivity, efficiency, and personalized learning, and expressed intentions to use GenAI for various educational purposes. Gen X and Gen Y teachers acknowledged the potential benefits of GenAI but expressed heightened concerns about overreliance and about its ethical and pedagogical implications, emphasizing the need for proper guidelines and policies to ensure responsible use of the technology. The study highlighted the importance of combining technology with traditional teaching methods to provide a more effective learning experience. Implications of the findings include the need to develop evidence-based guidelines and policies for GenAI integration, foster critical thinking and digital literacy skills among students, and promote responsible use of GenAI technologies in higher education.

LINK TO ARTICLE

Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges, many of them ethical and legal, and has the potential for both positive and negative impacts on organisations, society, and individuals. Offering multi-disciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT's capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and to enhance business activities such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and the consequences of biases, misuse, and misinformation. Opinion is split on whether ChatGPT's use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying the skills, resources, and capabilities needed to handle generative AI; examining biases in generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess the accuracy of text produced by generative AI; and uncovering the ethical and legal issues involved in using generative AI across different contexts.

LINK TO ARTICLE

Artificial Intelligence has greatly revolutionized education in many aspects. Today, AI-enabled language models, such as ChatGPT, are gaining popularity due to their characteristics and benefits. However, users also consider them a threat to educational integrity and purposes. This research examined ChatGPT usage among students in the United Arab Emirates (UAE), their views, concerns, and perceived ethics. Data were gathered from 388 students at two universities in Al Ain city, with the sample size determined using Yamane's formula. Findings showed that students consider ChatGPT a revolutionary technology that helps them in many ways. The data showed that ChatGPT usage had a significant effect on students' views. The path analysis also supported the second hypothesis, which proposed a significant effect of ChatGPT usage on students' concerns. Finally, the findings validated the final hypothesis, showing a significant effect of ChatGPT usage on perceived ethics among students in the UAE. The study therefore concluded that using ChatGPT in education has both beneficial and concerning effects on educational integrity. Implementing practical guidelines, however, can assist in making informed decisions and shaping policies within educational institutions. Recognizing the complexities and importance of ChatGPT usage, teachers and policymakers can strike a balance by leveraging Artificial Intelligence technology to improve education while upholding ethical practices that promote critical thinking, originality, and integrity among students.
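
The abstract cites Yamane's formula for arriving at the sample of 388 but does not report the population size N or the margin of error e. The short sketch below shows how such a sample size is typically computed; the population figure in the example is a hypothetical illustration, not a value taken from the study.

import math

def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> int:
    # Yamane's formula: n = N / (1 + N * e^2), rounded up to a whole respondent.
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Hypothetical illustration only: a combined population of roughly 12,900 students
# with a 5% margin of error yields a sample of about 388. The study's actual
# N and e are not reported in the abstract.
print(yamane_sample_size(12933))  # -> 388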

 

LINK TO ARTICLE

Generative AI technologies, such as large language models, have the potential to revolutionize much of our higher education teaching and learning. ChatGPT is an impressive, easy-to-use, publicly accessible system demonstrating the power of large language models such as GPT-4. Other comparable generative models are available for text processing, images, audio, video, and other outputs, and we expect a massive further increase in performance, integration into larger software systems, and diffusion in the coming years. This technological development triggers substantial uncertainty and change in university-level teaching and learning. Students ask questions like: How can ChatGPT or other artificial intelligence tools support me? Am I allowed to use ChatGPT for a seminar or final paper, or is that cheating? How exactly do I use ChatGPT best? Are there other ways to access models such as GPT-4? Given that such tools are here to stay, what skills should I acquire, and what is obsolete? Lecturers ask similar questions from a different perspective: What skills should I teach? How can I test students' competencies rather than their ability to prompt generative AI models? How can I use ChatGPT and other systems based on generative AI to increase my efficiency or even improve my students' learning experience and outcomes? Even if the current discussion revolves around ChatGPT and GPT-4, these are only the forerunners of what we can expect from future generative AI-based models and tools. So even if you think ChatGPT is not yet technically mature, it is worth looking into its impact on higher education. This is where this whitepaper comes in. It looks at ChatGPT as a contemporary example of a conversational user interface that leverages large language models. The whitepaper looks at ChatGPT from the perspective of students and lecturers. It focuses on everyday areas of higher education: teaching courses, learning for an exam, crafting seminar papers and theses, and assessing students' learning outcomes and performance. For this purpose, we consider the chances and concrete application possibilities, the limits and risks of ChatGPT, and the underlying large language models. (...)

LINK TO ARTICLE

This experimental study investigates the impact of integrating Chat GPT (Generative Pre-trained Transformer) on student learning outcomes in technology education at Universitas Muhammadiyah Muara Bungo. The research involves an experimental group using Chat GPT and a control group taught with conventional methods. Data from 31 participants in each group were collected, assessing learning outcomes through final test scores. Analyzing the results with a t-test, the experimental group displayed significantly higher achievement than the control group, highlighting the positive effect of incorporating Chat GPT into educational technology. The study illuminates the potential of AI-powered chatbots like Chat GPT to enhance student learning outcomes. Further exploration is required to gauge its adaptability across diverse educational contexts and to achieve further improvements in learning results. T-test results, conducted at a 95% confidence level (α = 0.05) with degrees of freedom df = n1 + n2 - 2 = 60, showed a calculated t value (t_count) of 5.424 against a critical value (t_table) of 2.000, firmly establishing t_count > t_table (5.424 > 2.000). Consequently, the null hypothesis (H0), proposing no significant impact of Chat GPT utilization, is rejected. Conversely, the alternative hypothesis (H1), signifying a significant influence of Chat GPT usage, is upheld, affirming its substantial role in students' technological education.
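
For readers who want to retrace the decision rule above, here is a minimal sketch using the figures reported in the abstract (n1 = n2 = 31, α = 0.05, t_count = 5.424) together with SciPy's t distribution for the critical value. The raw final test scores are not given, so the computation of t_count itself is only indicated in a comment.

from scipy import stats

n1, n2 = 31, 31
alpha = 0.05
df = n1 + n2 - 2                          # 31 + 31 - 2 = 60, as reported
t_table = stats.t.ppf(1 - alpha / 2, df)  # two-tailed critical value, about 2.000
t_count = 5.424                           # t statistic reported in the study

# With the raw scores one would instead compute:
#   t_count, p_value = stats.ttest_ind(experimental_scores, control_scores)
reject_h0 = abs(t_count) > t_table        # True: 5.424 > 2.000, so H0 is rejected
print(df, round(t_table, 3), reject_h0)   # 60 2.0 True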

LINK TO ARTICLE

The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
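
Several publicly available GPT detectors score text by how statistically predictable it is to a language model. As a purely illustrative aid, the sketch below implements a perplexity-threshold detector of that general kind; the GPT-2 model and the threshold value are assumptions for demonstration, not the specific tools or settings evaluated in this study. It shows why prose with a constrained linguistic range, which tends to have low perplexity, can be misclassified as AI-generated.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative model choice; any causal language model would serve for this sketch.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Perplexity of the text under the model; lower values mean more predictable text.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def flagged_as_ai(text: str, threshold: float = 50.0) -> bool:
    # Hypothetical decision rule: flag highly predictable text as AI-generated.
    # Such a rule can also flag human writers whose vocabulary and syntax are
    # constrained, which is the kind of bias the study documents.
    return perplexity(text) < threshold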

LINK TO ARTICLE

Generative artificial intelligence (AI) has taken the world by storm, with notable tension transpiring in the field of education. Given that Generative AI is rapidly emerging as a transformative innovation, this article endeavors to offer a seminal rejoinder that aims to (i) reconcile the great debate on Generative AI in order to (ii) lay the foundation for Generative AI to co-exist as a transformative resource in the future of education. Using critical analysis as a method and paradox theory as a theoretical lens (i.e., the “how”), this article (i) defines Generative AI and transformative education (i.e., the “ideas”), (ii) establishes the paradoxes of Generative AI (i.e., the “what”), and (iii) provides implications for the future of education from the perspective of management educators (i.e., the “so what”). Noteworthily, the paradoxes of Generative AI are four-fold: (Paradox #1) Generative AI is a ‘friend’ yet a ‘foe’, (Paradox #2) Generative AI is ‘capable’ yet ‘dependent’, (Paradox #3) Generative AI is ‘accessible’ yet ‘restrictive’, and (Paradox #4) Generative AI gets even more ‘popular’ when ‘banned’ (i.e., the “what”). Through a position that seeks to embrace rather than reject Generative AI, the lessons and implications that emerge from the discussion herein represent a seminal contribution from management educators on this trending topic and should be useful for approaching Generative AI as a game-changer for education reformation in management and the field of education at large, and, by extension, for mitigating a situation where Generative AI develops into a Ragnarök that dooms the future of education, of which management education is a part (i.e., the “so what”).

LINK TO ARTICLE

This paper explores the intersection of data-driven learning (DDL) and generative AI (GenAI), represented by technologies like ChatGPT, in the realm of language learning and teaching. It presents two complementary perspectives on how to integrate these approaches. The first viewpoint advocates for a blended methodology that synergizes DDL and GenAI, capitalizing on their complementary strengths while offsetting their individual limitations. The second introduces the Metacognitive Resource Use (MRU) framework, a novel paradigm that positions DDL within an expansive ecosystem of language resources, which also includes GenAI tools. Anchored in the foundational principles of metacognition, the MRU framework centers on two pivotal dimensions: metacognitive knowledge and metacognitive regulation. The paper proposes pedagogical recommendations designed to enable learners to strategically utilize a wide range of language resources, from corpora to GenAI technologies, guided by their self-awareness, the specifics of the task, and relevant strategies. The paper concludes by highlighting promising avenues for future research, notably the empirical assessment of both the integrated DDL-GenAI approach and the MRU framework.

LINK TO ARTICLE

This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI's output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementarity of AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop," the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.

LINK TO ARTICLE

ESL/EFL writing instructors who find themselves in the thick of it, before any institutional or broader policies regarding AI writing tools have been made, likely fall into three camps. The first camp includes those who choose to ignore the technology and carry on as usual, possibly making use of AI detection tools at most. There are already AI tools emerging to check whether work has been done by AI (see https://openai-openai-detector.hf.space/, https://gptzero.me/, or https://writer.com/ai-content-detector) that they could make use of. Second, rather than focus on detection, many instructors will try to circumvent AI use altogether. The final group is made up of those educators who will choose to embrace AI. In this paper I am more interested in sharing ways that instructors can either circumvent or embrace the use of AI writing tools.

LINK TO ARTICLE

Institutes of higher education (IHEs) have to consider the benefits of remote learning post-pandemic. Retrogression to physical-contact learning is counterproductive. The hasty implementation of remote learning during the pandemic deprived IHEs of opportunities to efficiently enact and theorise about it. Post-pandemic, IHEs have opportunities to theorise about remote learning, hence the questions: a) What type of learning emerges when an asynchronous, technology-as-essence framework undergirds students' learning? b) What benefits accrue when the chat Generative Pre-trained Transformer (chat-GPT) is infused into students' learning? The use of synchronous learning and a technology-as-utility framework to underpin remote learning during the pandemic was intended to retain most physical-contact learning traditions. Teachers and students met synchronously and simultaneously online for learning to occur. IHEs safeguarded their operational efficiency to minimise the disruptive nature of remote learning. The purpose of the study was to theoretically examine the effects of asynchronous learning and a technology-as-essence framework on students' learning. Asynchronous learning occurs when students registered on the same course learn online on their own schedule without any real-time interactions with teachers. This phenomenon occurs when remote learning develops through technological advances that, beyond 2030, would most likely stream educational courses in a manner similar to Netflix. One such technological advance is chat-GPT. A study was undertaken to better understand it: 15 multi-disciplinary advanced undergraduates tested chat-GPT on their assignments and on a concrete problem. Chat-GPT lessened the time spent on assignments and improved students' problem-solving abilities. Advances in AI systems have a positive effect on students' learning. The study addresses the positive impact of asynchronous learning and technological advances on IHEs.

LINK TO ARTICLE

The release of ChatGPT has sparked significant academic integrity concerns in higher education. However, some commentators have pointed out that generative artificial intelligence (AI) tools such as ChatGPT can enhance student learning, and consequently, academics should adapt their teaching and assessment practices to embrace the new reality of living, working, and studying in a world where AI is freely available. Despite this important debate, there has been very little academic literature published on ChatGPT and other generative AI tools. This article uses content analysis to examine news articles (N=100) about how ChatGPT is disrupting higher education, concentrating specifically on Australia, New Zealand, the United States, and the United Kingdom. It explores several key themes, including university responses, academic integrity concerns, the limitations and weaknesses of AI tool outputs, and opportunities for student learning. The data reveals mixed public discussion and university responses, with a focus mainly on academic integrity concerns and opportunities for innovative assessment design. There has also been a lack of public discussion about the potential for ChatGPT to enhance participation and success for students from disadvantaged backgrounds. Similarly, the student voice is poorly represented in media articles to date. This article considers these trends and the impact of AI tools on student learning at university.

LINK TO ARTICLE

Generative Artificial Intelligence has rapidly expanded its footprint of use in educational institutions. It has been embraced by students, faculty, and staff alike. The technology is capable of carrying out a sustained sequence of interactive dialogs and creating reasonably meaningful text. Not surprisingly, it seems to be routinely used by faculty to generate questions and assignments, by students to submit assignments and aid in self-learning, and by administration to create manuals, memoranda, and policy documents. With its potential to lead to significant social innovation, teetering on the verge of becoming a disruptive technology, it seems most unlikely that it will fade away without being fully enfolded into almost all aspects of academic and pedagogical activity. While it is too early to predict the exact place of this technology in education, we present thoughts to aid deliberations and give a brief review of the opportunities and challenges.

LINK TO ARTICLE