Composing with AI - Introduction

Nupoor Ranade and Douglas Eyman

A Very Brief History of Generative AI in Writing Studies

Implementation of software programs labeled "Artificial Intelligence" (AI) has a fairly short history, with the technology initially introduced during the 1960s in the form of chatbots (Foote, 2022). The first historical example of generative AI, ELIZA, was created in the mid-1960s by Joseph Weizenbaum: ELIZA could respond to a human's requests using natural language. During the 1970s, the development of AI stalled due to a lack of technological capacity, such as the extensive storage systems needed to manage high volumes of data and the computing power needed to work with those data sets. However, work on machine learning tools was still in progress, and automation systems that could take on repetitive tasks, like answering the phone and transferring calls, were common. In the 1990s, the research took off again thanks to multilayered artificial neural network algorithms and the powerful graphics processing units (GPUs) invented for computer gaming (Foote, 2022). Conversational agents like Siri and Cortana became popular starting in 2011, and the journalism industry saw a big spike in the use of natural language processing technology for generating content in standardized genres such as sports results (Matsumoto et al., 2007; Clerwall, 2014).

The biggest changes we have seen since then are the move toward multimodality, increased accuracy in image recognition, and (most recently) generative AI, as AI applications can now detect and produce text, images, video, and audio (and accept audio input and produce audio output for chat functions). Nevertheless, the majority of applications still focus on textual production at this point. The astute reader will notice that the chapters in this collection are oriented more toward textuality than toward multimodality: this is in part a response to the inherently textual nature of ChatGPT at the time of this writing.

As of Spring 2025, we've noticed that only a relatively small number of faculty are taking the time to understand how generative AI works and how it might be taught in college-level courses across the curriculum. The massive hyperbole offered by the companies that stand to profit the most from pushing the inclusion and use of large language model AI applications hasn't helped, from the CEO of Alphabet (which owns Google) declaring that AI is "more profound than fire or electricity or anything we've done in the past" (CBS News, 2023) to AI researchers claiming that the current systems are capable of being a threat to humanity (both stances are, to say the least, exceptionally far-fetched). Many faculty expect their institutions to provide guidance, but not all institutions are taking even the most basic steps toward developing policy around these new tools. To be sure, while they can't "think," these AI applications can serve as effective tutors and useful interlocutors that help develop ideas and arguments, and they can be used to summarize longer, more complex texts (as long as those texts aren't too technical). These tools are indeed poised to disrupt our traditional approaches to teaching and learning, and we have yet to see a coherent response to the challenges they pose.

Research on generative AI in writing studies took off exponentially after the release of ChatGPT in November 2022. According to OpenAI, GPT-4, the model underlying ChatGPT, was more inventive and collaborative than its predecessors: it could generate, edit, and iterate with users on creative and technical writing tasks such as songwriting, screenwriting, or learning a user's writing style. While AI tools provide convenience, speed, and perhaps some grammatical accuracy, students and other academics seem to rely on them for mundane tasks that require little curiosity or creativity and carry low stakes (AlAfnan et al., 2025). But writing goes beyond that; Morrison (2023) defines it as an act of communication, persuasion, instruction, and exploration. This collection highlights aspects of composing that expand the narrow vision of using AI to automate repetitive tasks, considering instead how multimodal composing works in writing and composition spaces and what we should expect to do in classrooms.

We solicited proposals for this collection in early 2023, and there was a concern that the time it takes for development, peer review, editing, and publication might lead to obsolete understandings of the technologies we are addressing here. Despite the extensive claims made about advancements of the technology by the companies that are creating it, in 2025 we see that generative AI systems are still prone to errors. Researchers at the Tow Center for Digital Journalism found that LLM-based chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead" and that "premium chatbots provided more confidently incorrect answers than their free counterparts" (Jaźwińska and Chandrasekar, 2025). Around the same time, the BBC released a report indicating that generative AI systems often introduced inaccuracies or wholly fabricated information when summarizing news reports (Rahman-Jones, 2025). And biases and stereotypes were still a common problem for outputs in all media, from text to video (Rogers and Turk, 2025).

Despite these shortcomings (and to some extent because of them) we take the position that it is imperative to help students develop critical literacies regarding AI use and misuse, and that the work in this collection provides a foundation for developing those literacies. There are, to be sure, sound reasons to resist incorporating AI in writing classrooms wholesale, and we hope that readers of this collection will also avail themselves of the resources at Refusing GenAI in Writing Studies (Sano-Franchini, McIntyre, and Fernandes, 2024).