Composing with AI - Introduction
Nupoor Ranade and Douglas Eyman
A Very Brief History of Generative AI in Writing Studies
Generative AI has a fairly short history: the technology was first introduced during the 1960s in the form of chatbots (Foote, 2022). The earliest example of generative AI, ELIZA, was created in the mid-1960s by Joseph Weizenbaum; ELIZA could respond to a human’s requests using natural language. During the 1970s, the development of AI stalled due to a lack of technological capacity, such as the extensive storage systems needed to manage high volumes of data and the computing power needed to work with those data sets. However, work on machine learning tools continued, and automation systems that could take on repetitive tasks, like answering the phone and transferring calls, were common. In the 1990s, the research took off again, driven by multilayered artificial neural network algorithms and the powerful graphics processing units (GPUs) invented for computer gaming (Foote, 2022). Conversational agents like Siri and Cortana became popular starting in 2011, and the journalism industry saw a sharp rise in the use of natural language processing technology for generating content in standardized genres such as sports results (Matsumoto et al., 2007; Clerwall, 2014).
The biggest changes we have seen since then are the move toward multimodality and increased accuracy in image recognition: AI applications can now detect and produce images, video, and audio (and accept audio input and produce audio output in their chat functions). Nevertheless, the majority of applications still focus on textual production at this point. The astute reader will notice that the chapters in this collection are oriented more toward textuality than toward multimodal composition; this is in part a response to the inherently textual nature of ChatGPT at the time of this writing.
As of spring 2024, we’ve noticed that only a relatively small number of faculty are taking the time to understand how generative AI works and how it might be taught in college-level courses across the curriculum. The hyperbole offered by the companies that stand to profit the most from pushing the adoption of large language model applications hasn't helped, from the CEO of Alphabet (which owns Google) declaring that AI is "more profound than fire or electricity or anything we've done in the past" (CBS News, 2023) to AI researchers claiming that current systems are capable of threatening humanity (both stances are, to say the least, far-fetched). Many faculty expect their institutions to provide guidance, but not all institutions are taking even the most basic steps toward developing policy around these new tools. To be sure, while they can't "think," these AI applications can be effective tutors and useful interlocutors that help develop ideas and arguments, and they are quite good at summarizing longer, more complex texts (as long as those texts aren't too technical). These tools are indeed poised to disrupt our traditional approaches to teaching and learning, and institutions that choose to ignore the issue completely are doing a disservice to both their faculty and their students.