Generative AI and the History of Writing

Anthony Atkins and Colleen A. Reilly

Masking Mediation

Throughout the history of computers and writing and digital rhetoric, scholars have grappled with and embraced disruptive information and communication technologies and the challenges those technologies pose to contemporary research and composing practices. As Bolter and Grusin (1999) and Bolter and Gromala (2003) highlight, new technologies often obscure their mediative practices, promising to provide direct, unmediated access to information, entertainment, and production. A user’s understanding and awareness of a digital technology’s process and depth of mediation—the degree to which the technology draws attention to its means of production, to how it provides access to content or delivers results—is determined in part by that user’s prior experience with similar technologies, proficiency with new technologies, and curiosity about how the technologies work. Communication and information technologies gain power in part by obscuring their degree of mediation and promising users direct access to knowledge and information without requiring them to comprehend how the content is developed and delivered. One way to accomplish this apparent transparency is to adopt the structural conventions of established communication technologies (Bolter & Grusin, 1999; Hocks, 2003).

The way that numerous scholars in computers and writing and digital rhetoric have approached teaching with Wikipedia since its inception in 2001 provides a productive model for navigating and composing with technologies, like generative AI, that downplay and obscure their processes of mediation; we will explore that model throughout this section. Like Wikipedia, generative AI can be approached as a transparent window into vast amounts of easily accessible knowledge, with no need for the user to understand the technical mechanisms facilitating its production. Both Wikipedia and generative AI proffer information that appears reasonable and professional (Lancaster, 2023), persuading users, like our students, to accept the output at face value without critically examining its veracity.

To combat the seductive transparency of technologies like Wikipedia, scholars developed structured inquiry and assignments designed to transform student users from passive consumers into critical producers of content. In the case of Wikipedia, this requires individuals to work behind the surface of the encyclopedic content as displayed to understand the layers contributing to and supporting its production, including the organizational structures and policies, the debates within the Wikipedia community, and the technical know-how needed to contribute correctly formatted content. As Reilly (2011) explains in her article about teaching Wikipedia as a “complex discourse community and multi-layered, knowledge-making experiment,” empowering student users to become critical producers of content with Wikipedia requires that they literally look behind the article layer of the text to interact with and contribute to the layers of the wiki technology that allow them to engage in conversation with other contributors (Discussion tab), edit the content of the article (Editing tab), and examine the complete history of changes to the text (History tab).

Based on their analysis of large-scale survey research (6,000 students) by the Wiki Education Foundation, Vetter et al. (2019) recommend best practices for Wikipedia writing assignments, including making the assignments “extended and substantial” so that students can “learn about values, processes, and conventions of Wikipedia” (p. 62). Vetter et al. (2019) also recommend that students critically analyze Wikipedia articles before contributing in order to develop critical thinking about how Wikipedia is developed, how content is supported, and how sources are cited. To design such opportunities for analyzing and contributing to Wikipedia, instructors need professional development to learn the content and technological intricacies of the platform (Sura, 2015).

In addition to having students analyze and contribute to the content, McDowell and Vetter (2020) argue that Wikipedia’s very practices and policies, particularly those requiring citation and verification of information, serve a pedagogical purpose and can be harnessed to help students develop information literacies related to the legitimacy of online information. The policies prompt students to learn to analyze information for its veracity themselves rather than rely on others (McDowell & Vetter, 2020; see also Azar, 2023). In addition, Wikipedia has the benefit of being a community governed by the collective and run by a nonprofit: new participants can learn to navigate its norms (McDowell & Vetter, 2020) and work with other contributors in an asynchronous but interactive manner. As Purdy and Walker (2012) explain, contributing to wiki-based compositions foregrounds the importance of dialogue for knowledge production. McDowell and Vetter (2020) argue that Wikipedia's policies requiring verification, a neutral tone, collaboration, and citation educate new users and enlist them to maintain the legitimacy of content on the site and to question or remove content that is not supported by sources and verifiable knowledge (Azar, 2023); through such policies, contributors to Wikipedia develop critical digital and information literacies that they can employ in other contexts. Finally, Wikipedia uses community-based policies “to reconstruct more traditional models of authority” that support the legitimacy and veracity of the content, and, unlike most other (commercial) sites and apps online, Wikipedia is transparent about its purpose and intentions (McDowell & Vetter, 2020).

Many of the lessons outlined above about working with Wikipedia and exposing its processes of content mediation to conscious examination and interrogation can be adapted to help our students work with and critically analyze the output of generative AI. This process can begin by examining how generative AIs produce content. Byrd (2023), Lancaster (2023), and many others help demystify for instructors and students how AI technologies like ChatGPT work from a technical perspective. As was the case when teaching with Wikipedia and other new technologies, highlighting a technology’s processes of mediation entails gaining a basic understanding of the technical specifications that power it. Students need to learn that ChatGPT, for example, is a large language model (LLM), meaning that it has been trained to produce new language modeled on the texts it has processed and is asked to generate (Byrd, 2023; Lancaster, 2023). As Byrd (2023) clearly explains, “[LLMs] have really created mathematical formulas to predict the next token in a string of words from analyzing patterns of human language. They learn a form of language, but do not understand the implicit meaning behind it” (p. 136). As a result, AIs can produce false information when their training corpora do not contain accurate content (Byrd, 2023; Cardon et al., 2023; Hatem, 2023; Lancaster, 2023). Understanding these technical processes can help students approach the output of AIs more critically and skeptically, just as they are taught to do in relation to Wikipedia. Such instruction provides inoculation, demystifying the output and opening it to interrogation.

Edzordzi Agbozo, Assistant Professor, University of North Carolina Wilmington, describes innovative assignments that he uses to help students interrogate the output of generative AIs.

Video Transcript

Edzordzi: Hello Colleen and Tony. You gave me a question about a central lesson, or some central lessons, that our field of computers and writing could draw from the past in order to engage with the new phenomenon of generative AI and its upsurge in writing.
Edzordzi: My response is that the field of composition has always been intentionally engaged with technologies. And so when I saw your question, Jason Palmeri and Ben McCorkle's 2021 book on writing pedagogy over the last 100 years came to mind.
Edzordzi: Scholars in the field have always demonstrated that technology is their resource. So I think that we should continue treating writing generative AI as a resource going forward. Specifically, I have two lessons or thoughts on the question.
Edzordzi: The first is that we must continue to interrogate writing itself, based on the research and practices of the past. We have come to blur the differences between writing as alphanumeric scribbling and writing as a very complex, technology-mediated form of communication. So in this moment of ChatGPT and generative AI tools, I think that we need to revisit this question of writing and think of writing as a posthuman activity, a posthuman activity that involves, to a large extent, generative AI. But we must also help our students to understand the limits of that relationship between generative AI and writing.
Edzordzi: Two: the lessons from the past. Just as our field has not rejected technologies for the image, I would say we must incorporate generative AI into our pedagogy. Generative AI has become a significant tool of our time, and whether we like it or not, our students are going to use it. So it is important that we bring it into our classroom and then curate how students can have some kind of relationship with this technology.
Edzordzi: In my own classes I do this in two ways. One is comparative analysis of AI texts and student texts. In some of my classes, students generate responses to prompts and feed the same prompts to ChatGPT. Then they compare their own writing in response to the prompt to ChatGPT's response and analyze both, looking for style, vocabulary, complexity of vocabulary, citation, and all those other things that we look for when we look at writing.
Edzordzi: The second is brainstorming activities. In some of my classes, I use ChatGPT as the first tool to think about a topic that students want to write on. So we would put our topics into ChatGPT and ChatGPT generates the ideas related to the topic. And then students now form a topic out of the various ideas that ChatGPT has produced.
Edzordzi: These two activities help students to understand that ChatGPT could be useful, but it still needs the human creative elements that make writing a complex, exciting activity, elements which, at this moment, ChatGPT and generative AI don't have.
Edzordzi: So going forward, I would say or in conclusion, I would say I think the field needs to revisit what it means to do writing at this moment, and hopefully in the future. And two, we must continue to incorporate generative AI into our classes, and help students form an ethical relationship with these technologies before they step out into the world.
Edzordzi: Thank you very much.

Students also need to be taught protocols for productive use of generative AIs, as they are with Wikipedia. Prompt engineering is the process of iteratively developing instructions and queries to submit to an AI to garner superior output (Korzynski et al., 2023; Lo, 2023). As Korzynski et al. (2023) emphasize, prompt engineering is a human language process requiring collaboration with AI. Just as students must learn how to structure and tag their Wikipedia articles to meet the genre specifications approved by the community, students must also structure queries to gain the best results from the AI. They also need to dialogue with the AI, as they did with other contributors in Wikipedia's Discussion tab, to participate fully in a successful collaboration. The obvious difference is that when writing for Wikipedia, students collaborate with other users, not an AI. A number of scholars have developed frameworks to guide prompt engineering. For example, Lo (2023) outlines the CLEAR framework: prompts to AIs should be concise, logical, explicit, adaptive, and reflective. Importantly, this framework emphasizes that success is produced iteratively and contextually in response to the output of the AI and the purpose of use (Lo, 2023). Korzynski et al. (2023) review a range of similar approaches to prompt engineering and outline the essential elements of useful prompts, including the context or role, the instruction or task, the relevant data, and the genre or form of the output (p. 30). Such discussions of prompt engineering emphasize that scholars, instructors, and students can learn to collaborate productively with generative AI, as they do with Wikipedia and its corresponding community, and overcome hurdles to engaging with it productively.

Just as scholars recommend critically analyzing Wikipedia articles before using content from or contributing to Wikipedia, so must students learn to do the same with the output of generative AIs. Lancaster (2023) recommends finding sources to corroborate and support the veracity of content generated by AIs. Scholars already report developing assignments that ask students to investigate the veracity and usefulness of text produced by an AI (Byrd, 2023). As noted in the previous section, once students understand that the quality of the output is driven in part by the quality of the input, they may gain the agency and confidence to critique the resulting information produced by that AI.

Some of our lessons from teaching with other technologies like Wikipedia do not apply to generative AIs. For example, as discussed above, the mechanisms of mediation of generative AIs are often proprietary, making it impossible to fully comprehend how the technology delivers content as we can with Wikipedia. Unlike Wikipedia, a nonprofit, most AIs are commercial enterprises. In response, Byrd (2023) recommends using open-access LLMs instead to produce content more ethically and transparently. Finally, generative AI’s rapid evolution may eventually make it impossible for human readers to detect its output as machine-generated, mediated content. Lancaster (2023) proposes adding watermarks to AI-generated content but acknowledges the potential futility of that approach. Additionally, watermarking would require coordination and cooperation with corporate entities that, as noted above, maintain control over their technologies and standards and have little to gain from revealing their proprietary information and demystifying the power of their chatbots to magically anticipate users' needs and surprise them with content they can use as their own.

As the above discussion reflects, the work of scholars and instructors with previous technologies like Wikipedia can provide insights about what questions to ask and how to advocate for restrictions and guidelines to protect students and the public in their work with generative AIs. Developing educational policies and best practices around writing with and using digital content, like Wikipedia, was necessary, and invested scholars in our profession need to do the same for AI.

Gavin P. Johnson, Assistant Professor, Texas A&M University, Commerce, emphasizes the importance of remembering the intersections between identity, power, and technology.

Video Transcript

Gavin: When considering the lessons that we can learn from computers and writing and how they can impact our pedagogies with artificial intelligence and generative AI, part of what I really think is important for us to remember are the intersections between identity, power, and technology. Now, a very common thread in the literature in our scholarship is that technologies are never neutral. They are always rhetorical and political and material.
Gavin: And so when we're thinking about how technologies are never neutral, and we apply that to AI technologies, we have to think critically about what is training AI. Not only do we have to think critically about that, we have to prepare our students to think critically about what is training AI, what data is being pulled to train that AI, and how is that training impacting its algorithm and how that impacts the way that students and other writers and composers will interact with the technology.
Gavin: One of the main concerns I have, and this is also something that I think is worth pulling from the history of computers and writing, are issues of privacy and surveillance as they relate to AI. And teaching students about privacy and surveillance is absolutely essential right now.
Gavin: And so one of the big lessons that I hope that we take from the history of computers in writing is to think about how technology is not neutral, how issues of privacy and surveillance need to be interrogated, because privacy and surveillance disproportionately impact and negatively impact marginalized communities.
Gavin: And also just thinking about how do students navigate these complex topics without giving up? I am quite concerned about surveillance apathy, privacy apathy, where students just feel that because they live so much of their lives with technology, there's no getting out of a surveillance culture.
Gavin: And so I think it's important for us to empower students both through our critical teaching but also by engaging students in how to use technologies ethically. And how to think about technologies as ethical tools that can be used for good.
Gavin: The other aspect that I think is particularly important, and it's a lesson that I think we need to carry forward, is something our field has been really strong in doing: our interdisciplinarity, our thinking about how knowledge is not siloed but importantly crosses physical and discursive boundaries to make new knowledge. So I think it's really important that in this moment we continue to push towards interdisciplinary thinking.
Gavin: Not only searching out technical experts in computer science, but also seeking out our colleagues in education and philosophy, and really thinking critically about how do we all work together to think through AI and teach with AI?