Composing with AI - Introduction

Nupoor Ranade and Douglas Eyman

Chapter Abstracts

Histories: C&W Approaches to Disruptive Technologies

What We Already Know: Generative AI and the History of Writing With and Through Digital Technologies
Anthony Atkins and Colleen A. Reilly

Our chapter examines several themes present in the scholarship on composing with digital technologies that writing faculty and students can use to inform their work with generative AI. In our survey of relevant scholarship, we organize our discussion around three central themes in the literature: the challenges posed to our conceptions of writing and authorship; the access and accessibility implications of information and communication technologies; and the degree to which technologies reveal and mask their mediation of content. As we demonstrate in our chapter, the scholarship addressing these themes is as relevant for working and teaching with AI as it was for working with MOOs (English, 1998), Wikipedia, and a myriad of other information and communication technologies. We argue that we can and should harness our scholarly legacy to help faculty and students navigate the evolving context for writing, composing, editing, and design necessitated by the introduction of generative AI.

The Black-Boxed Ideology of Automated Writing Evaluation Software
Antonio Hamilton and Finola McMahon

While conversations around generative AI writing technologies have taken center stage, these conversations are nothing new. We need only look at the decades-long conversation regarding automated writing evaluation (AWE) technologies, which directly impact the teaching of writing and students’ writing experiences. While much research exists regarding AWE and its classroom implementation (Broad, 2006; Herrington & Moran, 2001; Wilson et al., 2021; Huang & Wilson, 2021; Wilson & Roscoe, 2020; Ericsson & Haswell, 2006; Ernst, 2020), the study of specific technologies is difficult because they are often heavily black-boxed. Consequently, instructors, students, and writers often use these technologies without fully understanding their function and their impact on writing. To address this concern, we conducted a research project beginning in Fall 2021 that sought to answer the following: 1) In what ways do AWE software companies black-box their software? and 2) What are the assessment features of AWE software, and what are the commonalities and differences among them?

Addressing these questions will increase our understanding of how AWE impacts the assessment of writing. Much like the recent study by Laflen (2023), this study highlights the importance of understanding how digital software shapes writing feedback and users’ interactions with their writing. Through our research, we found three central themes: 1) users’ technical illiteracy is used against them to prevent a full understanding of the programs prior to purchase and to obfuscate the programs’ functioning; 2) the programs’ websites do not reveal much information about how the algorithms function, and the explanations they do provide reveal little, further black-boxing the feedback; and 3) this black-boxing appeals to current-traditional rhetoric and the use of static abstractions in writing feedback. Additionally, we noted the technologies’ potential prioritization of certain writing styles (Standard Edited American English) and the disenfranchisement of other Englishes and writing forms.

These programs ask writers to trust AWE blindly and treat it as the purveyor of writing knowledge. In doing so, they eliminate complexity in writing instruction and performance. This study provides an initial analysis of the ways writing is constructed across multiple AWE programs and of how we may pay careful attention to their impact. It also functions as a model for further study of these programs and a call for more active transparency about the writing norms informing them. This transparency will better allow us to understand the impact of these technologies on the student writers who use them.

Policy: Programs & Publications

Drafting a Policy for Critical Use of AI Writing Technologies in Higher Education
Dan Frank and Jennifer Johnson

In this chapter, Dr. Daniel Frank and Dr. Jennifer Johnson, two writing teachers in the UCSB Writing Program, present a dialectical conversation reflecting on the incorporation of Large Language Model (LLM) AI writing tools like ChatGPT into the writing classroom. They trace the development of the UCSB Writing Program's Policy on ChatGPT and AI Writing, which aims to provide ethical guidance and support for faculty and students in using these technologies.

Frank and Johnson recount their evolving thoughts and pedagogical approaches as LLM tools rapidly advanced and gained prominence. Frank, a self-described early adopter, recognized the disruptive potential of AI writing early on. He shared primers with colleagues to help them understand the tools' capabilities and limitations. Johnson, more wary of new tech, was nonetheless captivated by the implications for writing instruction. Both grappled with key questions around what skills students need in an AI-augmented world. Through faculty workshops and discussions, Frank and Johnson shaped the key principles of the UCSB policy: 1) Integrating AI as one of many supportive feedback tools; 2) Promoting academic integrity through transparency about AI usage; 3) Fostering critical thinking about the tools' biases and limitations; 4) Cautioning against over-reliance on AI detection. In their classrooms, the authors encourage students to use AI writing tools experimentally and iteratively, always reflecting critically on the process. Johnson sees potential for LLMs to support rhetorical skill development and provide more equitable feedback to ELL students. Frank guides students to converse with AI, using it to augment rather than replace their own thinking and voice.

The chapter concludes with video interviews of other writing faculty, revealing a spectrum of approaches to AI in the classroom. While the rapid evolution of the technology precludes any definitive answers, Frank and Johnson advocate for continuing to engage critically and reflectively with AI writing tools. Motivation remains key: will students pursue knowledge and cultivate their authentic voices in engaging with AI technologies, or just chase grades? The authors express cautious optimism that grappling with AI could lead to rethinking educational structures for the better.

A Textual Transaction: The Construction of Authorship in AI Policy Statements
James P. Purdy

In response to generative AI, many academic journals and publishers quickly updated their submission policies. These policies merit careful analysis because, in arguing that generative AI cannot be listed as an author, they define what authors, and by extension writing, are and should be. We should care about these definitions because they are at the core of our work as computers and writing teacher-scholars. In this chapter, I situate these policy responses to ChatGPT in relation to early computers and writing scholarship, including Burns (1983), Herrington and Moran (2001), and Baron (2000). Based on close reading and content analysis of ten journal and publisher AI policies published within six months of ChatGPT’s initial public release, I found that these policies limit authorship to people, allow inclusion of some AI-generated content under certain conditions and citation guidelines, and frame writing as a textual product performing transactional functions (in Britton’s (1982) terms). They focus on what happens to the writing we create rather than on what happens when we no longer create our writing. Missing from these policies is any consideration of the intellectual growth and knowledge production lost when AI writes the prose that circulates in academic publications. We as computers and writing teacher-scholars still have work to do to circulate more broadly the notion that the technology of writing makes meaning.

Reports from the Field: Classes & Students Using AI

Reconsidering Writing Pedagogy in the Era of ChatGPT: Results of a Usability Study of ChatGPT in Academic Writing
Lee-Ann Kastman Breuch, Asmita Ghimire, Kathleen Bolander, Stuart Deets, Alison Obright, and Jessica Remcheck

Generative AI technologies have advanced and become more accessible, drawing the attention of writing instructors across the world. Generative AI technologies such as ChatGPT are called “pre-trained” large language models (LLMs). These models are trained on large data sets to predict the next words in phrases and sentences based on language patterns. With the emergence of ChatGPT, new questions about writing, writing pedagogy, and writing processes have emerged. Scholars in writing studies seem to embrace this future, outlining how our writing theory and practice might shift and change while also noting caution and ethical considerations. This chapter shares results of a study of student impressions of ChatGPT conducted at a large midwestern university. We were inspired by the question: “How are undergraduate students understanding ChatGPT as an academic writing tool?” We designed a contextual usability inquiry study to investigate how students are thinking about ChatGPT, recruiting 32 undergraduate students across a range of majors. Using a contextual inquiry approach (Barnum, 2010), we asked students to complete five tasks using ChatGPT and reflect on their impressions of ChatGPT texts. Students rated ChatGPT texts in terms of expectations, satisfaction, credibility, and relevance. Students consistently rated ChatGPT texts as high in relevance and expectations but lower in satisfaction and credibility. These ratings convey complex and nuanced student impressions of ChatGPT. When asked if they would submit ChatGPT text as their homework, a majority of students answered negatively. When asked if they would use ChatGPT texts to generate ideas for a future text they might write, a majority responded positively, demonstrating a strong connection between ChatGPT and the writing process. Students articulated thoughtful critiques of ChatGPT and its future role in academia.
Students’ questions addressed access (“is it free?”); mechanics (“how does it work?”); credibility (“where does it get its information?”); and plagiarism (“is it cheating?”). We advocate a “critical AI literacy” approach that integrates ChatGPT into writing classes through structured exercises (Anson & Straume, 2022), and we endorse an ethical approach to AI (Fjeld, Achten, Hilligoss, & Srikumar, 2020) that invites students to use ChatGPT and critically evaluate its texts.

ChatGPT is Not Your Friend: The Importance of AI Literacy for Inclusive Writing Pedagogy
Mark C. Marino

Make it So: What Students at a Large Public University Really Think about Generative AI
Jeanne Law-Bohannon, John C. Havard, and Laura Palmer

The emergence of generative artificial intelligence (gen-AI) has sparked significant discourse in academia regarding its impact on student writing. This chapter presents findings from a study conducted at Kennesaw State University (KSU) aimed at illuminating first-year students' perceptions of gen-AI and measuring student attitudes toward gen-AI in both academic and personal writing contexts. Using a mixed-method approach, our team designed and administered an eight-question survey on these topics via Qualtrics, obtaining 942 responses from students enrolled in ENGL1101 and 1102 and an additional 81 responses from students enrolled in TCOM2010, a second-year technical writing course.

Though initial quantitative findings indicated high awareness of gen-AI among students, with 90% of respondents reporting familiarity with tools like ChatGPT, 73% indicated they never use gen-AI for academic purposes. Additionally, a significant portion of students (43%) expressed uncertainty about the future role of gen-AI in writing, and opinions remained divided on the ethics of AI use in academia: 58% of respondents felt that gen-AI use only "sometimes" constitutes cheating.

Qualitative analysis further highlighted the complexity of student attitudes, revealing nuanced perspectives. While respondents raised concerns about plagiarism, loss of originality, and gen-AI’s potential to undermine critical thinking skills, many acknowledged its value in brainstorming, improving writing quality, or generalized tutoring. Interestingly, technical communication students were more accepting of gen-AI than first-year writing students across contexts, perhaps reflecting their comfort integrating new technologies into their work. Overall, our study underscores the need for ongoing dialogue about AI and the development of pedagogical strategies to address the ethical and practical implications of gen-AI in education. We call for resources to support educators in developing curricula that foster critical AI literacy and ensure students are prepared to navigate the challenges and opportunities presented by this technology. By providing this comprehensive snapshot of student perceptions at a large public institution, we aim to contribute valuable insight into gen-AI’s place in higher education. Our future research will track continued changes in student attitudes and explore the integration of gen-AI in writing instruction.

Style: Comparing AI and Human Approaches to Style

LLMs for Style Pedagogy
Christopher Eisenhart

How well can an LLM of this generation do the editing and revision work of style? At first glance, it might seem that these LLM tools would be well equipped to succeed at revision and editing, at least for academic and journalistic prose in Standard Written English. Generally, we take LLMs to be fairly good at first-draft prose but to require human authors to revise with an eye toward context and purpose. Since much of the practice work of learning revision and editing can be done on decontextualized sentences and passages, as in curricula such as Joseph Williams’ Style, we might expect LLMs to perform the tasks of revising for clarity fairly successfully. In this study, I put ChatGPT 3.5 through the exercises of the Williams curriculum. I report on my analysis of the findings, including the places where ChatGPT performed as well as we might hope a student would, and the places where its struggles resembled those of many new students, especially where context and inference are required for successful revision.

Stylistics Comparison of Human and AI Writing: A Snapshot in Time
Christopher Sean Harris, Evan Krikorian, Tim Tran, Aria Tiscareño, Prince Musimiki, and Katelyn Houston

Multimodal Composing: AI Text-to-Image Applications

Composing the Future: Speculative Design and AI Text-to-Image Synthesis
Jamie Littlefield

This chapter proposes speculative design as a critical method for engaging with AI text-to-image synthesis. While the purpose of generative AI is to produce something new, the process itself is inextricably bound to the old. Generating a new image through AI is an act of engaging with textual history. In practice, Large Language Models (LLMs) like ChatGPT and text-to-image synthesis tools such as Midjourney are fundamentally rooted in the past, trained on vast datasets that encapsulate historical texts, images, and multimedia. AI’s entrenchment in the past generates a sort of algorithmic resistance to change, creating synthetic output that uncritically re-creates values and social structures.

Drawing on case studies from urban communication, the chapter demonstrates how speculative design can help communicators notice and challenge AI’s entanglement with the past. Speculative design is a method used by designers, writers, and artists to envision and enact possible futures rather than practical solutions for the present. This approach goes beyond traditional design by questioning societal norms and considering the broader impacts of technology and innovation on our lives. The tools of speculative design are uniquely equipped to help writers critically consider the ways that AI is bound up in time, continually replicating the discourses of the past and artificially projecting limitations onto the ways humans will experience the future. In the case of urban communication, speculative design projects can help non-profits and grassroots organizations persuade the public by presenting images that deviate from the detrimental patterns of the past. In the writing classroom, speculative design practices can provide students with concrete approaches to analyzing and composing AI-generated images that demonstrate awareness of the ways AI reproduces the past.

A speculative approach to generative AI can help the field of composition studies further examine what it means to exist as assembling beings and reflect on the ways that new things (especially new futures) emerge from the fragments of the old.

Teaching about Technology Bias with Text-To-Image Generative AI
Sierra Parker

This chapter analyzes biases undergirding and (re)produced by text-to-image generative AI compositions, presenting a critical approach that engages technology bias and visual pedagogies of artificial intelligence for the rhetoric and writing classroom. By analyzing examples of text prompt inputs and image outputs from two AI models (Dall-E 2 and Bing Image Creator), I illustrate how bias informs the process of composing AI images through not only the user but also the technology. I present examples of the ways that AI images direct the viewer, offer strategies for using this technology meaningfully in the classroom as an object of critical analysis, and indicate how engagement with these AI models can promote technological multiliteracies and responsible practice that extends heuristically beyond a single iteration of technology.

In this chapter, I argue that text-to-image generative AI presents a 21st-century pedagogy of sight for critical investigation, using Jordynn Jack’s concept of a “pedagogy of sight” to unveil the implicitly functioning visual register of AI-generated images. This visual register makes such objects’ persuasive power less overt than that of text, requiring viewers to interrogate taken-for-granted visual grammars to interpret the biases and effects of an image. My analysis of AI output engages more than representational biases, looking also at the genre influences and cultural perspectives that direct how viewers interpret AI output.

I end the chapter by offering guiding prompts for classroom activities and possibilities for scaffolding AI image content within rhetoric and writing courses more broadly, through semester-long discussions about technology, research, and ethics informed by existing composition scholarship on network bias and the ideologies underlying interfaces and visual design. Analyzing bias in text-to-image generative AI alongside these broader conversations can cultivate students’ rhetorical and critical literacies in ways that allow them to engage ethically with technologies and digital composition.

Theory: How AI Impacts Rhetoric and Ethics

Rhetorical DIKW: Knowledge-Work Pedagogy for the Age of AI and Beyond
Patrick Love

Generative AI and LLMs, as programs that automate the production of novel information on demand from a user’s prompt, herald a new would-be crisis in the humanities and in composition theory and pedagogy because of their expedient ability to produce prose on command. Because ChatGPT is a black box that appeared to the vast majority of onlookers as unprecedented, excitement and anxiety freely mix in reactions to its existence and progress. As such, scholars and faculty need both theory and practice to inform new pedagogical decisions in composition and other writing classes. This chapter presents key concepts from information theory that inform generative AI’s function as a chatbot that ‘generates’ responses to user requests by selecting and remixing from training data. Namely, this chapter presents the DIKW pyramid, a graphic that information theory uses to rationalize the theoretical relationships between data, information, knowledge, and wisdom as part of teaching machines to participate in knowledge work. This chapter demonstrates connections between DIKW, circulation theory, and active learning pedagogy to present a version of the DIKW pyramid crafted for composition classes and rhetoricians to use as a boundary object, furthering connections and partnerships between humanities and STEM disciplines, professionals, and students. The chapter concludes by examining how the pyramid and its associated language can help faculty discuss ChatGPT and LLMs with classes and start forming their own inquiries and assignments that utilize or challenge LLMs in their own work and with students. Ultimately, this chapter presents a metalanguage to help faculty critically and productively engage with generative AI, calm anxiety, and emphasize the new importance of composition classes in the age of AI and beyond.

Interfacing ChatGPT: A Heuristic for Improving Generative AI Literacies
Desiree Dighton

This study explores the implications of ChatGPT and similar generative AI technologies in the context of writing studies, emphasizing the need for developing critical AI literacies. The research draws on historical and contemporary theories of interface design and usability, including the work of Selfe and Selfe, Mel Stanfill, and Corrine Jones, to analyze how these technologies shape user interactions and perpetuate dominant cultural values. The interface of ChatGPT, particularly its conversational design, is examined for its affordances and constraints, revealing how it subtly directs user behavior and engagement. By integrating heuristic development and analysis into classroom practices, this study aims to enhance students’ understanding of AI technologies, fostering critical engagement and agency. The findings highlight the importance of situating AI tools within broader socio-cultural contexts, advocating for a more inclusive and reflective approach to integrating AI in educational settings. This chapter argues that by critically and collaboratively examining the interfaces of AI technologies, educators and students can better navigate the complex interplay between technology, culture, and writing, ultimately empowering users to reclaim agency in their digital interactions.