
Introduction
Nupoor Ranade Carnegie Mellon University
Douglas Eyman George Mason University
AI, it seems, is everywhere. And it is safe to say that recent developments in generative AI applications, starting with ChatGPT, have created a pivotal moment in our field. Some faculty see these new technologies as exciting and powerful tools that will improve teaching and learning activities for both students and instructors. Others are deeply resistant, citing environmental, ethical, and misinformation concerns.
In 2021, well before ChatGPT was released, Vauhini Vara published a piece in The Believer in which she used a GPT-3 application to help her write a profound, rich, and highly original essay about her sister's death. She confessed how hard it was for her to write this highly emotional piece. GPTs—Generative Pretrained Transformers, colloquially known as "generative AI," or just "AI"—have advanced language-processing capabilities that made it possible for Vara to publish this piece by providing the tool with some context and doing some editing afterward. Examples like this one make clear that generative AI tools have the potential to revolutionize various industries, including composition instruction. Beyond generating content, they can be used to automate tasks, engage with readers (or users) through natural language interactions, streamline workflows, and enhance productivity.
Concerns have been raised about the potential use and misuse of these tools, such as the lack of a mechanism that can test for artificial intelligence's presence in creative works. It came as no surprise when, within a year of ChatGPT's launch, the Writers Guild of America went on strike and novelists like George R. R. Martin and Jonathan Franzen began pursuing a lawsuit against OpenAI—the company that launched ChatGPT. The ability to generate human-like text also raises ethical questions regarding the spread of misinformation and the creation of deepfakes. To address these concerns, it is crucial to approach the use of ChatGPT and related tools with caution and responsibility. The purpose of this collection is to understand some of the affordances of these tools and address concerns about their use in writing and composition fields. In a sense, this is a snapshot of our field's understanding of these tools in late 2024; it is a record that will facilitate research about what follows in terms of technological developments, regulations, and guidelines surrounding AI.
Of course, another major challenge of engaging with a rapidly evolving technology like generative AI (or any new technology, really) is that the pace of change far exceeds the speed of academic publishing. The chapters in the collection thus necessarily reference older versions of this technology; even so, we believe that we are approaching a point of diminishing returns, even as the training data becomes ever more massive—after all, no matter the input, the system cannot think, understand, evaluate, or feel in any way. It can only produce ever more human-sounding text in response to a prompt. We believe that the work presented here offers practical advice and conceptual frameworks that will remain valuable and relevant even as generative AI continues to grow.
A Very Brief History of Generative AI in Writing Studies
Implementation of software programs labeled "Artificial Intelligence" (AI) has a fairly short history, with the technology initially introduced during the 1960s in the form of chatbots (Foote, 2022). The first historical example of generative AI was ELIZA, a chatbot created in the mid-1960s by Joseph Weizenbaum that could respond to a human's requests using natural language. During the 1970s, the development of AI stalled due to a lack of technological capacity, such as the extensive storage systems needed to manage high volumes of data and the computing power needed to work with those data sets. However, work on machine learning tools was still in progress, and automation systems that could take on repetitive tasks, like answering the phone and transferring calls, were common. In the 1990s, AI research took off again thanks to multilayered artificial neural network algorithms and the powerful graphics processing units (GPUs) invented for computer gaming (Foote, 2022). Conversational agents like Siri and Cortana became popular starting in 2011, and the journalism industry saw a big spike in the use of natural language processing technology for generating content in standardized genres such as newspaper sports columns (Matsumoto et al., 2007; Clerwall, 2014).
The biggest changes we have seen since then are the move toward multimodality, increased accuracy in image recognition, and, most recently, generative AI, as AI applications can now process text, images, video, and audio and then produce multimodal outputs through chat functions. Nevertheless, the majority of applications still focus on textual production at this point. The astute reader will notice that the chapters in this collection are oriented more toward textuality than toward multimodality: this is in part a response to the inherently textual nature of ChatGPT at the time of this writing.
As of Spring 2025, we've noticed that only a relatively small number of faculty are taking the time to understand how generative AI works and how it might be taught in college-level courses across the curriculum. The massive hyperbole offered by the companies that stand to profit the most from pushing the inclusion and use of large language model AI applications hasn't helped, from the CEO of Alphabet (which owns Google) declaring that AI is "more profound than fire or electricity or anything we've done in the past" (CBS News, 2023) to AI researchers claiming that current systems are capable of threatening humanity (both exceptionally far-fetched stances, to say the least). Many faculty expect their institutions to provide guidance, but not all institutions are taking even the most basic steps toward developing policy around these new tools. To be sure, while they can't "think," these AI applications can serve as effective tutors and as useful interlocutors that help develop ideas and arguments, and they can be used to summarize longer, more complex texts (as long as those texts aren't too technical). These tools are indeed poised to disrupt our traditional approaches to teaching and learning, and we have yet to see a coherent response to the challenges they pose.
Research on generative AI in writing studies took off exponentially after the release of ChatGPT in November 2022. According to OpenAI, GPT-4, the model underlying ChatGPT, was more inventive and collaborative than its predecessors and could generate, edit, and iterate with users on artistic and technical writing tasks such as songwriting, screenwriting, or learning a user's writing style. While AI tools provide convenience, speed, and perhaps some grammatical accuracy, students and other academics seem to rely on them for mundane, low-stakes tasks that require little curiosity or creativity (AlAfnan et al., 2024). But writing, as extensive research in composition/rhetoric shows, is far more than a concatenation of grammatical structures. Morrison (2023), following more than a half-century of writing studies research, defines writing as an act that simultaneously communicates, persuades, instructs, and explores. The goal of this collection is to highlight the aspects of composing that expand the narrow vision of using AI to automate repetitive tasks and instead to think about how generative AI applications will work with or against instruction in textual and multimodal composing in writing classes.
We solicited proposals for this collection in early 2023, and there was a concern that the time it takes for development, peer review, editing, and publication might render our understandings of these technologies obsolete. Despite the extensive claims made about advances in the technology by the companies creating it, in 2025 we see that generative AI systems are still prone to errors. Researchers at the Tow Center for Digital Journalism found that LLM-based chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead" and that "premium chatbots provided more confidently incorrect answers than their free counterparts" (Jaźwińska & Chandrasekar, 2025). Around the same time, the BBC released a report indicating that generative AI systems often introduced inaccuracies or wholly fabricated information when summarizing news reports (Rahman-Jones, 2025). And biases and stereotypes were still a common problem for outputs in all media, from text to video (Rogers & Turk, 2025).
Despite these shortcomings (and to some extent because of them) we take the position that it is imperative to help students develop critical literacies regarding AI use and misuse, and that the work in this collection provides a foundation for developing those literacies. There are, to be sure, sound reasons to resist incorporating AI in writing classrooms wholesale, and we hope that readers of this collection will also avail themselves of the resources at Refusing GenAI in Writing Studies (Sano-Franchini, McIntyre, and Fernandes, 2024).
Themes of the Collection
This collection is designed for scholars and instructors in the field of Computers and Writing, as well as those in adjacent disciplines such as Rhetoric and Composition, Digital Humanities, and Communication Studies. It delves into the multifaceted intersection of technology and writing, offering insights into the evolving landscape of digital communication and its impact on pedagogy and research. While the chapters in this collection can be read in any order, we have organized the collection into five main themes: histories, policies, applications, multimodal composition, and teaching AI literacies. The corresponding navigation helps readers choose which topics to visit.
By addressing the dynamic interplay between technology, writing, and education, this collection seeks to facilitate a nuanced understanding of the digital age and its transformative potential for the future of scholarship and instruction. We are grateful to the contributors for the wide range of topics they address. Given the rapid pace of change in AI technologies, our aim was for this collection to be a valuable resource for diverse audiences, including faculty, researchers, administrators, and policymakers who are grappling with the implications of AI for teaching and learning. The chapters offer the perspectives necessary to navigate those implications, fostering informed decision-making and responsible implementation of AI-driven tools and strategies in educational settings.
Histories: C&W Approaches to Disruptive Technologies
We decided to start the collection with a historical approach that situates our current moment within the context of how the field of computers and writing has taken on the challenges of new technologies in the past. In 100 Years of New Media Pedagogy, Jason Palmeri and Ben McCorkle argue that instructors have always used various multimedia technologies in the English studies classroom, even before the advent of the Internet. Drawing on an analysis of a corpus of more than 700 articles spanning a century (1912–2012), and providing examples such as audio-visual aids and typewriters in the classroom, they highlight how varied levels of technological engagement shaped the history of writing instruction. Some popular technologies that have aided the evolution of writing pedagogy include MOOs, OWLs, webtexts, hypertext, computer programming (HTML), multimodality, and social media (Palmeri, 2012; Palmeri & McCorkle, 2021; Marlow & Purdy, 2021). Tracing this history helps us understand how the field is now evolving with AI.
Some early concerns about technology in writing spaces came from a critical view of technology that defines technology in terms of human–computer interaction and from a contextualist view of writing: a scenic view that focuses on the production and effects of writing in its political, social, and rhetorical contexts (Porter, 2002). Early works by Donna Haraway (1991), Jim Porter (1998), and Pat Sullivan (1991) described the relationship of humans with technology, especially the networked and social nature of the work that results from its use. Writing studies research tended to focus on the intentions of writers and the outputs generated through the use of technology rather than on the technology itself.
Policies: Programs and Publications
New technologies produce tools for writing that often disrupt prior pedagogical and social norms. The chapters in this section focus on institutional responses to the disruptions posed by generative AI. These policies focus not on what AI is or does, but on how we can responsibly use these new tools. Rhetoricians have traditionally given more weight to the "how" questions than to the more instrumental "what" questions or the more philosophical "why" questions. We ask questions such as "how do we use this technology?" and "how will the technology impact writers and their audiences?" The chapters in this section aim to provide some answers to these questions in concrete terms.
Policy making for generative AI has become important for every sector. We have noticed that some institutions have prioritized policies around acceptable uses for students, while others have decided to wait until we have more certainty about how these new tools will actually work in practice. Both students and faculty have been asking for institutional policies to guide generative AI use; the chapters in this section provide a policy model for teaching and an evaluation of policies developed for academic publications.
Reports from the Field: Classes & Students Using AI
Across all levels of education there have been debates about whether AI could be beneficial to students' learning or detrimental to the development of critical thinking skills and creativity. Chapters in this section provide examples of both faculty-focused and student-led examinations of how generative AI might be used in writing courses. We take the position that trying to ban AI use outright is not a helpful response to the challenges posed by these technologies; rather, students must be taught to use them effectively and ethically. Too often we see faculty operating at one of two extremes: banning AI entirely on one end, or extolling an anthropomorphized, nearly sentient system that can be used for literally any writing task on the other. The aim of the chapters in this section is to show how careful consideration of appropriate use in the teaching of writing can help us better understand AI and, in turn, better teach it as one among many writing tools.
Multimodal Composing: AI Text-to-Image Applications
Most chapters in this collection focus on textual production via generative AI systems based on natural language processing algorithms. These systems, however, also make it possible to create images from text. Text-to-image generation uses computer vision algorithms that simulate the human visual system so that computers can "see" and comprehend the content of still images and films. Many AI applications can now use sound for input and output, but at the time this collection was formed, sound capabilities were far less common. Chapters in this section evaluate the effectiveness of generative AI for multimodal composing, providing an explanation of how these systems work as well as examples of more and less successful cases of multimodal composition.
Teaching AI Literacies
While humans have a distinct edge in the layered, nuanced complexities of communication, AI writing systems certainly have the edge in processing huge volumes of data. But even with seemingly unlimited data points, many AI writing systems are built on an information transfer model of communication that assumes text production is a simple matter of converting raw data into sentences and paragraphs. This model generally obscures the critical role of audience and context and excludes ethics as an element of textual production (McKee & Porter, 2020). Chapters in this section analyze the gaps between human and machine abilities to detect context, along with the impact these gaps in rhetorical practice have on the content produced; they also provide theoretical frames that can help guide AI users and suggest approaches for critical literacy-focused pedagogies for teaching about and with generative AI.
A Note on Image Headers
For the main header image of each chapter, we provided imagine.art with the chapter abstracts we received from their respective authors. We generated images using several different models (labeled in the system as SDLX, Copax Timeless XL, Creative, and others). It has been interesting to see which abstracts prompted images of people and which tried to produce textualized images (most models can't produce actual text, instead creating text-like glyphs). Interestingly, academic text prompts seemed more likely to produce black-and-white images. We typically generated between eight and twelve images and then selected three to combine into the header image for each chapter. We opted not to edit the images (aside from re-sizing), and we did not tweak the prompts to be more effective, as we were curious to see what the system would do with the kinds of texts we provided.
References
AlAfnan, Mohammad Awad, Dishari, Samira, & MohdZuki, Siti Fatimah. (2024). Developing soft skills in the artificial intelligence era: Communication, business writing, and composition skills. Journal of Artificial Intelligence and Technology, 4(4). https://doi.org/10.37965/jait.2024.0496
CBS News. (2023, April 16). Interview with Sundar Pichai. https://www.cbsnews.com/news/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16/
Clerwall, Christer. (2014). Enter the robot journalist. Journalism Practice, 8(5), 519–531.
Foote, Keith D. (2022). The history of machine learning and its convergent trajectory towards AI. In Silvio Carta (Ed.), Machine learning and the city: Applications in architecture and urban design (pp. 129–142). John Wiley and Sons.
Handa, Carolyn (Ed.). (1990). Computers and community: Teaching composition in the twenty-first century. Boynton/Cook.
Haraway, Donna. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs, and women: The reinvention of nature (pp. 149–181). Routledge.
Hawisher, Gail E., & LeBlanc, Paul (Eds.). (1992). Re-imagining computers and composition: Teaching and research in the virtual age. Boynton/Cook.
Hawisher, Gail E., & Selfe, Cynthia L. (Eds.). (1989). Critical perspectives on computers and composition instruction. Teachers College Press.
Hawisher, Gail E., & Selfe, Cynthia L. (Eds.). (1991). Evolving perspectives on computers and composition studies: Questions for the 1990s. NCTE.
Holdstein, Deborah H., & Selfe, Cynthia L. (Eds.). (1990). Computers and writing: Theory, research, practice. MLA.
Jaźwińska, Klaudia, & Chandrasekar, Aisvarya. (2025, March 6). AI search has a citation problem. Columbia Journalism Review. https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php
Marlow, Jennifer, & Purdy, James P. (2021). Are we there yet? Computers and the teaching of writing in American higher education—twenty years later. Computers and Composition Digital Press. https://ccdigitalpress.org/book/arewethereyet/
Matsumoto, R., Nakayama, H., Harada, T., & Kuniyoshi, Y. (2007). Journalist robot: Robot system making news articles from real world. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1234–1241). Institute of Electrical and Electronics Engineers (IEEE).
McKee, Heidi, & Porter, James. (2020). Ethics for AI writing: The importance of rhetorical context. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 110–116). Association for Computing Machinery (ACM).
Morrison, Aimée. (2023). Meta-writing: AI and writing. Composition Studies, 51(1). https://compstudiesjournal.com/wp-content/uploads/2023/06/morrison.pdf
Palmeri, Jason. (2012). Remixing composition: A history of multimodal writing pedagogy. Southern Illinois University Press.
Palmeri, Jason, & McCorkle, Ben. (2021). 100 years of new media pedagogy. University of Michigan Press.
Porter, James E. (1998). Rhetorical ethics and internetworked writing. Ablex.
Rahman-Jones, Imran. (2025, February 11). AI chatbots unable to accurately summarize news, BBC finds. BBC. https://www.bbc.com/news/articles/c0m17d8827ko
Rogers, Reece, & Turk, Victoria. (2025, March 23). OpenAI's Sora is plagued by sexist, racist, and ableist biases. Wired. https://www.wired.com/story/openai-sora-video-generator-bias/
Sano-Franchini, Jennifer, McIntyre, Megan, & Fernandes, Maggie. (2024). Refusing GenAI in writing studies: A quickstart guide. https://refusinggenai.wordpress.com/
Sullivan, Patricia A. (1991). Taking control of the page: Electronic writing and word publishing. In Gail E. Hawisher & Cynthia L. Selfe (Eds.), Evolving perspectives on computers and composition studies: Questions for the 1990s (pp. 43–64). NCTE.
Vara, Vauhini. (2021). Ghosts. The Believer.
Vara, Vauhini. (2023, September 21). Confessions of a viral AI writer. Wired. https://www.wired.com/story/confessions-viral-ai-writer-chatgpt/
Chapter Abstracts
The chapters in this collection were originally drafted in 2023 and 2024, often reporting on projects that relied on earlier versions of GPT applications (typically ChatGPT 3.5 and 4.0 or Claude 2.0). However, no chapter relies entirely upon the specifics of any given GPT application or version. The works presented here were reviewed and edited in 2025, and the insights and findings presented in this collection remain relevant and continue to convey important findings. In the computers and writing field, we have a long history of producing works about technological innovations that don't rely solely on specific new products so much as investigating the processes and implications of new communication technologies on writing practices (see, e.g., Hawisher & Selfe, 1989; Holdstein & Selfe, 1990; Hawisher & Selfe, 1991; Hawisher & LeBlanc, 1992). Our aim in this collection is to continue that tradition of critical analysis of new technologies without falling into the trap of reporting on or assessing specific instances or applications of generative AI.
Histories | Policy | Pedagogies | Multimodal | Literacies
Histories: C&W Approaches to Disruptive Technologies
What We Already Know: Generative AI and the History of Writing With and Through Digital Technologies
Anthony Atkins and Colleen A. Reilly
Atkins and Reilly examine three central themes present in both past and current literature in the field of computers and writing: the challenges posed to conceptions of writing and authorship; the access and accessibility implications of information and communication technologies; and the degree to which technologies reveal and mask their mediation of content. The scholarship addressing these themes is as relevant for working and teaching with AI as it was for working with MOOs (English, 1998), Wikipedia, and a myriad of other information and communication technologies. They argue that the field can and should harness this scholarly legacy to help faculty and students navigate the evolving context for writing, composing, editing, and design necessitated by the introduction of generative AI.
GPT Applications: All Vendors/All Versions
The Black-Boxed Ideology of Automated Writing Evaluation Software
Antonio Hamilton and Finola McMahon
Hamilton and McMahon draw on the decades-long conversation regarding automated writing evaluation (AWE) technologies. These technologies are often heavily black-boxed; consequently, instructors, students, and writers often use them without fully understanding their function and their impact on writing. In a study of current AWE applications, the authors found three central themes: 1) users' technical illiteracy being used against them to prevent a full understanding of the programs prior to purchase; 2) the programs' websites obscuring details about how the algorithms function; and 3) black-boxing as an appeal to current-traditional rhetoric and the use of static abstractions in writing feedback.
GPT Applications: All Vendors/All Versions
Policy: Programs & Publications
Drafting a Policy for Critical Use of AI Writing Technologies in Higher Education
Daniel Frank and Jennifer Johnson
Frank and Johnson present a dialectical conversation reflecting on the incorporation of generative AI tools in the writing classroom. They trace the development of the UCSB Writing Program's Policy on ChatGPT and AI Writing, which aims to provide ethical guidance and support for faculty and students in using these technologies. Through faculty workshops and discussions, Frank and Johnson shaped the key principles of the UCSB policy: 1) Integrating AI as one of many supportive feedback tools; 2) Promoting academic integrity through transparency about AI usage; 3) Fostering critical thinking about the tools' biases and limitations; 4) Cautioning against over-reliance on AI detection.
GPT Applications: Initially ChatGPT 3.5 but applies to All Vendors/All Versions
The Construction of Authorship and Writing in Journal and Publisher AI Policy Statements
James P. Purdy
Purdy examines updates to the submission policies of academic journals and publishers prompted by the emergence of generative AI. In arguing that generative AI cannot be listed as an author, these policies define what authors, and by extension writing, are and should be. In this chapter, these policy responses are situated in relation to early computers and writing scholarship, including Burns (1983), Herrington and Moran (2001), and Baron (2000). Based on close reading and content analysis of ten journal and publisher AI policies published within six months of ChatGPT's initial public release, Purdy found that the policies limit authors to people, allow inclusion of some AI-generated content under certain conditions and citation guidelines, and frame writing as a textual product performing transactional functions.
GPT Applications: All Vendors/All Versions
Reports from the Field: Classes & Students Using AI
Reconsidering Writing Pedagogy in the Era of ChatGPT: Results of a Usability Study of ChatGPT in Academic Writing
Lee-Ann Kastman Breuch, Asmita Ghimire, Kathleen Bolander, Stuart Deets, Alison Obright, and Jessica Remcheck
Inspired by the question "How are undergraduate students understanding ChatGPT as an academic writing tool?", this chapter shares results of a study of student impressions of ChatGPT conducted at a large midwestern university. Thirty-two undergraduate students participated in a contextual usability inquiry study, completing five tasks using ChatGPT and rating the outputs in terms of expectations, satisfaction, credibility, and relevance. Students consistently rated ChatGPT texts as high in relevance and expectations, but lower in satisfaction and credibility. Based on the results of the study, the authors advocate a "critical AI literacy" approach that invites students to use ChatGPT and critically evaluate its texts.
GPT Applications: ChatGPT 3.5
ChatGPT is Not Your Friend: The Importance of AI Literacy for Inclusive Writing Pedagogy
Mark C. Marino
Marino reflects on his discoveries from a summer intensive first-year writing course focused on machine-assisted writing, taught at the University of Southern California in 2023 and later revised for sections of an advanced writing course. The course focused on both understanding and using ChatGPT 3.5 and other LLMs for everything in class, from generating a start-of-class check-in question, which they did quite well, to augmenting research methods, which had mixed results. Ultimately these experimental lessons revealed two important findings: AI tools present yet another divisive wedge between the digitally literate haves and the less literate have-nots, and as students' understanding of these systems increases, the potential for productive, creative, and critical use of these tools likewise increases. This chapter details the experimental assignments, in-class work, and theoretical bases that led to those findings.
GPT Applications: ChatGPT 4.0, but assignments apply to All Vendors/All Versions
Mind the Gaps: Evaluating Student Perceptions on GenAI and the Future of Writing
Jeanne Law, James Blakely, John C. Havard, and Laura Palmer
This chapter presents findings from a study conducted at Kennesaw State University (KSU) aimed at illuminating first-year students' perceptions of gen-AI and measuring student attitudes toward gen-AI in both academic and personal writing contexts. Though initial quantitative findings indicated a high awareness of gen-AI among students, many respondents indicated they never use gen-AI for academic purposes. Additionally, a significant portion of students expressed uncertainty about the future role of gen-AI in writing, and opinions remained divided on the ethics of AI use in academia. Technical communication students were more accepting of gen-AI than first-year writing students across contexts, reflecting their comfort integrating new technologies into their work. This study underscores the need for ongoing dialogue about AI and the development of pedagogical strategies to address the ethical and practical implications of gen-AI in education.
GPT Applications: All Vendors/All Versions
Multimodal Composing: AI Text-to-Image Applications
Composing the Future: Speculative Design and AI Text-to-Image Synthesis
Jamie Littlefield
This chapter proposes speculative design as a critical method for engaging with AI text-to-image synthesis. Large Language Models (LLMs) like ChatGPT and text-to-image synthesis tools such as Midjourney are fundamentally rooted in the past, trained on vast datasets that encapsulate historical texts, images, and multimedia. AI’s entrenchment in the past generates an algorithmic resistance to change, creating synthetic output that uncritically re-creates values and social structures. Drawing on case studies from urban communication, the chapter demonstrates how speculative design can help communicators notice and challenge AI’s entanglement with the past. In the writing classroom, speculative design practices can provide students with concrete approaches to analyzing and composing AI-generated images that demonstrate awareness of the ways AI reproduces the past.
GPT Applications: DALL-E 2, DALL-E 3, Midjourney (2023)
Teaching about Technology Bias with Text-To-Image Generative AI
Sierra S. Parker
This chapter analyzes biases undergirding and (re)produced by text-to-image generative AI compositions, presenting a critical approach that engages technology bias and visual pedagogies of artificial intelligence for the rhetoric and writing classroom. By analyzing examples of text prompt inputs and image outputs from two AI models (Dall-E 2 and Bing Image Creator), Parker illustrates how bias informs the process of composing AI images through not only the user but also the technology. Presenting examples of the ways that AI images direct the viewer, the author offers strategies for using this technology meaningfully in the classroom as an object of critical analysis, providing guiding prompts for classroom activities and possibilities for scaffolding AI image content within rhetoric and writing courses.
GPT Applications: DALL-E 2, Bing Image Creator (2023)
Teaching AI Literacies
LLMs for Style Pedagogy
Christopher Eisenhart
Eisenhart examines the capacities of generative AI applications to engage in the editing and revision work of style by putting ChatGPT 3.5 through the exercises presented in Joseph Williams's Style: Lessons in Clarity and Grace. The chapter provides an analysis of the findings, including places where ChatGPT performed as well as a human student would, and also those places where its struggles were similar to those of many first-year writing students, especially where context and inference are required for successful revision.
GPT Applications: ChatGPT 3.5
Teaching Knowledge Labor and Literacy for the Age of AI and Beyond with Rhetorical Information Theory
Patrick Love
This chapter presents key concepts from Information Theory that inform Generative AI's function as a chatbot that "generates" responses by selecting and remixing from training data to produce responses to user requests. In particular, this chapter presents the DIKW pyramid, a graphic that Information Theory uses to rationalize the theoretical relationships between data, information, knowledge, and wisdom as part of teaching machines to participate in knowledge work. The chapter demonstrates connections between DIKW, circulation theory, and active learning pedagogy to present a version of the DIKW pyramid crafted for composition classes and rhetoricians to use as a boundary object that can further connections and partnerships between humanities and STEM disciplines, professionals, and students.
GPT Applications: All Vendors/All Versions
Interfacing ChatGPT: A Heuristic for Improving Generative AI Literacies
Desiree Dighton
This study explores the implications of ChatGPT and similar generative AI technologies in the context of writing studies, emphasizing the need for developing critical AI literacies. The research draws on historical and contemporary theories of interface design and usability, including the work of Selfe and Selfe, Mel Stanfill, and Corinne Jones, to analyze how these technologies shape user interactions and perpetuate dominant cultural values. The interface of ChatGPT, particularly its conversational design, is examined for its affordances and constraints, revealing how it subtly directs user behavior and engagement. By integrating heuristic development and analysis into classroom practices, this study aims to enhance students' understanding of AI technologies, fostering critical engagement and agency. The findings highlight the importance of situating AI tools within broader socio-cultural contexts, advocating for a more inclusive and reflective approach to integrating AI in educational settings.
GPT Applications: ChatGPT 3.5, ChatGPT 4.0, applies to All Vendors/All Versions
Contributors
Anthony Atkins
Anthony T. Atkins is an Associate Professor of English at University of North Carolina Wilmington. He teaches courses in rhetoric and professional writing, document design, and social media. He is currently the faculty associate in the Center for Teaching Excellence.
James Blakely
James Blakely is a PhD student at the Ohio State University, where his research focuses on digital rhetorics, cultural studies, and writing pedagogy. He has published work in outlets such as Computers and Composition and the Sweetland Digital Rhetoric Collaborative's Blog Carnival and has presented research at multiple national conferences.
Kathleen Bolander
Kathleen Bolander is a PhD student in the Department of Writing Studies at the University of Minnesota–Twin Cities. She is also a graduate mentor with UMN Athletics. Her main area of research is digital rhetoric and the rhetoric of silence.
Stuart Deets
Stuart Deets is a PhD student in the Department of Writing Studies at the University of Minnesota–Twin Cities. His research focuses on communicating public policy.
Desiree Dighton
Desiree Dighton is an Assistant Professor at East Carolina University. Her research centers on the rhetorical and cultural consequences of design. Leveraging interdisciplinary research methods, she has contributed journal articles and book chapters that illuminate the historical roots and contemporary consequences of designs ranging from computer technologies to neighborhood master plans. She encourages rhetorical and cultural interventions in design work and more inclusive participation in our shared spaces of living and working.
Christopher Eisenhart
Christopher Eisenhart is Professor of Rhetoric and Communication at the University of Massachusetts, Dartmouth. He studies scientific, technical, and public discourse and teaches in UMD's Master's program in Professional Writing & Communication.
Douglas Eyman
Douglas Eyman is the senior editor and publisher of Kairos. He teaches courses in digital rhetoric, technical communication, web authoring, and professional writing at George Mason University. His current research interests include the affordances and constraints of composing with generative AI/LLMs, new media scholarship, teaching in digital environments, and video games as sites of composition. With Nupoor Ranade, he recently co-edited a special issue of Computers and Composition on "Composing with AI."
Daniel Frank
Daniel Frank teaches composition and multimedia rhetorics, with research interests in generative AI, game-based pedagogy, and connected learning. He helps students find their passions as they create, play, and communicate research, argumentation, and writing across genres, networks, and digital communities.
Asmita Ghimire
Asmita Ghimire is a PhD candidate in the Rhetoric, Scientific and Technical Writing program at the University of Minnesota–Twin Cities. Her research areas are rhetoric, international and intercultural technical communication, and transnational feminist studies.
Antonio Hamilton
Antonio Hamilton is a doctoral candidate in English, with a concentration in Writing Studies at the University of Illinois Urbana-Champaign. His research centers on the impact of generative AI on the writing process, formulation of writers' identity, and understanding of diverse writing styles.
John C. Havard
John C. Havard is Professor of Early American Literature at Kennesaw State University. He is the author of Hispanicism and Early US Literature: Spain, Mexico, Cuba, and the Origins of US National Identity (U of Alabama Press, 2018) and co-editor of Spain, the United States, and Transatlantic Literary Culture throughout the Nineteenth Century (Routledge, 2021).
Jennifer K. Johnson
Jennifer K. Johnson teaches first-year composition, professional writing, and a variety of upper-division writing courses. Her current research interests include TA training, genre theory, and disciplinarity, particularly in terms of the relationship between composition and literary studies. She has also developed a newfound interest in how LLMs can be utilized in the classroom.
Lee-Ann Kastman Breuch
Lee-Ann Kastman Breuch is a Professor in the Department of Writing Studies at the University of Minnesota and Associate Dean for Undergraduate Education in the College of Liberal Arts. Her research investigates rhetoric and digital writing in a variety of settings such as classrooms, professional organizations, and social media. She teaches courses in technical communication, digital writing, usability research, and evaluation of online interfaces.
Jeanne Law
Jeanne Beatrix Law is a professor at Kennesaw State University. Her research includes multimodal languaging and generative AI technologies for writers. Her public scholarship includes scaling historical rhetorics for diverse audiences and emergent modalities. Jeanne serves as a faculty mentor for the AAC&U’s AI Pedagogy Institute.
Jamie Littlefield
Jamie Littlefield studies technical communication and rhetoric at Texas Tech University. Her research examines the impact of technical communication on urban development, focusing on housing, street design, and public space. As a Google Fiber Digital Inclusion Fellow, Jamie partnered with the Nonprofit Technology Network to bridge the digital divide within communities.
Patrick Love
Patrick Love is an Assistant Professor of English and the Associate Director of Composition at Monmouth University. His other work is published in Technical Communication Quarterly and SIGDOC and by Palgrave Macmillan. Patrick received his PhD in Rhetoric and Composition from Purdue University in 2019.
Mark C. Marino
Mark C. Marino is a Professor of Writing at the University of Southern California, where he directs the Humanities and Critical Code Studies Lab. Since 2008, he has been the Director of Communication for the Electronic Literature Organization. His latest books are Critical Code Studies (2020) and Hallucinate This! an authoritized autobotography of ChatGPT (2023).
Finola McMahon
Finola McMahon is a doctoral candidate in English, with a Writing Studies concentration and a Queer Studies minor at the University of Illinois Urbana-Champaign. They research writing norms and the regulation of writers’ behavior in community spaces, online and in-person, as well as how writers queer and complicate these norms.
Alison Obright
Alison Obright is a PhD student in the Department of Writing Studies at the University of Minnesota–Twin Cities. Her research focuses on the rhetoric of science, pseudoscience, and technology.
Laura Palmer
Laura Palmer is a Professor of Technical Communication and Rhetoric. Her current work examines the intersections of humanities thinking and technology, with a focus on accessibility.
Sierra S. Parker
Sierra S. Parker is currently a PhD candidate in English and Visual Studies at Penn State University. She studies sensory rhetoric, focusing primarily on the visual across various media and applications. Her study of visuality has engaged AI and digital culture, health campaigns and drug addiction, and archival practices.
James P. Purdy
James P. Purdy is a Professor of English/Writing Studies and Director of the University and Community Writing Centers at Duquesne University. His two co-written books, four edited volumes, and numerous scholarly articles and chapters share his research on the technologically mediated research-writing practices of scholars from students to professors.
Nupoor Ranade
Nupoor Ranade is an Assistant Professor of Rhetoric and Technical Communication at Carnegie Mellon University. Her research explores the histories of audiences in rhetoric and technical communication, ethical AI, and content strategy. She began analyzing the landscape of AI research with a humanities lens in 2019, well before the advent of ChatGPT, and has published several articles on AI in Computers and Composition, Technical Communication, and AI & Society, among others. She recently co-edited a special issue of Computers and Composition on "Composing with AI" with Douglas Eyman.
Colleen A. Reilly
Colleen A. Reilly is Professor of English at the University of North Carolina Wilmington. She teaches undergraduate and graduate courses in technical writing and editing, science writing, and genders, sexualities, and technologies. From Spring 2018 through Spring 2023, she co-edited the open-access Journal of Effective Teaching in Higher Education.
Jessica Remcheck
Jessica Remcheck is a PhD student in the Department of Writing Studies at the University of Minnesota–Twin Cities. Her research areas include rhetoric of health and medicine and rhetoric of science.