Interfacing ChatGPT
Desiree Dighton
ChatGPT is to Writing as a Parent is to a Child
If you asked most college students how ChatGPT works, I’d guess that most would answer much as mine did recently: Artificial Intelligence. Even when I asked for more details and specifics, most students shrugged or repeated “AI” with air quotes and question marks in their voices. Gallagher (2023) found the language we use to talk about generative AI (GAI) “indicates alignments with certain ideologies” (p. 151). He suggested that more technical terms like machine learning (ML) may help students develop greater awareness of these systems’ data and processing aspects, while artificial intelligence (AI) may help them understand how public “hype” is created around these technologies and products.
Writing studies has a long history of expertise in the ideological function of language. We teach students to grapple with language’s nuance and power and to use language knowing it can have real-world consequences (Lakoff & Johnson, 1980; Burke, 1969). We also teach them that, especially in our internet-enabled global world, our original messages can circulate to new audiences and produce benefits and harms we can’t always anticipate (Gries, 2018; Jones, 2021). What do our students imagine when they hear a phrase like “powered by AI”? We can work to illuminate how these phrases circulate Selfe’s dominant narratives and operate as Burkean frames of acceptance, but more basically, we can talk about how this language makes them feel, how it works to sell products and levers the public toward uncritical use and consumption. Antoine Francis Hardy (2020) stated, “public rhetors use frames as a means by which they adopt attitudes towards society and prescribed said frames for audiences” (p. 30). Frames, metaphors, and narratives are the cultural languages by which we communicate and understand the world. They are also the rhetorical mechanisms by which GPT circulates, gains widespread acceptance, and, increasingly, becomes the status quo.
In “AI as Agency without Intelligence,” Floridi (2023) explained that, contrary to perceptions about the advanced intelligence and functionality of ChatGPT-4, it does not understand texts in the humanlike way its fluent responses might lead us to believe. Instead, it “operates statistically—that is, working on its formal structure, and not on the texts they process” (pp. 14–15). ChatGPT doesn’t evaluate texts as humans might, analyzing the nuances of authorial credentials, publication characteristics, writing genre conventions, scholarly peer review, and other publishing processes and ethics. In GAI interfaces like ChatGPT’s, where is the human agency to “pay close attention to how it [a text] was produced, why and with what impact?” (Floridi, 2023, p. 15). Instead, GAI interfaces have been designed to obscure their processes and prevent users from participating more fully in shaping their personalized outputs.
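To make Floridi’s point concrete for students, a minimal sketch can show what “operating statistically” looks like at its smallest scale. The short Python program below is a hypothetical classroom demonstration, not a description of how ChatGPT is actually built: it counts which word follows which in a tiny sample text and then “writes” by sampling from those counts. It has no access to meaning, authorship, or credibility, only to the formal patterns of the words themselves.

import random
from collections import defaultdict

# Toy demonstration of "operating statistically": the program learns only
# which word tends to follow which in a sample text. This is a classroom-scale
# sketch, not how ChatGPT or other large language models are implemented.
sample_text = (
    "writing studies has a long history of expertise in the ideological "
    "function of language and writing studies teaches students to grapple "
    "with the nuance and power of language"
)

words = sample_text.split()
following = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    following[current_word].append(next_word)

def generate(start_word, length=12):
    """Generate text by repeatedly sampling an observed next word."""
    word = start_word
    output = [word]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:  # no observed continuation for this word; stop
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("writing"))

However fluent a real system’s output appears, the difference from this sketch lies in the scale and sophistication of the statistics, not in a shift from formal pattern to human understanding.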
Like all interfaces, ChatGPT’s design can and does change through developer- and company-implemented adjustments. These redesigns could take many forms, including versions that would enable greater user agency and system transparency. For instance, when I prompted ChatGPT to provide a list of the most influential computers and writing scholars, I had to further prompt it to be more inclusive by asking for “marginalized or non-white scholars.” GPT was cheerful about regenerating its response, stating that our field “has been enriched by the contributions of marginalized and non-white scholars, who have brought unique perspectives and important critiques related to race, culture, identity, and digital spaces” (ChatGPT, October 20, 2023). If its designers and programmers gave importance to inclusion, perhaps ChatGPT could calculate and provide results within more inclusive metrics, or give users interface controls to indicate they’d like more inclusive data and results. Prompt engineering is presently the only way to try to generate more inclusive results, which puts the onus on the user, not the underlying system. With GPT’s texts increasingly passing human expert judges and plagiarism detectors alike, the gap between human- and machine-generated texts is closing, even as we remain uncertain and concerned about GAI’s impacts on equity and inclusion in how we make and circulate meaning through texts.
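As a sketch of what such an interface control might look like, the hypothetical code below translates an imagined “include marginalized scholars” toggle into a system prompt sent through OpenAI’s chat API (the toggle and its wording are my invention for illustration, and the model name is assumed). Even in this imagined redesign, the control is only another layer of prompt text over the same model and the same training data, which is precisely the limitation that shifts the onus back onto users.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def ask(question, include_marginalized_scholars=False):
    """Hypothetical sketch: an interface toggle translated into a system prompt."""
    system_prompt = "You are a helpful assistant."
    if include_marginalized_scholars:
        # The "control" is still just prompt text layered over the same model.
        system_prompt += (
            " When listing influential scholars, include marginalized and "
            "non-white scholars alongside canonical figures."
        )
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name for illustration
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Who are the most influential computers and writing scholars?",
          include_marginalized_scholars=True))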
In the Proceedings of the Digital Humanities Congress (2018), Henrickson observed that more research is needed to better understand how ordinary people ascribe authorship to computer-generated texts. Focusing on 500 participant responses, she conducted a “systematic analysis of computer-generated text reception using ‘Natural Language Generating’ (NLG) technologies.” Henrickson found the concept of an “author” conformed with a “conventional understanding of authorship wherein the author is regarded as an individual creative genius motivated by intention-driven agency” (Conclusion, para. 1). She further observed participants likening the writing process of NLG systems to that of a parent (developer) passing along knowledge to a child (an AI NLG/LLM system like ChatGPT), noting that this parent-child narrative humanizes and thereby normalizes the technology as an authority. Henrickson found that users automatically responded to this technology as they would to other people, concluding that most “readers feel that the system is capable of creating sufficiently original textual content” and that “the process of assembling words, regardless of developer influence, is in itself enough for the system to attribute authorship” (The NLG System as Author, para. 3). If, as Henrickson observed, our students believe GAI can author texts, what entry points do they have for claiming agency over the texts it generates? Does its credibility and authority extend to every possible writing topic and task? With these questions and more, not to mention the weight and variety of institutional and instructor GAI policies and the threat of AI-detection software, is it any wonder our students are confused about how to move forward? In this climate, students “may experience an increased sense of alienation and mistrust” and “increased linguistic injustice because LLMs promote an uncritical normative reproduction of standardized English usage that aligns with dominant racial and economic power structures” (MLA-CCCC Task Force on Writing and AI, July 2023, p. 7). As these concerns point out, students will experience uneven dangers and benefits from GAI applications like ChatGPT depending on cultural and social power differentials.
In the vacuum created by a lack of effective GAI regulation, teaching critical AI literacies may give students greater agency in writing under AI’s perceived brilliance. GAI’s grip on the public has been achieved partly through rhetorical maneuvers that have shaped public attitudes of acceptance. ChatGPT’s interface and many public conversations frame GAI/ChatGPT as a superhuman power made in the shape of an ideal friend.
Corinne Jones (2021), connecting with Selfe and Selfe (1994) and Stanfill (2015), found that “interfaces play an important role in circulation as a world-making process because they create normative circulatory (1) practices, (2) content, and (3) positions. They perpetuate power and they produce norms for who can circulate what information and how they circulate it” (p. 2). The next sections provide suggestions for operationalizing writing studies research on power and interface to create heuristics for building critical AI literacies in writing classrooms and other communities.