Interfacing ChatGPT

Desiree Dighton

Your Super-Smart Friend, ChatGPT

ChatGPT's strength is often touted as its ability to efficiently simplify complex information. Since most of us, including our students, have trouble grasping and explaining its processes, I asked GPT to explain how it works to someone with only basic computer knowledge. GPT-4 stated, “Suppose your friend spent years reading from millions of books, articles, and other texts. They didn't memorize everything, but they learned patterns, like how sentences are structured, what kind of answers people give to certain questions, and lots of information about the world” (ChatGPT, October 19, 2023). ChatGPT-4 said it has a “brain-like structure,” with digital versions of neurons called neural networks. “After reading all those texts, your friend [GPT-4] practiced guessing what word comes next in a sentence” so many times until it “got really good at understanding language and predicting what should come next.” Now, it doesn’t have to find or locate the answer in any specific texts: “Instead, they [GPTs] think about all the patterns they've learned and try to generate a response that makes the most sense based on what they know. It's like they're completing a sentence, but the sentence is your answer” (ChatGPT, October 19, 2023).

ChatGPT wants to complete your sentences for you. It values the ability to remake knowledge in its own interface, rendering knowledge production invisible, shaped by machine pattern-finding and regurgitation. This kind of knowledge production doesn't require humans to perform research, read and consider the sources of their information, or come up with new ideas that extend or complicate other ideas; it only requires that we have a chat with GPT. As we type our questions, we're giving ChatGPT's LLM and its parent company more data, arguably more valuable than the responses it generates for us. OpenAI/ChatGPT includes options for users to opt out of allowing their queries to be used for training purposes, though to opt out, you need to dig deep into user settings and data controls and toggle off the default, "improve the model for everyone" (a snide critique should you choose privacy). OpenAI's privacy policy does not state that it will protect your personal data or allow you to opt out of that data collection and retention, even if you opt out of training, and it reserves other vaguely relevant but suspiciously icky uses, such as the right "to share your personal information with affiliates, vendors, service providers, and law enforcement" (O'Flaherty, 2024). A recent study focused on data privacy laws and AI found "significant gaps in the effectiveness and enforcement of data privacy laws, cybersecurity regulations, and AI accountability" (Satory et al., p. 657). The researchers concluded that even after GAI's public proliferation and widespread use, "the accountability of AI systems remains a critical concern, with current regulations falling short in addressing issues of transparency, bias, and ethical decision-making" (p. 666). Despite these dangers and more, GAI interfaces and their parent companies want us to accept that, "Just like any friend, ChatGPT-4 isn't perfect" (ChatGPT, October 19, 2023).
In this response snippet, GPT framed itself as a flawed human; mistakes, while not a good thing, are generally endearingly human.

Further into this chat, GPT pivoted away from its appeals to be seen as humanly fallible and asked me to remember that it's just a neutral technology that can't be maligned like a human: "This friend doesn't have feelings, emotions, or consciousness. They're just a tool, like a very advanced calculator for language." Appeals to human qualities and emotions like friendship and trust are what I call ChatGPT's "affective flirtations." Affective flirtations nudge us to accept ChatGPT as a friend, an anthropomorphized technology that has our back, through its ever-present, polite, personalized responses. Affective flirtations disguise its more slippery values and behaviors beneath our awareness, softening our defenses and "norming us" to GPT's version of "information" search and retrieval.

We can accept outstanding UX/UI heuristic evaluations of GPT's interface and its own flirtatious, humanized explanations of its processes. We can accept its appeals for our friendship, which entail tolerance for its errors and shortcomings and for its vague-to-potentially-predatory data privacy and security policies. Or we can develop our own writing studies interface heuristics and forge one of many necessary paths forward in developing critical AI literacies. In our classrooms, students can learn more critical methods of contextualized heuristic evaluation while also building critical AI literacies. As Agboka and others in rhetoric and TPC have noted, heuristics should not merely reflect principles of "Big Culture" ideologies like those embedded in Nielsen's heuristic tools; instead, we should create frameworks that are "dynamic" and "pluralistic" enough to include the social, political, and learning/writing environments and relations of students and professional users of technologies as they change and intersect. Agboka (2012) stated, "another drawback of the heuristic approach is that it will only too easily work to the advantage of powerful interests, because from a marketing perspective, for example, a single, global/monolithic culture would be Utopia for corporate interests and dystopia for everyone else" (p. 169). Whether we call them flexible and dynamic heuristics or frameworks, structured yet flexible GAI activities that examine the materiality of the interface alongside the promotional and everyday conversations that circulate and shape its acceptance can help students exercise, and better understand the limits of, their agency with GPT. Such activities can also promote more nuanced and culturally aware evaluation of interface design. As writing studies scholars have pointed out, the influence of technologies is not confined to particular interactions: interfaces function as a "circulatory, world-making process" (Jones, 2021).
As Grabill (2004) pointed out, “to ignore infrastructure, then, is to miss key moments when its meaning and value become stabilized (if even for a moment), and therefore to miss moments when possibilities and identities are established” (p. 464).