Interfacing ChatGPT

Desiree Dighton

Your Super-Smart Friend, ChatGPT

One of GPT’s most prized features is its ability to efficiently simplify complex information for non-expert users. Since most of us, including our students, have trouble grasping and explaining its processes, I asked GPT to explain how it works to someone with only basic computer knowledge. GPT-4 stated, “Suppose your friend spent years reading from millions of books, articles, and other texts. They didn't memorize everything, but they learned patterns, like how sentences are structured, what kind of answers people give to certain questions, and lots of information about the world” (ChatGPT, October 19, 2023). The model doesn’t have to locate the answer in any specific text: “Instead, they think about all the patterns they've learned and try to generate a response that makes the most sense based on what they know. It's like they're completing a sentence, but the sentence is your answer” (ChatGPT, October 19, 2023).

It’s worth repeating: ChatGPT wants to complete your sentences for you. It values the ability to remake knowledge and “personalize” it behind user interface windows that keep its processes mostly invisible. The interface limits user input to what can be accomplished through human conversation, a design that guarantees ease of use but offers no interface features or backend access points for controlling the processes and data involved in that use. Added to this unequal equation is that when we establish accounts and ask GAI applications like ChatGPT questions, often prompting with more specifics to get better responses, our interactions provide ChatGPT with broad access to us, access that becomes its parent company’s data, the value of which, at scale, is more compelling to ponder than its personalized responses.

GAI web-based applications like ChatGPT often include data protection policies and ways for users to opt out of some levels of data collection and brokerage. OpenAI’s privacy policy states that it collects a wide range of “Personal Data,” providing only a conditional option for users to protect the use, not the collection, of their “Content to train our models.” To do so, users must locate their settings, click Data Controls, and toggle off the default option labeled “Improve the model for everyone.” According to its policies, this option protects user-inputted content from being used to train its models but does not allow users to opt out of further data collection; ChatGPT’s parent company may “share your personal information with affiliates, vendors, service providers, and law enforcement” (O'Flaherty, 2024). While we may have some federal and state data privacy protections, Satory et al. (2014) found “significant gaps in the effectiveness and enforcement of data privacy laws, cybersecurity regulations, and AI accountability” (p. 657). Furthermore, the research team noted that “the accountability of AI systems remains a critical concern, with current regulations falling short in addressing issues of transparency, bias, and ethical decision-making” (p. 666). Despite these potential dangers and harms, GAI interfaces position users to accept their terms and “mistakes” to gain access to the power of artificial intelligence.

ChatGPT’s interface tells users to accept its dangers as we would ordinary human mistakes while declaring its technical neutrality, effectively saying to the user, "I am a friend that doesn't have feelings, emotions, or consciousness. I'm just a tool, like a very advanced calculator for language." Appeals to human qualities and emotions like friendship and trust are some of ChatGPT's winningest affective flirtations. They nudge us where we are soft, using our human emotions and insecurities, to accept GPT as we would a human friend. These affective flirtations work against our resistance to the application, softening our defenses and norming us to GPT's method of collaboration.

We can accept UX/UI heuristic evaluations of GPT's interface and its flirtatious, humanizing appeals for our friendship, knowing that acceptance demands tolerance for potentially serious errors and harms. Alternatively, we can develop interface heuristics that forge critical AI literacies—perhaps the few protections we have to design and implement, at least in our classrooms. As Agboka (2012) noted, however, heuristics too often reflect principles of “Big Culture” ideologies like those embedded in Nielsen’s heuristic tools. Instead, writing studies should create dynamic, flexible heuristic frameworks that adapt to cultural differences and technical changes in our classrooms and other communities. Structured yet flexible GAI heuristic evaluation can help students discern ChatGPT’s benefits and limitations as a “personalized” writing tool, opening more opportunities for them to claim agency in its use.

More broadly, heuristics based on writing studies theories and pedagogies promote more nuanced and culturally aware evaluation of interface design beyond our classrooms. As writing studies scholars have argued, the influence of technologies is not confined to particular interactions—an interface functions as a circulatory, "world-making process" (Jones, 2021). As Grabill (2003) pointed out, “to ignore infrastructure, then, is to miss key moments when its meaning and value become stabilized (if even for a moment), and therefore to miss moments when possibilities and identities are established” (p. 464).