Interfacing ChatGPT

Desiree Dighton

Classroom Heuristic Development Methodology

In the class, we engaged with the GPT-3.5 interface via https://chat.openai.com/. At the time of this writing, the ChatGPT interface offered two GPT options for users to select: GPT-4, the paid tier I subscribed to, and the free GPT-3.5 version students used in class. I provided students with a survey before and after they engaged with ChatGPT on their own computers through small group, semi-structured activities over two class periods. In writing up the analysis, I've integrated my discussion with aggregate/paraphrased student responses. In terms of demographics, students were traditional undergraduates, primarily from small town, rural, and coastal Eastern North Carolina, with a few students from more urban Southeast or Northeast states and one international student from Mexico. Most students identified as female, although there were male and non-binary students. Most students identified as white, with about one-third of the classroom identifying as Black, Latinx, or by one or more non-white ethnicities.

In the survey, I asked students to assess themselves as writers, describe characteristics of ‘good writing,’ and discuss their understanding and impressions of ChatGPT. Just over half of students considered themselves good writers (57.1%), with slightly more students claiming to enjoy writing (61.9%). When asked to use three words to describe their writing to someone else, many students used descriptors related to simplicity, efficiency, safety, logic, organization, and clarity. Answers to similar questions related to writing strengths echoed the ‘professionalism’ Selfe and Selfe observed as white, middle-class values and demonstrated that the majority of these college writers use those values to identify standards of good writing. Among the more individualized descriptors, students reported “creative,” “enlightening,” “appealing,” “insightful,” “well-spoken,” “feminine,” “dark,” “expansive,” “bright,” “fun,” and “unique.” These descriptors were more likely to appear uniquely in individual responses, while the professionalism values were more prevalent throughout larger groups of student responses to several questions about ‘good’ writing. When asked to identify their greatest strengths as writers from multiple choice options I provided, students chose creativity (13), organization (13), and originality (12). On the whole, the majority believed using correct grammar/spelling (11), incorporating and citing research (11), and doing a final proofread (10) were their greatest weaknesses. When asked about aspects of their writing process they engaged in for school or in the workplace, most students said they brainstormed (12), performed preliminary research (15), drafted (13), and sought out feedback from a professor, tutor, or friend/family member (16).
Perhaps especially relevant to students' vulnerability to GAI’s dangers, far fewer students claimed to engage in revising (7) and incorporating and citing research (4) as part of their usual writing process for school or work.

In the class I surveyed, 81% of students stated they were aware of ChatGPT, and 19% stated they’d never heard of it. When asked to associate three words with ChatGPT, responses expressed polarized attitudes: “helpful” was used as many times as “cheating,” with terms like “robotic,” “answers,” “easy,” “awesome,” and “evil” swirling together in a complicated morass of feelings and impressions. When asked to explain what ChatGPT does, most students stated some version of “it gives you answers,” “it knows everything,” “it helps you with assignments,” or “it writes your paper for you.” Four students stated openly that they didn’t know what it did. When asked to describe how it generated responses, students either indicated they didn’t know or that it worked “through AI,” with a few students mentioning more granular attributes like “software,” “metadata and analysis,” “uses an algorithm,” and “has a lot of data it has learned from the internet.” When asked to identify the feelings they have about ChatGPT, students most frequently chose “excited” (10), “intimidated” (9), “distrustful” (9), and “optimistic” (8). Over half of the students surveyed reported they’d used ChatGPT and been satisfied with the results. When asked what had satisfied them, students commented on ChatGPT’s ability to provide useful outlines, to help them learn things like cooking and shaving, and to answer all the questions they might have about school, work, and life.

Summarizing a small pool of individual student impressions has its limits, but it was clear that students were most broadly satisfied by ChatGPT's speed and efficiency in providing personalized responses to them. I witnessed the dazzled expressions of a few ChatGPT newbies the first time they received nearly instantaneous, authoritative responses typed magically on their screens without their own fingers hitting any keys. When I asked students who hadn't used ChatGPT their primary reason why, most chose “I'm concerned about the consequences” (4), with some students not knowing how to try it (2) or being concerned specifically about teachers/school frowning upon it (2). Only 23.8% (5 of 21) of students admitted to using ChatGPT to write something they used for school or work, with those students split on feeling positively or negatively about the experience. Just over half of students said they’d like professors to teach them how to use ChatGPT (52.4%), with 28.6% opposed to ChatGPT instruction from their teachers and the remaining 19% ambivalent or conflicted about classroom use and instruction. When asked if ChatGPT could write better than most people, 57% of students thought it couldn’t, but 52.4% thought it could write better than they could. When asked about ethical concerns, 76% of students said they were concerned about cheating or misrepresenting intellectual property. A few students provided darker responses about AI/tech being scary and/or dangerous to humanity. A few responses evoked concern over risks to their learning and intellectual/creative growth. When asked if, assuming ChatGPT had no negative consequences, they would prefer to use it or a similar technology to do most or all of their writing, 61.9% said no, while 38.1% answered yes. Students who indicated they wouldn’t use ChatGPT regularly explained that they valued their own writing pleasure, originality, creativity, learning, and humanity over ChatGPT’s functions.
The smaller group who said they would use ChatGPT for most writing tasks remarked on its efficiency, ease of use, and clarity, or on their own perceived flaws, like “laziness” or lack of understanding, as well as the societal pressure to seize an innovative technology for personal gain. Finally and pertinently, most students believed that in the next five years, knowing how to work with generative AI like ChatGPT would be necessary for success in college (76.2%) and in most professional environments that require writing (81%). Although the sample for this survey is small, it gave me a valuable glimpse into student attitudes, beliefs, and values around writing amid a technological paradigm shift in writing practices and writing studies research. This glimpse made students’ vulnerability to this particular technological instantiation acutely visible to me. ChatGPT seems to have arrived just for them—it answers all their questions without judgment and provides “perfect answers” that may help them forge successful personal and professional paths forward in their lives. Perhaps most sobering, no matter their personal view on tech like ChatGPT, they believed they would ultimately be compelled to use it in school or work contexts.

Prior to surveying students, we refrained from discussing ChatGPT, and I set aside two class periods as ChatGPT workshops. I explained the research scenario for this book chapter, telling them that I’d like their participation and feedback, but that feedback would be anonymous, and they could opt out of any aspect at any time. The first survey elicited responses from 21 of 24 students. According to my observations, most, if not all, students participated in the in-class activities around ChatGPT. Due to time constraints in our semester, I didn’t administer the second survey in class, and only 8 of 24 students provided responses. If given another opportunity, I’d allow another week to complete the sequence of survey, activities, and discussion to provide more in-class time for heuristic development and analysis.

Heuristic Discussion

As part of our in-class activities, I asked students to watch OpenAI’s March 15, 2023, promotional video. This 3-minute video is a masterclass in rhetorical maneuvers (Figure 4). Towards the end, a woman says, “We think that GPT-4 will be the world's first experience with a highly capable and advanced AI system. So, we really care about this model being useful to everyone, not just the early adopters or people very close to technology.”

Still from OpenAI video promoting ChatGPT 4 showing a bar chart indicating that the new version has a much higher word limit in its output than ChatGPT 3.5
Figure 4. Still from OpenAI video promoting ChatGPT 4

In this moment, OpenAI explicitly hails the public—all of us are the ideal user for GPT—no matter our technical knowledge or skills. Further attempting to establish an ethic of care about the technology’s development, its use, and its effect on our lives, the woman says, “So it is really important to us that as many people as possible participate so that we can learn more about how it can be helpful to everyone.” With this, OpenAI appeals to our commonplace cultural values of being helpful and contributing to the public good.

When students were asked about their impressions of the promotional video, responses were mostly positive. Students felt as though the video provided more information about ChatGPT’s capabilities, its improvements, and, perhaps most notably, its ability to learn from them while also serving as their personal tutor. Regarding interface design, most students indicated they liked its simplicity and ease of use, the speed of responses, the absence of additional links to click for information, the form of interaction (they especially liked that the interface feels and looks like texting someone), the visual aspects of the streaming response, and the chat archive option. When asked what they disliked about the interface, three students said there wasn’t anything about the interface they didn’t like, while others stated it was too basic/bland/simple (4) or that its responses took too long to generate (1). When asked what they would change about GPT’s interface design or functionality to better reflect their preferences and desires, most of the responding students (5) stated they would not change any aspects of its interface design or functionality. The rest responded with ideas: changes to the re-generate feature and memory for better storytelling (1), an option for more up-to-date information (1), a function to determine the trustworthiness of information (1), and, lastly, “a design that will appeal to the outside audience” (1). This last response is one that I’d like to linger on before concluding.

I still wonder what this student meant by “outside audience.” As diverse students participated in these classroom activities, I wondered how each one oriented themselves to the bodies, identities, and values presented in OpenAI’s initial ChatGPT promo video. This individual student’s response—a design that will appeal to the outside audience—evoked, perhaps, a more nuanced perspective on ChatGPT’s interface design than those of classmates who easily accepted the interface design without critique or awareness that it could or should look or function any differently. Perspectives like this one could be the basis for new dynamic heuristic pedagogies and social interactions that nurture scalable critical technological awareness, or what we might even call critical AI literacies: the competence, and perhaps even the agency, to resist the norming potential of technologies we access through interfaces embedded with ‘big culture’ ideological values. In our week-long class activity, we didn’t get nearly close enough to solidifying critical AI literacies. Given these student responses, however, it was clear to me how vulnerable students are to GAI’s ease of use and efficient content generation, regardless of perceived ethical issues or potential harms.

Developing critical AI literacies in our classrooms may counteract some of the cultural power and real-life consequences of GAI, a stake especially important to open-access peer-reviewed research like this collection as it also becomes untraceable data harnessed by LLMs and transformed into GAI's responses. If students are taught to deepen their interface analysis skills and consider dominant narratives as powerful rhetoric rather than simple truths, they develop the critical distance necessary to question GPT's results—a responsibility even OpenAI says it wants from users. We can break the mesmerizing spell of GAI's brilliance, in part, by focusing on its materiality and the rhetoric circulating in its ambient wake. The materiality of its interface is the seat of its power, and it is through attention to the interface's materiality and circulation that we can evaluate GPT's values, not least of which is its desire to situate us as passive consumers and producers in its data economies. GPT’s interface tightly controls user agency by acting only as an access point to a "generic" response without much, if any, user participation or transparency in how or what originating materials shaped that personalized response. These and other values allow GPT to remake our unique productions of culture and information into de-contextualized data feeding its own natural language responses. This remaking of the concrete and particular into the abstract and general creates an illusion of a new piece of "writing" that can be used by individuals, accruing massive subscription and licensing profits for OpenAI and its corporate partners. ChatGPT's interface and circulation have been designed with overt, persistent affective flirtations—its pleas for friendship, its hedges, the circulatory myths and romance of AI’s innovative power. These affective flirtations are attempts to disguise its coercive directives, and its chat-based interface design limits user agency while norming us to values and practices that feed data economies and profit tech companies.