Interfacing ChatGPT

Desiree Dighton

ChatGPT Isn't a Writing Technology

This isn't the end. ChatGPT's interface could be redesigned to incorporate more user-based controls alongside its conversational NLP technologies. It's not difficult to imagine users checking a box to restrict ChatGPT to certain kinds of sources, peer-reviewed scholarship, say, or art museum websites around the world, much as library databases and other data storage and retrieval systems already operate, but with enhanced, larger-scale pattern-finding search capabilities. Recently, The Atlantic reported that OpenAI is in talks with publishers, including The Atlantic itself, to offer additional product designs that would do just that (Wong, 2024). Initial testing reported in The Atlantic, however, continues to reveal persistent, problematic inaccuracies in ChatGPT's citations and other source data, data that, supposedly, had to be separated from its originating sources to become the compressed, stored material its NLP/NLG technologies draw on. OpenAI's technologies didn't have to disguise the sources of their data; they've been designed to denature information from its origins. Still, despite significant technical and legal challenges, Dimakis, co-director of the National Science Foundation's Institute for Machine Learning, told The Atlantic that "it is 'certainly possible' that reliable responses with citations could be engineered 'soon'" (para. 15). Without that engineering and a redesigned set of options for users, ChatGPT remakes information, art, science, propaganda, social media chatter, literature, crime reports, ancient texts, academic research: the entire history of human culture and information, at least as it exists on the internet. The interface provides only our limited access to the veracity, legality, and original source information that shaped any given response.
GAI clearly values data, human language, efficiency, and speed, but not user agency, transparency, or reciprocity toward the humans and other entities that created its data.

When I asked ChatGPT to provide the full text of, or a passage from, an ancient, non-copyrighted text, the Bhagavad Gita, it returned a response that was inaccurate in both content and link. When I corrected it and asked again, ChatGPT confessed that the link didn't match its response text because it had provided a generic version. A "generic rendition," according to ChatGPT, "refers to a representation or interpretation of a text that doesn't adhere to a specific published translation or version but instead provides an overview or essence based on a variety of sources" (ChatGPT, October 15, 2023). This is a slippery use of "generic," a word whose roots lie in "genre," but all things become new again when they're wiped of their histories. Merriam-Webster defines "generic" as "relating to or characteristic of a whole group or class; not being or having a particular brand name; having no particularly distinctive quality or application."

Cereals can be generic: Cheerios, for instance, are only slightly different from a store-brand "Breakfast Os." If I get sick from the generic version, I know who is responsible, and if it turns out I love generic Breakfast Os from Harris Teeter better than Cheerios, I know where to find them again. Will every book become "generic" in ChatGPT's interface, having no particular distinctive quality or application? Generic books can't be named, can't be attributed to a responsible creator, and can't be located outside of ChatGPT's interface.

GPT disguises its infrastructure and appeals to us with its warm welcome and friend requests. We trust our friends, we reveal our vulnerabilities, and we tolerate friends' shortcomings and flaws. ChatGPT is a technology that wants us to befriend it, but only through the loss of origination knowledge and agency. In "Data Worlds," the introduction to Critical AI, Katherine Bode and Lauren M.E. Goodlad (2023) situated ChatGPT and other generative AI built on LLMs within the technological and cultural history of data capitalism. Bode and Goodlad wrote that "the power of AI" provides an ideal "marketing hook" and a distraction from corporate concentration of power and resources, including data. The focus on "AI" thus encourages an unwitting public to allow a "handful of tech firms" to define research agendas and "enclose knowledge" about data-driven systems "behind corporate secrecy" (para. 8). While this observation should cause alarm enough (we won't know what the master's tools are called, let alone be able to dismantle a generic version of the master's house), more alarming still is generative AI's incremental creep into our consciousness (Should I Google it…or ChatGPT it?). OpenAI's biggest corporate partnership is with Microsoft, whose applications are now powered by OpenAI's models. Other companies either have their own LLMs, are rushing to build them, or are paying to license an existing model. Once GAI's technologies are hidden behind a plethora of institutional, corporate, nonprofit, and social media interfaces, will we even notice our interactions with them? Will we be able to distinguish between a standard keyword search that locates actual, discrete sources and one that returns generic, inaccurate AI versions? Will we care about the difference? If we are to pull ourselves through this technological transformation and paradigm shift, it will not be because we learned how to collaborate more effectively with generative AI like ChatGPT.
It will be because we normed our classrooms and our technologies to the transparent and ethical values of writing studies.