Interfacing ChatGPT
Desiree Dighton
ChatGPT Isn't a Writing Technology
This isn't the end. Generative AI interfaces like ChatGPT's could be redesigned to incorporate more user-based options and greater transparency in their data sourcing and processes. It's not difficult to imagine users being able to check a box commanding ChatGPT to derive its responses only from certain kinds of sources, such as peer-reviewed scholarship or art museum websites across the world, much as library databases and other data storage and retrieval systems currently operate, but with enhanced, larger-scale pattern-finding search capabilities. Indeed, The Atlantic reported that OpenAI is in talks with publishers, itself among them, to offer product designs with more user options and transparency about source material (Wong, 2024). Dimakis, co-director of the National Science Foundation's Institute for Foundations of Machine Learning, told The Atlantic that “it is ‘certainly possible’ that reliable responses with citations could be engineered ‘soon’” (para. 15). By the time this chapter is published, these and other options may already be available in later versions of ChatGPT. Or, in the absence of regulatory and compliance pressure, generative AI (GAI) may be allowed to rampantly ingest information, art, science, propaganda, social media chatter, literature, crime reports, ancient texts, academic research—the entire history of human culture and information, at least as it exists on the internet—and dissolve it into personalized generic versions untraceable to originating sources and human labor. Are we seizing the moments we have to develop critical literacies durable enough to withstand what our GAI futures might hold?
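To make the source-filtering interface imagined above more concrete, consider a minimal, purely hypothetical sketch, modeled loosely on library-database search facets. Neither ChatGPT nor OpenAI's API currently exposes parameters like these; every name here (SourceFilter, build_query, restrict_sources_to) is invented for illustration only.

```python
# Hypothetical sketch only: these parameters and names are invented for
# illustration and do not exist in ChatGPT or any real OpenAI API.
from dataclasses import dataclass, field

@dataclass
class SourceFilter:
    """User-checked constraints on where a response may draw from."""
    allowed_types: list[str] = field(default_factory=list)
    require_citations: bool = True  # link every claim to a retrievable source

def build_query(prompt: str, source_filter: SourceFilter) -> dict:
    """Bundle a prompt with retrieval constraints, library-catalog style."""
    return {
        "prompt": prompt,
        "restrict_sources_to": source_filter.allowed_types,
        "cite_sources": source_filter.require_citations,
    }

# A user restricting responses to scholarly and museum sources:
query = build_query(
    "Summarize the translation history of the Bhagavad Gita.",
    SourceFilter(allowed_types=["peer-reviewed", "museum-collection"]),
)
print(query)
```

The point of the sketch is rhetorical rather than engineering: the filtering logic already familiar from library catalogs could, in principle, sit in front of a generative model, returning some measure of user control and source transparency.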
When I asked ChatGPT to provide the full text of, or a passage from, an ancient, non-copyrighted text, the Bhagavad Gita, it returned a response that was inaccurate in both content and link. When I corrected it and asked again, ChatGPT confessed that the link didn’t match its response text because it had provided a “generic rendition.” A “generic rendition,” according to ChatGPT, “refers to a representation or interpretation of a text that doesn't adhere to a specific published translation or version but instead provides an overview or essence based on a variety of sources” (ChatGPT, October 15, 2023). ChatGPT’s use of “generic” loses its rootedness in “genre” just as it attempts to shed any authorial and legal traces of the source material. In ChatGPT’s interface, will all knowledge and information become “generic”—having no distinctive qualities or applications?
GAI chat-based interfaces disguise their infrastructure and often earn our trust and use through affective flirtations and maneuvers. Like a friend, GAI wants us to trust it with our vulnerabilities and to tolerate its shortcomings and flaws. Its powerful friendship comes with heavy risks it encourages us to overlook. In “Data Worlds,” the introduction to Critical AI, Katherine Bode and Lauren M.E. Goodlad (2023) wrote that “the power of AI” provides an ideal “marketing hook” and a distraction from corporate concentration of power and resources—including data. The focus on “AI” thus encourages an unwitting public to allow a “handful of tech firms” to define research agendas and “enclose knowledge” about data-driven systems “behind corporate secrecy” (para. 8). Once GAI’s technologies are hidden behind a plethora of institutional, corporate, nonprofit, and social media products and interfaces, will we be even less able to see their materiality and the consequences of interacting with them? If we are to pull our classrooms through this paradigm shift and help our students continue to experience the pleasures and challenges of writing and learning, it will not be because we have learned to prompt GAI tools like ChatGPT more effectively. It will be because we have normed our classrooms and our technologies to the transparent and ethical values of writing studies.