Rhetorical Information Theory

Patrick Love

Rhetorical DIKW Pedagogy with GenAI

The language of (Rhetorical) DIKW translates the core value proposition of GenAI/ChatGPT (used interchangeably hereafter) writing: ChatGPT remixes data (stored writing) to produce information (new/different writing) on-demand. Without lived experience of the world, ChatGPT needs a user/human to 'make' knowledge with, someone who provides prompts and socially audits the output, meaning ChatGPT cannot autonomously stray upward past information or downward past data. From the viewpoint of the user, ChatGPT may regularly produce novel information, but since ChatGPT only deals with data and information, that information is more likely (like a used car) new-to-you. Overlaid on the DIKW pyramid, the user decides whether what ChatGPT produces is knowledge (a pattern aligned with their own experience that could facilitate action/wisdom) before it can 'be' knowledge. The fluidity with which humans move through DIKW levels is a testament to how ingrained they are in our nature, and makes it tempting to ascribe them to non-human entities (i.e. to anthropomorphize) (Leaver & Srdarov, 2023). The (sometimes fleeting) moment in which a user has the opportunity to decide whether to agree with ChatGPT, and why, is the pedagogical moment the remainder of this chapter focuses on. The tradition of institutional knowledge-making that DIKW translates for machines models this decision-to-agree interaction (i.e. information-to-knowledge) as happening between people: the Enlightenment found use for rhetoric in communication conveying arguments about the world to convince people to adopt them, for example (Bacon, 1605). With ChatGPT implicitly positioned as the user's multifaceted partner (librarian, tutor, secretary, copyeditor), ChatGPT prospectively offers the possibility to agree with someone on-demand and at-scale, particularly when human labor in these areas is underfunded or eliminated.

An overarching way to view ChatGPT’s impact, informed by Rhetorical DIKW, is how it manipulates time for the user. Marche (2022) implies that ChatGPT solves the problem of “write a paper” for students by having incomprehensible amounts of data at its disposal to remix; hence this chapter argues that Marche positions writing as a past- and present-oriented affair, converting data into information. Data abstracts the world to preserve things outside the moment of collection, meaning data represents its spatiotemporal moment. It may seem pedantic to claim all data is of the past, but there is a practicality to it that cannot be ignored, particularly when collected data is modeled to predict the future based on past performance.2 Therefore, when ChatGPT takes a user prompt, picks data, and remixes it into an informative response, ChatGPT fundamentally applies abstractions of the past to a present concern to help someone with their future. Granted, ChatGPT will comply with requests to predict the future, but it, too, uses past data (training data presumed relevant to the question) to predict future performance. No matter what kind of information ChatGPT makes for the user, both ChatGPT and the user will be accepting that the past contains an answer, as the myth of transience dictates. ChatGPT, as an information technology, signals a truth about knowledge: all new knowledge is personal until it's not, which is to say that knowledge-making is a process of social acceptance.
This makes ChatGPT an interesting and potentially highly valuable tool for accelerating the social acceptance of information as knowledge, because it produces information (again) on-demand and at-scale as an uncanny conversation partner with a technocratic, expedient ethos (Katz, 1992).

The ultimate issue with this view of time, the world, and knowledge is that it tends to presume 1) that history is (or can be) complete and commonly understood and 2) that the past is "right," and, along with it, the history of Eurocentric imperialism and colonialism, inequality, exploitation, and ecological destruction that produced it. There is no running from the past, but we must consider whether remixing it as a de facto starting point will help us break from those traditions and, in fact, overcome them, with or without GenAI. As argued above, attention to the future we wish to inhabit better promotes our role in producing it, contra the myth of transience. ChatGPT's meditation on past and present cannot promote our role alone; we must assert it. Hence, "skilled" use of GenAI (whether in school, work, or other pursuits) will likely be influenced by command of one's own lived experience, along with data, information, and knowledge that informs one's understanding of ecological conditions, to adequately interrogate and mold what ChatGPT produces. As Star and Strauss note, new technology displaces work rather than reducing it, and the user takes on new tasks (1999, p. 20).

In that spirit, this chapter ends with an analysis of using ChatGPT in drafting and research, as these (along with revision) are use cases students will likely try and workplaces will likely expect: drafting shorter work or parts of larger work, assembling information for easier consumption, and revising existing writing for readability or for different audiences. The chapter will use DIKW language frequently to further illustrate the metalanguage in action. These use cases are, barring regulation or labor agreements like the Writers Guild of America's agreement with movie producers, some of the likely new work we will do (Star & Strauss, 1999). Ultimately, because ChatGPT displaces liability for itself onto users, users at all levels bear more responsibility for the writing they produce with ChatGPT: they are ChatGPT's managers, not its students, regardless of the feeling of wonder and discovery the product (and its media advocates) hopes to engender.


1 O'Neil's work on data-driven policing perpetuating asymmetrical policing of non-white and low-socioeconomic neighborhoods, and Noble's work on search engines perpetuating discrimination by shaping available information, demonstrate this (O'Neil, 2016; Noble, 2018).