AI-generated triptych: On the left, a robot made of wires with a female human face; in the center, an abstract representation of a dark-colored hand stretched across a beige background; on the right, a blurry infographic-style image with a beige background and dark brown text that features a green image of a globe surrounded by text descriptions connected to it by lines.

What We Already Know: Generative AI and the History of Writing With and Through Digital Technologies

Anthony T. Atkins, University of North Carolina Wilmington
Colleen A. Reilly, University of North Carolina Wilmington

Introduction

As Johnson (2023) recently reminded writing faculty, computers and writing/digital rhetoric scholarship, pedagogy, and experience have prepared us for teaching with developing technologies—AI presents a new challenge but not one that should cause us panic. After all, as Johnson and Agbozo (2022) emphasize, faculty "have been using technology to teach, critique, and remediate the writing process for more than one hundred years" (p. 158). We have a rich scholarly tradition that demonstrates basic principles, including that all technologies are political and rhetorical and that "policing is not pedagogy" (Johnson, 2023, p. 172). In 1999, at another pivotal moment in the history of teaching with web and digital composing technologies, Selfe admonished all writing faculty to pay attention to and interrogate the use and construction of technologies because failing to do so allows the social inequities they exacerbate to go unchallenged. Many scholars embraced that perspective and examined writing and communication technologies in terms of their construction, labor implications, and ethical ramifications. Based on this extensive scholarly tradition, our chapter highlights some of the strategies scholars have already developed to work with digital technologies, however new and powerful, to enhance our writing pedagogies and prepare our students for all contexts. We argue that our field includes the scholars most qualified to address the challenges of AI because they have already worked out solutions to the conundrums presented by previously developed digital platforms and tools.

To this end, our chapter focuses on three central themes in the literature that we found most generative when grappling with AI: the challenges posed to our conceptions of writing and authorship; the access and accessibility implications of information and communication technologies; and the degree to which technologies reveal and mask their mediation of content. Our chapter draws upon only a small portion of the wealth of scholarship in computers and writing and digital rhetoric; we selected texts that most resonated with our focus, knowing that full coverage was not feasible. Additionally, our chapter addresses only text-generating AI, specifically chatbots, while acknowledging that many other types exist that create visual, audio, and multimedia compositions. Our discussion is enhanced by short videos integrated into the chapter in which four scholars respond to this question: What are the central lessons that our field of computers and writing learned in working with past digital technologies that can best prepare us to assist our faculty and students when composing with generative AI? As these videos illustrate, our field has a wealth of relevant knowledge to draw upon, and this knowledge continues to grow and inform how we as scholars and faculty navigate researching, writing, and teaching with AI.

Laura Dumin, Professor and Director of Technical Writing, University of Central Oklahoma, focuses on how we might address the concerns of faculty and students around AI and emphasize the formative and process-oriented aspects of writing instruction to provide a supportive context in which faculty and students can learn to work with AI.

Video Transcript

Laura: Hey there. I've been asked to respond to the question "What are the central lessons that our field of computers and writing learned in working with past digital technologies that can best prepare us to assist our faculty and students when composing with AI?"
Laura: As I think about what we've learned in the past, we've found that with new technologies, sometimes people are really excited about them. We also have people who are in the middle and willing to take them on once someone else shows them something. And then we've got people who wait until the last minute because they're not sure of it. They feel like things are going to change so much for them. They just don't know how to use it, that sort of thing.
Laura: I think that these lessons of adaptation are important because it helps us to understand where people's fear might be coming from or their reluctance or their discomfort. And addressing those concerns is one of the first things that we need to do in order to help both faculty and students feel comfortable with this new technology that AI is. It has the opportunity and the ability to really revolutionize some of the things that we're doing with education.
Laura: So the idea of moving from a product to the process of getting to the product: how do students figure out what their final thing, whatever that thing is going to be, is going to look like? How do faculty re-envision what their assignments might look like in order to get students to think through that process more clearly?
Laura: We can also think about what things like transformative learning help us to understand about the learning process. So what becomes important to students in the journey of learning and helping students to understand why the journey of learning is important, why knowledge still matters in the age of AI, why learning still matters in the age of AI. And I think as we put all of these things together, then we can start to see a pathway forward where we rethink what we've been doing in the classroom. We help students to understand this rethinking.
Laura: Perhaps we move a little bit more away from grades as absolutes (yes or no, you've gotten this right) and start to think more about: okay, you didn't get it right this time, but how can we come back to this again? Or the AI helped you here; let's make sure that you still understand the concepts, because you're going to need them in a class further down the road or in your work.
Laura: So yeah, things are going to change a little bit. And I think it's exciting for a lot of people. It's also still scary for a lot of people. And as we realize the emotional toll that these changes can take on people, then we have the opportunity to work with faculty and students to help everybody move into this new technology with maybe a stronger degree of comfort than they might have had if they just had to do it on their own.

Challenges to Writing and Authorship

During the early 21st century, scholars in computers and writing and digital composition theorized and interrogated the relationships between technologically mediated forms of composition and traditional concepts of writing, texts, and authorship. The research in this vein highlights the challenges to traditional standards of composition and authorship posed by a range of digital content and development processes, including coding, writing markup and metadata, and authoring in multimedia. Teachers and scholars examined and proposed strategies to help students reconceive their roles as writers and authors when working with a myriad of digital compositions and learn to use the technologies necessary to produce them. For example, authors of digital compositions, such as hypertexts, cede meaning to users who choose their own pathways through the linked content and act as co-authors, emphasizing the inability of authors to control the integrity of their texts and to make sustained, linearly structured arguments (Cripps, 2004; Hocks, 2003; Purdy & Walker, 2012). This past scholarship, grappling with a range of challenges related to new composing practices and contexts, proves relevant for composing and teaching with generative AI.

In light of the recent escalation in the development of writing and communication technologies, it is easy to forget that at the turn of the last century, arguing for the importance of non-discursive texts and other types of compositions as literate practices deserving equal emphasis in writing curricula was a disruptive and even radical move. For example, in a nonlinear print article, Wysocki (2001) explores how visuals assume primacy in meaning-making in digital environments, forcing a reconsideration of what it meant to author content at the start of the 21st century (see also Cripps, 2004; Hocks, 2003). Wysocki asserts that words are “always visual elements,” but the arguments made in hypertexts “cannot even be found primarily in ‘words’” (p. 232). In a similarly disruptive and multimodal text that is at once an article and a representation of an oral address originally supported by synchronized slides (84 in total), Yancey (2004) questions the nature of writing and the teaching of composition in response to the proliferation of genres spurred by new information and communication technologies. Her remarks situate the communicative revolution of the early 21st century historically in relation to previous moments of change in literacies, such as the serials and newspapers of the 19th century and the evolution of writing instruction in higher education during the 20th century. Yancey (2004) calls for a new composition focused on the circulation of texts, genres, media, and content production across domains. This revisioned composition necessitates interactions between media, an integration of the canons of rhetoric including assembly, and processes of mediation and remediation facilitated by and represented in digital technologies that demand reconsideration of established rhetorical strategies.

Christopher Andrews, Associate Professor, Texas A&M University, Corpus Christi, also highlights the arguments instructors and scholars had in previous eras around bringing computers into classrooms. Andrews emphasizes the importance of critical pragmatism as a response to working with evolving digital technologies.

Video Transcript

Christopher: What are the central questions that our field of computers and writing learned in working with past digital technologies that can help us think about, talk about, teach about, composing with generative AI?
Christopher: I'll start with a quote.
Christopher: "The latest development and writing technology promises or threatens to change literacy practices for better or worse, depending on your point of view." That's the first line from Dennis Barron's Pencils to Pixels chapter in Passions, Pedagogies, and 21st Century Technologies (ed. Hawisher & Selfe, 1999), which is almost 25 years old as I'm sitting here today.
Christopher: You know, 20, 25 years ago, we were fighting in composition and computers and writing, like, tooth and nail about the idea of computers in the classroom and how awful that was going to be. And what do we do when writing isn't a thing that happens on the page and when people aren't using pen and paper anymore? Does it even count? And all those other conversations and shouting matches on listservs that we were having, like I said, 20 years ago. So I think that's something that, you know, continues today.
Christopher: You know, every time there's a new writing technology, a new literacy technology, it restructures how we approach teaching and how we approach what our students are going to do with writing and what it means to teach writing and all that kind of stuff.
Christopher: It raises all sorts of specters of digital fraud and makes us worry about authentication, and, you know, we're always finding ourselves kind of casting about and educating students and each other in a world that's changing and unfamiliar. And I think that the lesson that the field of computers and writing has learned is that, you know, we have to very carefully resist instrumental and determinist hyper-pragmatism.
Christopher: Right? The idea that, oh this is just a tool, it's no big deal. It's just a flash in the pan and the next thing will be next.
Christopher: Or, you know, this kind of determinist hyper-pragmatism that "oh my God, the tools are going to take over."
Christopher: Both of these things are a little bit true and a little bit false.
Christopher: But the reminder that I see throughout computers and writing is that we have to resist both of those impulses and live somewhere in the middle, as, you know, critical and pragmatic reflective adopters, right? Who are, you know, excited to try new things, but also critical about what those things are doing and how they work.
Christopher: And that's the big throughline that I see in computers and writing scholarship and as a community, as a field is that critical pragmatic focus on what we can do and maybe what we shouldn't do.

Other scholars focused on expanding literacy standards and instruction to include producing the non-discursive content that underpins and facilitates the production of texts in digital environments. For instance, Sorapure (2006) highlights four categories of composing practices in Flash, a commercial product, that contribute to meaning making: the text and the images displayed to readers/users, and the code and the comments that underpin the structure of the composition, make the content possible, and enable social interactions between developers. When Sorapure (2006) published her article, these categories posed a significant challenge to what it meant to be a text, a writer, and a user, blurring established boundaries by shifting those roles depending on the specific use and moment of composition and consumption. Similarly, Cummings (2006) emphasized the importance of viewing coding (specifically markup languages like XML) as an act of writing, albeit one requiring expanded audience considerations, as the “coder’s first audience is a machine” (p. 434); unless the machine can comprehend the code as written, the content will not be visible and usable by human audiences. As Eyman and Ball (2014) also detail, digital compositions require a greater range of literacies, including proficiencies in the technical infrastructure supporting the development and dissemination of multilayered digital compositions. Designers of digital compositions needed to integrate optimal coding, metadata, file formats, and other technical affordances to compose usable, accessible, and visually appealing webtexts and other digital designs (Eyman & Ball, 2014). To improve their products, writers of code, like writers of text, also “refine ideas” (Cummings, 2006, p. 433) to achieve their desired output. Interestingly, as Lancaster (2023) highlights, chatbots like ChatGPT also refine ideas by remembering a certain amount of content from a conversation or interaction with a user and building upon it to provide more targeted responses.
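
Cummings' point that the "coder's first audience is a machine" can be made concrete in a few lines of code. In the small sketch below (our illustration, not Cummings'), the same sentence either reaches or fails to reach a human reader depending entirely on whether the XML parser, the machine audience, accepts the markup:

```python
# A minimal sketch: the machine "reads" the markup before any human does.
import xml.etree.ElementTree as ET

well_formed = "<essay><title>Drafts</title><body>Revised text.</body></essay>"
malformed = "<essay><title>Drafts<body>Revised text.</body></essay>"  # unclosed <title>

for label, markup in [("well-formed", well_formed), ("malformed", malformed)]:
    try:
        root = ET.fromstring(markup)  # the machine audience passes judgment first
        print(f"{label}: parsed; human-facing title = {root.findtext('title')}")
    except ET.ParseError as err:
        print(f"{label}: never reaches human readers ({err})")
```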

The challenges to traditional concepts of writing and authorship that scholars have been grappling with over the last two decades (and before) have been magnified exponentially by the introduction of generative AIs like ChatGPT. In his discussion of new and challenging forms of visual compositions, Kress (2005) argued that writing would remain a powerful force because “elites will continue to use writing as their preferred mode” (p. 18); however, the introduction of AIs like ChatGPT destabilizes that mode of communication, making it not the province of human elites but of the machine—open and accessible to all who can use it effectively (Byrd, 2023). When generating content with an AI, writers direct the output in part by authoring appropriate prompts directing the AI to respond according to specified parameters. While collaborating with the AI, writers are also collaborating at a remove with the creators and writers of the content on which the AI was trained. As Lancaster (2023) explains, an AI language model such as ChatGPT responds “in a predetermined way, based on its trained model, the input data, earlier parts of the conversation and a random number, known as a seed” (p. 3). Creating the best strategic input for the AI in the form of successful prompt engineering is essential to facilitating the most effective and relevant output from the AI, recalling Cummings’ (2006) identification of the machine as the first audience for code-based compositions. As Cummings (2006) noted in relation to writing code, “The act of writing for the machine and writing for a human audience develop similar skills, and one experience can be harnessed to inform the other” (p. 442). However, in this case, the AI also participates as an author, a conversant, and an active collaborator in producing texts with human actors.
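
Lancaster's description can be sketched in miniature. The toy function below is our hypothetical illustration, not how any actual chatbot is implemented: it shows how output can depend jointly on a trained repertoire, the prompt, earlier turns of the conversation, and a seed, so that identical inputs with an identical seed reproduce identical output:

```python
# A toy sketch of seeded, conversation-dependent generation (hypothetical).
import random

def toy_reply(conversation: list[str], prompt: str, seed: int) -> str:
    openers = ["Certainly:", "Here is one view:", "In short:"]  # stand-in for a trained model
    rng = random.Random(seed)                # the "random number, known as a seed"
    opener = rng.choice(openers)             # seeded, hence reproducible
    context = " | ".join(conversation[-2:])  # earlier parts of the conversation
    return f"{opener} responding to '{prompt}' in light of '{context}'"

history = ["What is rhetoric?", "Rhetoric is the study of persuasion."]
print(toy_reply(history, "Give an example.", seed=42))
print(toy_reply(history, "Give an example.", seed=42))  # identical: same seed
print(toy_reply(history, "Give an example.", seed=7))   # may differ: new seed
```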

The participation of generative AI in the composing process further disrupts established norms related to writing and authorship by prompting a reassessment of the standards by which any composition is evaluated as “original writing.” This aspect of working with generative AI like ChatGPT has, as of this moment in 2023, caused the most widespread consternation in scholarly communities and the public. However, examining past scholarship in digital composition reveals that questioning notions of individual authorship and plagiarism preceded the introduction of generative AI. As noted above, proficiency in digital composition requires collaborating and coauthoring with digital tools to produce multilayered content, some of which is only readable by and may be developed through machines. The codes, metadata, and other digital constructs may be derived through using models and templates produced by human and nonhuman actors, which, as Johnson-Eilola and Selber (2007) argue, forces us as writing teachers to reconsider our standards for identifying and evaluating authorship, originality, and plagiarism. Johnson-Eilola and Selber (2007) highlighted the importance of assemblage in digital composition—using existing codes, templates, digital objects, and even texts and reconfiguring and repurposing them to solve specific problems or address contextual needs (see a more recent discussion of assemblages in Yancey & McElroy, 2017).

This reconsideration involves revisioning authorship, originality, and even creativity (Johnson-Eilola & Selber, 2007, p. 400). Authoring with generative AI extends and magnifies these redefinitions because AI technologies now have the potential to produce content creatively; as such, these technologies are no longer just tools that aid human users in achieving their ends. A recent publication by Johnson-Eilola and Selber (2023) emphasizes this point—that agency is no longer only human—people have to think like a technology to be successful users in “complex sociotechnical systems” (p. 83). They argue for an object-oriented ontology (OOO) that focuses on the creative reuse of digital elements to make new assemblages that solve communication and design problems. Particularly relevant for working with AI is Johnson-Eilola and Selber’s (2023) admonition that “OOO asks us to think like objects, decentering ourselves or flattening the normally hierarchical ontologies that put humans on top” (p. 86). In our new writing and communication environment, humans certainly cannot presume to be paramount in creativity, production, or importance. Rather than focusing their efforts on detecting students’ use of AI with software or embedded watermarks (Lancaster, 2023), an effort that promises frustration through a never-ending cycle of design, detection, and redesign of AI, instructors should focus on teaching students to use AI productively for specific sorts of writing and design tasks (Lancaster, 2023) and seek AI chatbots developed by more ethical human actors (Byrd, 2023). As Lancaster (2023) concludes, assignments that can easily be completed by AI should be rethought and potentially eliminated.

Lance Cummings, Associate Professor, University of North Carolina Wilmington, addresses the challenges to concepts of authorship and plagiarism posed by generative AI. He views digital technologies from a posthumanist perspective, arguing that humans and machines mutually construct and are constructed by their technologically facilitated interactions.

Video Transcript

Lance: When I think back to how the field of computers and writing has shaped the way that I think about and adopt new technologies, I go back to the article by James Porter called "Why Technology Matters [to Writing: A Cyberwriter's Tale]" (Computers & Composition, 2003), specifically its look at how our literacy narratives are structured by the influence of technology, which is often invisible in the way that we think and talk about writing and the writing process.
Lance: For example, simply the act of training somebody to have good handwriting, an example from Porter's article, isn't just about the act of writing. It's about how that technology is used ideologically or to instill discipline in students.
Lance: And one thing I'm constantly reminding myself, other teachers, and students as they engage with generative AI in the classroom and engage with conversations about generative AI is that how we approach these technologies is ultimately shaped by the beliefs and cultures around us that have informed our own mindset towards writing. So many of the conversations that we have about plagiarism or how not to use AI are informed by many of these attitudes.
Lance: I've actually created three categories that help me think about how to approach other people when talking about generative AI. So these are three categories I usually think about: There's the humanist, the technologist, and the posthumanist.
Lance: Humanists are extremely common and focus on defining humanity as the human ourselves, as the mind, the free will, rationality. Often not thinking about the interactions that we have with technology or how technology actually shapes the way that we think, or how it shapes how we deploy our free will—those kinds of things. And so oftentimes, humanists are really concerned about AI taking over these human qualities that we have.
Lance: Then you have the technologists which are seeing new technologies as useful tools, generally. So for example, AI can be a really useful tool and help us communicate more clearly or come up with ideas, integrate into the writing process in different ways. In general, AI is seen as a positive thing that can really help us become more human, enhance our humanity in some way.
Lance: And the third way is posthumanism. And this is really where I come across on this spectrum in thinking about technology and humanity as an interaction. That humanity and technology are co-constitutive. We shape each other. The technology around us shapes how we become human, how we think, how we make choices, how we decide to interact with other people. But we also shape technology. And for me, the big difference here is not necessarily the technology.
Lance: AI has been around for a while now. What's changed is the access to AI, and our ability to shape AI in different ways. So it's really not about generating text or having AI take over the writing process. What these new technologies, these new generative technologies, do for us is it allows for every person to shape how AI is functioning, in the writing process or elsewhere.
Lance: You can think of this as kind of akin to the shift from web 1.0 technologies to web 2.0 technologies. There was a huge mindset shift once people were able to write and post their own web pages instead of submitting their content to a webmaster.
Lance: And that is the change that we're dealing with today. And it's important for us as writing and computers scholars to not just think about the technology or about how we are incorporating it into the writing but also thinking about how it's shaping our beliefs, our cultures, our attitudes, and how we can help students take that bigger leap to thinking about what they can do to influence the world through the shaping of technology, not just the use of technology.

Writing with technologies in digital environments has also highlighted the need to examine the infrastructures—technical, cultural, and organizational—that do and do not enable technologies to function, be accessible, and participate in communicating content with human users. In considering infrastructure concerns related to digital composing, past scholarship again provides useful insights. As DeVoss et al. (2005) argue, we notice infrastructure at points of breakdown and disruption. DeVoss et al. (2005) highlight the when of infrastructure—and the systems that construct it and determine its use—which comes into play at points of disruption and continues to evolve with attempts at use and intervention when composing. Infrastructure is ubiquitous and relies on standards and other policies, often textual in nature (Frith, 2020), that facilitate its function and imbue the system with values, perspectives, and ideologies. DeVoss et al. (2005) highlight the importance of examining the “often invisible issues of policy, definition, and ideology” (p. 16) that underpin the infrastructures essential for digital compositions and composing practices. More recently, Frith (2020) provides an example methodology for investigating such technical standards in his study of the Tag Data Standard, “which is the major standard for the Electronic Product Code and a key piece of the Internet of Things” (p. 403). Frith (2020) highlights the role of standards written by people as “discursive infrastructure” (p. 403) in making the physical and technical assets of digital spaces function, but in a way that is generally invisible to humans interacting with those technologies:

Obviously, for people who create standards, these documents are a major part of their job. The standards seemingly disappear, on the other hand, when their guidelines are built into material objects and rendered invisible to end users. And related to relationality, one of my major arguments in this article is that technical standards show how writing can become an infrastructure upon which other infrastructures are built. Take the Internet as an example. The Internet is enabled by layers upon layers of material infrastructure, including cables, modems, and so forth. Those material infrastructures are built upon and shaped by international standards documents. (Frith, 2020, p. 406)

As DeVoss et al. (2005) and Frith (2020) both emphasize, numerous texts contribute to building technological infrastructures that enable digital compositions, including policies, standards, and codes.

Users often only pay attention to and consciously investigate infrastructures when they break down or prevent them from performing desired tasks; the scholarship of DeVoss et al. (2005) and Frith (2020) alerts us to examine the structural texts that underlie such disruptions and to locate the strategies and ideologies behind them. Generative AI is no different—the technical standards that make it work come into question when the technology causes problems or functions in an unexpected or seemingly aberrant manner. For example, generative AI can “hallucinate,” meaning that it produces rational and real-sounding content that is actually false (Lancaster, 2023). Hatem et al. (2023) cite ChatGPT 3.5’s own definition of hallucinations, in which the AI explains that such false information is generated when “a machine learning model, particularly deep learning models like generative models, tries to generate content that goes beyond what it has learned from its training data” (p. 2). The modeling done by generative AIs to produce text responses based on established paradigms without concern for content veracity could be seen as an extreme version of the need to harness identifiable genres for participatory communication as outlined by Bawarshi (2000). As Hatem et al. (2023) emphasize, AI hallucinations pose serious consequences for those relying on AI for healthcare information; such problems are structurally part of how current generative AI functions, making it crucial for humans to interrogate the veracity of the information they receive. As a side note, using the word hallucinations instead of the more accurate misinformation “is inaccurate and stigmatizing to both AI systems and individuals who experience hallucinations” (Hatem et al., 2023, p. 2).

That the technical constructs running generative AI result in the dissemination of false information highlights the need to closely examine the infrastructure, including the standards, that makes these technologies work. As we write this in October 2023, the Biden Administration has issued an Executive Order requiring standards for AI safety and security (The White House, 2023). This executive order addresses some of the dangers of AI identified by those who have worked on developing and using the technologies (e.g., Hatem et al., 2023). For instance, the Executive Order requires that “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government” and “develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy” (The White House, 2023). Additionally, the National Institute of Standards and Technology (NIST) is in the process of monitoring, developing, and encouraging adherence to standards for AI. As its website reflects, these standards are in flux, and AI continues to evolve. The research in our field proves useful in providing all scholars and writing instructors with the motivation, methodologies, and theoretical perspectives needed to examine infrastructures, including those important for and in development around AIs.

The final category in this section relates to the professional development needed to engage productively with digital composing technologies and to help students learn to work effectively with them. Parallel with the calls discussed above for writing scholars and instructors to redefine composition practices to include coding and collaborations with technologies is the recognition of the significant learning curve that scholars and instructors face. For example, Sheppard (2009) highlights the additional efforts required of faculty in computers and writing and digital rhetoric to help students learn to use new information and communication technologies in a sophisticated way, one requiring the “work of theory, analysis, and argumentation” (p. 123) related to these technologies. Sheppard (2009) identifies the skills needed to develop useful multimedia texts—both new literacies and technological skills. Generative AI makes even greater demands through exponential developments in technological complexity coupled with a lack of transparency in the design and structures that constitute these new technologies; as a result, a robust collaborative peer education environment is needed to meet this challenge (Byrd, 2023; Korzynski et al., 2023; Lancaster, 2023). As the immense scholarly interest in writing with AI demonstrates, scholars and instructors in computers and writing and digital rhetoric have embraced their responsibility to help their students navigate working successfully with this and all new information and communication technologies, just as they did in previous eras (Sheppard, 2009).

Access, Accessibility, and AI

Another theme within the scholarly tradition of computers and composition is access and accessibility. We tend to believe that digital access is ubiquitous, that everyone has access to the Internet and to information. While the digital divide has decreased since 2000, it continues to affect the same demographics now as it did then. Grabill (2003) explains this divide by noting how it affects people from diverse backgrounds: “In terms of these broad demographic categories, the divide still exists and is still highly correlated to income, education, and race/ethnicity” (p. 462). Composition scholarship has addressed access many times, most notably in a special issue of Kairos: A Journal of Rhetoric, Technology, and Pedagogy called Accessibility and Multimodal Composition (Yergeau et al., 2013). In this section, we extend previous discussions of access and accessibility as they relate to generative AI.

Access was an issue with many previous writing and communication technologies; it refers to users' ability to obtain the machines, the tools, and/or the information needed to complete the tasks that such access promises to enable. Without the fundamental tools to access information and engage in critical thinking about information found in online environments, students' educational growth is stifled, leaving them unprepared for rapidly changing workplace and organizational contexts. At the college level, we know that some universities, and departments within universities, provide more access than others. For example, Atkins and Reilly (2009) note this imbalance in their local ecology:

However, although students in our writing courses are often not aware of the roadblocks imposed by university infrastructures and institutional politics to developing sustainable new media composition initiatives, they are certainly cognizant of the personal consequences of these impediments: inadequate resources, inconvenient or irregular technological access, and inconsistencies in educational experiences across the same degree program. (n.p.)

We recognize the importance of access not only within college structures but in rural communities (among other kinds of communities) where access to the Internet, for instance, remains significantly impeded.

As many scholars of rhetoric and writing have highlighted for decades, African Americans and other communities of color have been systematically excluded from full access to and representation in digital environments. To redress these exclusions, scholars, including Adam Banks (2005), have proposed concrete strategies to expand access to and inclusion in digital spaces. Banks (2005) argues that African Americans have pursued “transformative access” to digital technologies and power through using these technologies. Banks' (2005) approach parallels that of our chapter: he examines African American rhetorical strategies in a variety of historical contexts, highlighting those that can be harnessed for interventions in current digital spaces. Charlton McIlwain (2020) also draws on history to demonstrate the often-overlooked participation of African Americans in the development of computer and digital technologies over time, as well as the U.S. federal government’s use of computer technologies to surveil and disrupt the Civil Rights Movement. Likewise, Brinkerhoff (2009) uses case studies of nine digital diaspora communities to examine how they harness online access and digital spaces to forge community, galvanize around their identities, and effect change both in their current environments and in the countries that they may have left. Her approach provides a framework for exploring the work of diasporic communities in digital spaces.

While access to digital devices and the Internet continues to improve, there are other, sometimes invisible, challenges related to access. Access becomes further complicated when we investigate more specifically who has access to knowledge about how interfaces are and can be constructed and who can affect the algorithms that control the data delivered in digital environments. As Grabill (2003) notes when referring to the multiple interconnected variables that influence digital access to information:

Understanding such a complex of connections allows for the development of a rhetoric of the everyday that has theoretical power and empirical relevance. It allows some understanding as to how culture is constructed, how identity is conceived and practiced, and how any number of public acts of persuasion are carried out and given meaning within concrete (and discursive) contexts. (p. 458)

Arola (2010) interrogates this hidden content that leaves some users disempowered when working with design templates, such as those rolled out by Web 2.0 technologies and social media platforms like MySpace and Facebook. As Arola (2010) points out repeatedly, from the title of her piece, “The Rise of the Template, the Fall of Design,” to her explanations of how users are “discouraged” from attempting to make adjustments or design decisions within the platforms of Facebook and MySpace:

In spite of what seems to be pedagogical attention toward modes beyond the alphabetic, we need to acknowledge that in practice Net Generation students, as well as ourselves, are discouraged in Web 2.0 from creating designs. We are certainly posting information, but this information has become “content” placed in a “form” beyond the user’s control. (p. 6)

Thus, when we talk about access, we mean not just access to the machine, the mobile device, or even the platforms, but also to the ways that any of them operate below the surface. Arola (2010) explains that what is missing from social media platforms is the ability to creatively alter designs and make the space a place of one's own. Access to algorithms, interfaces, design, and other tools that may be considered “back-end” operations is rendered invisible by labeling those operations as confidential and proprietary or as accessible only to those with specialized knowledge. In the case of Web 2.0 and social media platforms, design is lost: invisible and inaccessible.

Similar issues related to a lack of access to basic aspects of how technologies function, and a related inability to control their output, manifest when working with generative AI. For example, most commercial chatbots do not reveal the corpus used to train their AI; thus users cannot interrogate the content drawn upon to produce the output they receive, leaving them with an incomplete understanding of its rhetorical context. This is a different but related sort of access problem to those described above. McKee and Porter (2020) acknowledge the inability of AIs to address the rhetorical context of communication between machines and between the machine and humans. Generated output from AI can be unpredictable and can lack consideration of ethics and rhetorical principles:

The ethics of human-machine writing requires of both humans and machines a deeper understanding of context and a commitment to being a good human, a good machine, and a good human-machine speaking well together. (McKee & Porter, 2020, p. 111)

They highlight a key problem with both past technologies and generative AI: generative AI, like many new technologies we have encountered before, is rhetorical because of its dependence on users, speakers, or writers, yet it has no ability to understand ethics or the rhetorical situation. McKee and Porter (2020) argue that AI, like past technologies, cannot and does not address rhetorical concerns, noting that Microsoft's Twitterbot, for instance, was made available to the public without any contextual knowledge, “particularly, of what constitutes racism, sexism, homophobia and anti-semitism” (p. 111), thus creating a communicator who lacked rhetorical knowledge or any way of considering it when generating responses initiated by a user. Users’ lack of access to the functioning principles and texts informing the output of the AI deprives them of agency and forces them to accept output informed by a repository of bigoted texts, as Byrd (2023) highlights. No training in prompt engineering (discussed below), which itself presupposes access to instructional resources or informed teachers, can fully protect users from encountering content based upon the vast expanse of biased texts on which the AI was trained. As Byrd (2023) argues, the refusal of OpenAI to reveal details of their chatbot's architecture supports the idea that “ChatGPT may not be an ethical tool for our purposes as writers and researchers” (p. 138).

In contrast, generative AI may promise some advances when it comes to providing accessibility in digital spaces for individuals who experience content differently or have issues with visual processing. In previous scholarship that, as a product of its time, uses older approved terminologies, Browning (2014) expands on two models of disability: the medical model and the social model. She writes, “Many efforts at accommodating individuals with disabilities, though often well intentioned, coincide with a medical model of disability in that accommodations are simply added on to existing structures and systems” (pp. 98–99). To provide better access and accessibility, Browning says, “Rather than simply retrofitting our universities, our classroom spaces, and our pedagogies, we must actively integrate disability, in thoughtful and critical ways, into all aspects of our teaching” (p. 99). Wood (2017) agrees, arguing that the basic conceptions of time that structure in-class writing and longer writing projects developed outside of class need to be rethought to assist all students and that the resulting policies should be created with student input. Furthermore, Fox (2013) argues that focusing on disability studies in composition classrooms helps to highlight the mind-body connection embedded but often elided in the use and development of digital technologies and to foreground the ways that universal design helps to increase accessibility for all users.

Henneborn (2023) notes that society has not done well at providing accommodations for people with disabilities or at acknowledging disabilities that prevent users from participating in workplace technologies, observing that the history of the workplace has not been thoughtful in addressing accessibility: “We haven't done well as a society with the digital divide that exacerbates the barriers between persons with disabilities (as well as other marginalized communities) and others” (n.p.). As Kerschbaum (2013) highlights, even multimodal texts that are partially accessible, by, for example, providing a transcript of a video, prove to be inaccessible overall when another part of their content, such as the images or navigation, is essential and yet not designed to be accessible, restricting readers with particular disabilities from using the content.

In contrast to access, which is complicated by generative AI, these new technologies may have some potential to support accessibility in innovative ways. Businesses, corporations, and other institutions hope that generative AI and the emergence of ChatGPT will support their employees with disabilities. At its 2023 Ability Summit, Microsoft outlined its plans to employ AI tools in its products, such as Office 365, to alter contrast for some users and generate descriptions of images on demand for others (Cuevas, 2023). Henneborn (2023) argues that generative AI can address some accessibility challenges by creating “inclusive interfaces.” Tools like keyboard navigation, alternative text, voice-enabled interfaces/speech-to-text, text- and image-to-speech, color contrast, dyslexia-friendly fonts, and clear language are all considered basic requirements for inclusive interfaces and appropriate accessibility, and all can be enhanced by AI. Henneborn (2023) offers a few examples of such tools: “For instance, Google’s Dialogflow has built-in integration with Google Cloud Speech-to-Text API, allowing developers to create chatbots that support voice-enabled input” (n.p.). Dialogflow CX is the “advanced” edition, and Dialogflow ES is the standard one. Dialogflow allows individuals to create chatbots and/or voicebots; while the tool is not free, a free trial appears to be available (https://cloud.google.com/dialogflow). Another AI-powered tool is Be My Eyes, also referred to as Be My AI (https://www.bemyeyes.com), which provides visual assistance for users with low vision and is used by a number of large corporations, including Verizon, P&G, Google, and Microsoft. Generative AI also seems able to aid users who experience dyslexia through add-ons or plug-ins; for example, Dyslexie Font is a plug-in designed to make reading and understanding easier for readers with dyslexia (https://www.dyslexiefont.com). While many of these accessibility tools may currently be free, we know from the development of past technologies that monetization is almost inevitable. Support for accessibility will inevitably collide with issues of access when commercial enterprises seek to make money from the developing technologies. Even ChatGPT currently has a “freemium” model: users can use ChatGPT for free, but extending its use to other tools or to a “premium” version requires paying for an upgrade.
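
Of the baseline requirements listed above, color contrast is among the most precisely specified: WCAG 2.x defines a contrast ratio computed from the relative luminance of the foreground and background colors. The sketch below is our illustration of that published computation (the formulas come from WCAG, not from Henneborn), the kind of check any interface tool, AI-assisted or otherwise, relies on when evaluating or adjusting contrast:

```python
# WCAG 2.x contrast check: relative luminance, then ratio against the 4.5:1 AA bar.
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    def linearize(channel: int) -> float:
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Example: dark brown text on a beige background (hypothetical color values).
ratio = contrast_ratio((64, 37, 29), (245, 245, 220))
print(f"{ratio:.1f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} WCAG AA for body text")
```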

More information about the many effects of generative AI on access and accessibility will be known as we continue to deploy these systems in more contexts. As the technology develops, we need to draw upon the lessons from our scholarly tradition that tell us to interrogate the rhetorical context in which these technologies operate and examine the access users have to their architecture and to the full range of affordances that enable creative and innovative composition and use. The next section focuses on some pedagogical approaches to these issues through examining how scholars have proposed interacting with other technologies, such as Wikipedia.

Masking Mediation

Throughout the history of computers and writing and digital rhetoric, scholars have grappled with and embraced disruptive information and communication technologies and the challenges they pose to contemporary research and composing practices. As Bolter and Grusin (1999) and Bolter and Gromala (2003) highlight, new technologies often obscure their mediative practices, promising to provide direct, unmediated access to information, entertainment, and production. Any user’s understanding and awareness of a digital technology's process and depth of mediation—the degree to which the technology draws attention to its means of production, to how it provides access to content or delivers results—is partly determined by the user’s prior experience with similar technologies, proficiency with using new technologies, and curiosity about how the technologies work. Communication and information technologies gain power in part by obscuring the degree of mediation and promising users direct access to knowledge and information without requiring them to comprehend how the content is developed and delivered. One way to accomplish this transparency is to adopt the structural conventions of established communication technologies (Bolter & Grusin, 1999; Hocks, 2003).

The way that numerous scholars in computers and writing and digital rhetoric have approached teaching with Wikipedia since its inception in 2001 provides a productive model for navigating and composing with technologies that downplay and obscure their processes of mediation like generative AI; as a result, we will explore that model throughout this section. Like Wikipedia, generative AI can be approached as a transparent window into vast amounts of easily accessible knowledge without the user’s need to understand the technical mechanisms facilitating its production. Both Wikipedia and generative AI proffer information that appears reasonable and professional (Lancaster, 2023), persuading users, like our students, to accept the output at face value without critically examining its veracity.

To combat the seductive transparency of technologies like Wikipedia, scholars developed structured inquiry and assignments designed to transform student users from passive consumers into critical producers of content. In the case of Wikipedia, this requires individuals to work behind the surface of the encyclopedic content as displayed to understand the layers contributing to and supporting its production, including the organizational structures and policies, the debates within the Wikipedia community, and the technical knowhow needed to contribute correctly formatted content. As Reilly (2011) explains in her article about teaching Wikipedia as a “complex discourse community and multi-layered, knowledge-making experiment,” empowering student users to become critical producers of content with Wikipedia requires that they literally look behind the article layer of the text to interact with and contribute to the layers of the wiki technology that allow them to engage in conversation with other contributors (Discussion tab), edit the content of the article (Editing tab), and examine the complete history of changes to the text (History tab).
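
Students can also look behind the article layer programmatically. The sketch below, our addition rather than part of Reilly's assignment, retrieves the same revision history shown in the History tab through the public MediaWiki API (the article title is arbitrary):

```python
# Fetch the five most recent revisions of a Wikipedia article via the MediaWiki API.
import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "prop": "revisions",
        "titles": "Rhetoric",
        "rvprop": "timestamp|user|comment",
        "rvlimit": 5,
        "format": "json",
    },
    headers={"User-Agent": "classroom-demo/0.1 (course exercise)"},
    timeout=30,
)
page = next(iter(resp.json()["query"]["pages"].values()))
for rev in page["revisions"]:  # newest first
    print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```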

Based on their analysis of large-scale (6,000 students) survey research by the Wiki Education Foundation, Vetter et al. (2019) recommend best practices for Wikipedia writing assignments that include making the assignments “extended and substantial” to allow students to “learn about values, processes, and conventions of Wikipedia” (p. 62). Vetter et al. (2019) also recommend critical analyses of Wikipedia articles before contributing in order to develop critical thinking around how Wikipedia is designed, how content is supported, and how sources are cited. To design such opportunities for analyzing and contributing to Wikipedia, instructors need professional development to learn the content and technological intricacies of the platform (Sura, 2015).

In addition to analyzing the content and contributing to it, McDowell and Vetter (2020) argue that Wikipedia’s very practices and policies, particularly those requiring citation and verification for information, serve a pedagogical purpose and can be harnessed to help students develop information literacies related to the legitimacy of online information. The policies prompt students to learn how to analyze information for its veracity—not rely on others (McDowell & Vetter, 2020; see also Azar, 2023). In addition, Wikipedia has the benefit of being a community governed by the collective (run by a nonprofit): new participants can learn to navigate the norms (McDowell & Vetter, 2020) and work with other contributors in an asynchronous but interactive manner. As Purdy and Walker (2012) explain, contributing to wiki-based compositions foregrounds the importance of dialogue for knowledge production. McDowell and Vetter (2020) argue that Wikipedia's policies requiring verification, a neutral tone, collaboration, and citations educate new users and enlist them to maintain the legitimacy of content on the site and to participate in removing or questioning that which is not supported by sources and verifiable knowledge (Azar, 2023)—through such policies, contributors to Wikipedia develop critical digital and information literacies that they can employ in other contexts. Finally, Wikipedia uses community-based policies “to reconstruct more traditional models of authority” that support the legitimacy and veracity of the content; and Wikipedia is transparent about its purpose and intentions, unlike most other (commercial) sites and apps online (McDowell & Vetter, 2020).

Many of the lessons outlined above related to working with Wikipedia and exposing its processes of content mediation to conscious examination and interrogation can be adapted to help our students work with and critically analyze the output of generative AI. This process can begin by examining how generative AIs produce content. Byrd (2023), Lancaster (2023), and many others help to demystify for instructors and students how AI technologies like ChatGPT work from a technical perspective. As was the case when teaching with Wikipedia and other new technologies, highlighting the technology’s processes of mediation entails gaining a basic understanding of the technical specifications that power it. Students need to learn that ChatGPT, for example, is a large language model (LLM), meaning that it has been trained to produce new language modeled on the texts it has processed in response to what it is asked to generate (Byrd, 2023; Lancaster, 2023). As Byrd (2023) clearly explains, “[LLMs] have really created mathematical formulas to predict the next token in a string of words from analyzing patterns of human language. They learn a form of language, but do not understand the implicit meaning behind it” (p. 136). As a result, AI can produce false information when the corpus it uses does not contain the accurate content (Byrd, 2023; Cardon et al., 2023; Hatem et al., 2023; Lancaster, 2023). Understanding these technical processes can help students approach the output from AIs more critically and skeptically, just as they are taught to do in relation to Wikipedia. Such instruction provides inoculation, demystifying the output and opening it to interrogation.
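
A toy model can make Byrd's point tangible. The sketch below is ours, and it is deliberately crude (real LLMs use neural networks trained on vast corpora, not bigram counts), but it shows how predicting each next token from observed patterns yields fluent text with no check on truth:

```python
# A bigram "language model": patterns in, plausible-but-unchecked text out.
from collections import Counter, defaultdict

corpus = ("the capital of france is paris . "
          "the capital of idaho is boise . "
          "the capital of mars is olympus .")

bigrams: defaultdict[str, Counter] = defaultdict(Counter)
tokens = corpus.split()
for current, nxt in zip(tokens, tokens[1:]):
    bigrams[current][nxt] += 1

def predict_next(token: str) -> str:
    # Most frequent continuation; ties resolve to the earliest-seen option.
    return bigrams[token].most_common(1)[0][0]

sequence = ["the", "capital", "of", "idaho"]
for _ in range(2):
    sequence.append(predict_next(sequence[-1]))
print(" ".join(sequence))  # "the capital of idaho is paris": fluent, and false
```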

Edzordzi Agbozo, Assistant Professor, University of North Carolina Wilmington, describes innovative assignments that he uses to help students interrogate the output of generative AIs.

Video Transcript

Edzordzi: Hello Colleen and Tony. You gave me a question about some central lessons, or a central lesson, that our field of computers and writing could take from the past in order to engage with the new phenomenon of generative AI and its upsurge in writing.
Edzordzi: My response is that the field of composition has always been intentionally engaged with technologies. And so when I saw your question, Jason Palmeri and Ben McCorkle's 2021 book on writing pedagogy over the last 100 years came to mind.
Edzordzi: Scholars in the field have always demonstrated that technology is their resource. So I think that we should continue treating writing generative AI as a resource going forward. Specifically, I have two lessons or thoughts on the question.
Edzordzi: The first is that we must continue to interrogate writing itself, based on the research and practices of the past. We have come to blur the differences between writing as alphanumeric scribbling and writing as a very complex, technology-mediated form of communication. So in this moment of ChatGPT and generative AI tools, I think that we need to revisit this question of writing and think of writing as a posthuman activity: a posthuman activity that involves, to a large extent, generative AI. But we must also help our students to understand the limits of that relationship between generative AI and writing.
Edzordzi: Two: the lessons from the past show us that our field has not rejected technologies such as the image. So I would say we must incorporate generative AI into our pedagogy. Generative AI has become a significant tool of our time, and whether we like it or not, our students are going to use it. So it is important that we bring it into our classroom and then curate how students can have some kind of relationship with this technology.
Edzordzi: In my own class I do this in two ways. One is comparative analysis of AI texts and student texts. So in some of my classes, students generate responses to prompts and feed the same prompts to ChatGPT. Then they compare their own writing in response to the prompt with ChatGPT's response and analyze both. They analyze to look for style, vocabulary, complexity of vocabulary, citation, and all those other things that we look for when we look at writing.
Edzordzi: The second is brainstorming activities. In some of my classes, I use ChatGPT as the first tool to think about a topic that students want to write on. So we would put our topics into ChatGPT and ChatGPT generates the ideas related to the topic. And then students now form a topic out of the various ideas that ChatGPT has produced.
Edzordzi: These two activities help students to understand that ChatGPT can be useful, but it still needs the human creative elements that make writing a complex, exciting activity, which at this moment ChatGPT, or generative AI generally, doesn't have.
Edzordzi: So going forward, I would say or in conclusion, I would say I think the field needs to revisit what it means to do writing at this moment, and hopefully in the future. And two, we must continue to incorporate generative AI into our classes, and help students form an ethical relationship with these technologies before they step out into the world.
Edzordzi: Thank you very much.

Students also need to be taught the protocols for productive use of generative AI, as they do with Wikipedia. Prompt engineering is the process of iteratively developing instructions and queries to submit to the AI to garner superior output (Korzynski et al., 2023; Lo, 2023). As Korzynski et al. (2023) emphasize, prompt engineering is a human language process requiring collaboration with AI. Just as students must learn how to structure and tag their Wikipedia articles to meet the genre specifications approved by the community, students also have to structure queries to gain the best results from the AI. They also need to dialogue with the AI, as they did with other contributors in the Talk tab in Wikipedia, to participate fully in a successful collaboration. The obvious difference is that when writing for Wikipedia, students collaborate with other users, not with AI. A number of scholars have developed frameworks to guide prompt engineering. For example, Lo (2023) outlines the CLEAR framework: prompts to AIs should be concise, logical, explicit, adaptive, and reflective. Importantly, this framework emphasizes that success is produced iteratively and contextually in response to the output of the AI and the purpose of use (Lo, 2023). Korzynski et al. (2023) review a range of other similar approaches to prompt engineering; they outline the essential elements of useful prompts, including the context or role, the instruction or task, the relevant data, and the genre or form for the output (p. 30). Such discussions of prompt engineering emphasize that scholars, instructors, and students can learn to collaborate productively with generative AI, as they do with Wikipedia and its corresponding community, and overcome hurdles to engaging with it productively.
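
To illustrate, a prompt built from the four elements Korzynski et al. (2023) identify can be assembled with a simple template; the function and wording below are our hypothetical example, not a format prescribed by the scholarship:

```python
# Assemble a prompt from context/role, instruction/task, data, and output form.
def build_prompt(role: str, task: str, data: str, output_form: str) -> str:
    return (f"Role: {role}\n"
            f"Task: {task}\n"
            f"Data: {data}\n"
            f"Output form: {output_form}")

draft_prompt = build_prompt(
    role="You are a peer tutor in a first-year writing course.",
    task="Suggest two ways to sharpen the thesis statement below.",
    data="Thesis: Social media changes how people communicate.",
    output_form="A numbered list with one sentence of rationale per item.",
)
print(draft_prompt)
```

Per the reflective element of Lo's (2023) CLEAR framework, a writer would then revise this draft prompt iteratively in response to the output it produces.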

Just as scholars recommend critically analyzing Wikipedia articles prior to using content from or contributing to Wikipedia, students must learn to do the same with the output of generative AIs. Lancaster (2023) recommends finding sources to corroborate and support the veracity of content generated by AIs. Scholars already report developing assignments that ask students to investigate the veracity and usefulness of text produced by an AI (Byrd, 2023). As noted in the previous section, once students understand that the quality of the output is driven in part by the quality of the input, they may gain the agency and confidence to critique the information an AI produces.

Some of our lessons from teaching with other technologies like Wikipedia do not apply to generative AIs. For example, as discussed above, the mechanisms of mediation in generative AI are often proprietary, making it impossible to fully comprehend how the technology delivers content, as we can with Wikipedia. AI companies are commercial enterprises, unlike Wikipedia, which is a nonprofit. In response, Byrd (2023) recommends using open-access LLMs to produce content more ethically and transparently. Finally, generative AI's rapid evolution may eventually make it impossible for human readers to detect its output as machine-generated, mediated content. Lancaster (2023) proposes adding watermarks to AI-generated content but acknowledges the potential futility of that approach. Watermarking would also require coordination and cooperation with corporate entities that, as noted above, maintain control over their technologies and standards and have little to gain from revealing proprietary information or demystifying the power of their chatbots to magically anticipate users' needs and surprise them with content they can use as their own.
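
Lancaster's (2023) watermarking proposal can be made more concrete with a toy example. The sketch below is our own hypothetical illustration and does not reproduce Lancaster's method; it loosely follows the "green list" approach discussed in the machine learning literature, in which a cooperating generator statistically favors a secretly seeded subset of the vocabulary and a detector holding the same seed checks whether a text overuses that subset. Real schemes operate on model tokens rather than words and, as the paragraph above notes, depend on exactly the corporate cooperation that is unlikely to materialize.

# Toy watermark detector (hypothetical illustration, not Lancaster's scheme).
# A cooperating AI provider would bias generation toward a secret, seeded
# "green" subset of words; a detector with the seed checks whether a text
# uses green words far more often than the ~50% chance rate.

import hashlib

def is_green(word: str, secret: str = "shared-secret") -> bool:
    """Deterministically assign each word to the green (True) or red set."""
    digest = hashlib.sha256((secret + word.lower()).encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words land in green

def green_fraction(text: str) -> float:
    """Fraction of words in the green set; ~0.5 for unwatermarked text."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

sample = "students compare their own drafts with machine generated prose"
print(f"green fraction: {green_fraction(sample):.2f}")

Even this simple model makes the fragility visible: paraphrasing the text shifts the green fraction back toward chance, which is one reason Lancaster hedges on the approach's long-term viability.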

As the above discussion reflects, the work of scholars and instructors with previous technologies like Wikipedia can provide insight into what questions to ask and how to advocate for restrictions and guidelines that protect students and the public in their work with generative AIs. Developing educational policies and best practices around writing with and using digital content, like Wikipedia, was necessary, and invested scholars in our profession need to do the same for AI.

Gavin P. Johnson, Assistant Professor at Texas A&M University-Commerce, emphasizes the importance of remembering the intersections between identity, power, and technology.

Video Transcript

Gavin: When considering the lessons that we can learn from computers and writing and how they can impact our pedagogies with artificial intelligence and generative AI, part of what I really think is important for us to remember are the intersections between identity, power, and technology. Now, a very common thread in our scholarship is that technologies are never neutral. They are always rhetorical and political and material.
Gavin: And so when we're thinking about how technologies are never neutral, and we apply that to AI technologies, we have to think critically about what is training AI. Not only do we have to think critically about that, we have to prepare our students to think critically about what is training AI, what data is being pulled to train that AI, and how is that training impacting its algorithm and how that impacts the way that students and other writers and composers will interact with the technology.
Gavin: One of the main concerns I have, and this is also something that I think is worth pulling from the history of computers and writing, are issues of privacy and surveillance as they relate to AI. And teaching students about privacy and surveillance is absolutely essential right now.
Gavin: And so one of the big lessons that I hope that we take from the history of computers and writing is to think about how technology is not neutral, how issues of privacy and surveillance need to be interrogated, because privacy and surveillance disproportionately and negatively impact marginalized communities.
Gavin: And also just thinking about how do students navigate these complex topics without giving up? I am quite concerned about surveillance apathy, privacy apathy, where students just feel that because they live so much of their lives with technology, there's no getting out of a surveillance culture.
Gavin: And so I think it's important for us to empower students both through our critical teaching but also by engaging students in how to use technologies ethically. And how to think about technologies as ethical tools that can be used for good.
Gavin: The other aspect that I think is particularly important, and it's a lesson that I think we need to carry forward, is that our field has been really strong in interdisciplinarity, in thinking about how knowledge is not siloed but importantly crosses physical and discursive boundaries to make new knowledge. So I think it's really important that in this moment we continue to push towards interdisciplinary thinking.
Gavin: Not only searching out technical experts in computer science, but also seeking out our colleagues in education and philosophy, and really thinking critically about how we can all work together to think through AI and teach with AI.

Conclusion

Our chapter highlights several of the major themes of scholarship related to computers and writing and digital composing and connects that legacy of scholarship to current issues related to writing with generative AI. Numerous additional themes are relevant to this discussion, including issues of privacy and surveillance, which Johnson also highlights in his video response (above) to our question. Our scholarly tradition of interrogating privacy and surveillance in digital spaces is robust and growing and includes numerous recent articles, special issues, and books, including a publication in Kairos based on a Town Hall at Computers and Writing in 2015 (Beck et al., 2016), a collection of essays edited by Beck and Hutchinson Campos (2020), and a special issue of Computers and Composition edited by Hutchinson Campos and Novotny (2021). The work in these publications highlights the largely invisible intrusions into privacy and the ubiquitous surveillance that individuals encounter when learning, working, and playing in digital environments. It also emphasizes the lengths to which the corporations and developers behind these technologies will go to hide those risks and to keep the infrastructure and algorithms powering the technologies invisible to users and unavailable for scrutiny as proprietary information.

This line of scholarship provides guidance for examining the potential threats to privacy posed by generative AI, which, given its newness, remain somewhat murky and speculative. Our scholarly tradition warns us to be skeptical and wary, but we, like government researchers (Busch, 2023) and data privacy enterprises (Securiti Research Team, 2023), must extrapolate the potential harms from what is known about how generative AIs function and how privacy has been compromised by similar technologies. For example, a review of basic privacy scholarship highlights that any corpus of data is vulnerable to hacking, exploitation, or accidental leakage, and AI is no different. The vast stores of data collected to train a chatbot like ChatGPT have already been compromised and will continue to leave users, and those whose data has been secretly collected to train AIs, vulnerable to the release of personal medical, financial, social, and other information (Securiti Research Team, 2023). As Morris (2020) explains, users of chatbots who have rare disabilities may be at even greater risk of privacy violations when using the AI to learn about their conditions and seek treatment options. As she notes, “past incidents of re-identification of individuals from anonymized datasets…indicate the difficulty of truly anonymizing data” (Morris, 2020, p. 36). In 2023, not only do individuals with disabilities or illnesses risk exposure, but so do women seeking reproductive healthcare options and individuals from other marginalized communities, such as youth who identify as trans. Our scholarly tradition helps us to identify risks related to privacy and to alert our students, colleagues, and others, but, unfortunately, it provides few tangible solutions to these and the other significant problems outlined above related to generative AI and the technologies to follow.

That the problems are intractable and not yet fully known cannot cause us to give up. As Johnson’s video highlights, as teachers we cannot surrender; we must help our students resist the apathy that can result from the enormity of negotiating rapid and risky technological changes pedagogically, organizationally, and socially. The scholarship explored above reflects that our field has faced such upheaval before and found ways to work with and through the technologies, whatever form they take. We take inspiration and instruction from that history.

References

Arola, Kristin L. (2010). The design of web 2.0: The rise of the template, the fall of design. Computers and Composition, 27(1), 4–14. doi:10.1016/j.compcom.2009.11.004

Atkins, Anthony T., & Reilly, Colleen A. (2009). Stifling innovation: The impact of resource-poor techno-ecologies on student technology use. In Dànielle N. DeVoss, Heidi A. McKee, & Richard (Dickie) Selfe (Eds.), Technological ecologies and sustainability. Computers and Composition Digital Press/Utah State University Press. http://ccdigitalpress.org/ebooks-and-projects/tes

Azar, Tawnya. (2023). Wikipedia: One of the last, best internet spaces for teaching digital literacy, public writing, and research skills in first year composition. Computers and Composition, 68, 102774. doi:10.1016/j.compcom.2023.102774

Banks, Adam J. (2005). Race, rhetoric, and technology: Searching for higher ground. Taylor and Francis.

Bawarshi, Anis. (2000). The genre function. College English, 62(3), 335–360.

Beck, Estee N., Crow, Angela, McKee, Heidi A., deWinter, Jennifer, Reilly, Colleen A., Vie, Stephanie, Gonzales, Laura, & DeVoss, Dànielle Nicole. (2016). Writing in an age of surveillance, privacy, and net neutrality. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 20(2). http://kairos.technorhetoric.net/20.2/topoi/beck-et-al

Beck, Estee, & Hutchinson Campos, Les (Eds.). (2020). Privacy matters: Conversations about surveillance within and beyond the classroom. Utah State University Press.

Bolter, Jay David, & Gromala, Diane (2003). Windows and mirrors: Interaction design, digital art, and the myth of transparency. MIT Press.

Bolter, Jay David, & Grusin, Richard. (1999). Remediation: Understanding new media. MIT Press.

Brinkerhoff, Jennifer M. (2009). Digital diasporas: Identity and transnational engagement. Cambridge University Press.

Browning, E. R. (2014). Disability studies in the composition classroom. Composition Studies, 42(2), 96–117. http://www.jstor.org/stable/4350185

Busch, Kristen E. (2023, May 23). Generative artificial intelligence and data privacy: A primer. Congressional Research Service. https://crsreports.congress.gov/product/pdf/R/R47569

Byrd, Antonio. (2023). Truth-telling: Critical inquiries on LLMs and the corpus texts that train them. Composition Studies, 51(1), 135–142.

Cardon, Peter, Fleischmann, Carolin, Aritz, Jolanta, & Logemann, Minna. (2023). The challenges and opportunities of AI-assisted writing: Developing AI literacy for the AI age. Business and Professional Communication Quarterly, 86(3). doi:10.1177/23294906231176517

Cripps, Michael J. (2004). #FFFFFF, #000000, #808080: Hypertext theory and WebDev in the composition classroom. Computers and Composition Online. https://web.archive.org/web/20130804180635/http://www.bgsu.edu/departments/english/cconline/cripps/

Cuevas, Zachery. (2023, March 9). Accessibility and AI: Microsoft details its plans for a more inclusive future. PC Magazine. https://www.pcmag.com/news/accessibility-and-ai-microsoft-details-its-plans-for-a-more-inclusive-future

Cummings, Robert E. (2006). Coding with power: Toward a rhetoric of computer coding and composition. Computers and Composition, 23, 430–443.

DeVoss, Dànielle Nicole, Cushman, Ellen, & Grabill, Jeffrey T. (2005). Infrastructure and composing: The when of new-media writing. College Composition and Communication, 57(1), 14–44.

Eyman, Douglas, & Ball, Cheryl E. (2014). Composing for digital publication: Rhetoric, design, code. Composition Studies, 42(1), 114–117.

Fox, Bess. (2013). Embodying the writing in the multimedia classroom through disability studies. Computers and Composition, 30, 266–282. doi:10.1016/j.compcom.2013.10.003

Frith, Jordan. (2020). Technical standards and a theory of writing as infrastructure. Written Communication, 37(3), 401–427.

Grabill, Jeffrey T. (2003). On divides and interfaces: Access, class, and computers. Computers and Composition, 20(4), 455–472. doi:10.1016/j.compcom.2003.08.017

Hatem, Rami, Simmons, Brianna, & Thornton, Joseph E. (2023). A call to address AI “hallucinations” and how healthcare professionals can mitigate their risks. Cureus, 15(9), e44720. doi:10.7759/cureus.44720

Hawisher, Gail E., LeBlanc, Paul, Moran, Charles, & Selfe, Cynthia L. (1996). Computers and the teaching of writing in American higher education, 1979–1994: A history. Ablex Publishing.

Henneborn, Laurie. (2023, August). Designing generative AI to work for people with disabilities. Harvard Business Review Online. https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities

Hocks, Mary E. (2003). Understanding visual rhetoric in digital writing environments. College Composition and Communication, 54(4), 629–656.

Hutchinson Campos, Les, & Novotny, Maria (Eds.). (2021). Rhetorics of data: Collection, consent, & critical literacies. Special issue of Computers and Composition, 61.

Johnson, Gavin P. (2023). Don’t act like you forgot: Approaching another literacy “crisis” by (re)considering what we know about teaching writing with and through technologies. Composition Studies, 51(1), 169–175.

Johnson, Gavin P., & Agbozo, G. Edzordzi. (2022). New histories and theories of writing with/through technologies: A review essay. Composition Studies, 50(3), 158–66.

Johnson-Eilola, Johndan, & Selber, Stuart A. (2007). Plagiarism, originality, and assemblage. Computers and Composition, 24, 375–403.

Johnson-Eilola, Johndan, & Selber, Stuart A. (2023). Technical communication as assemblage. Technical Communication Quarterly, 32(1), 79–97. doi:10.1080/10572252.2022.2036815

Kerschbaum, Stephanie. (2013). Modality. In M. Remi Yergeau, Elizabeth Brewer, Stephanie L. Kerschbaum, Sushil Oswal, Margaret Price, Michael J. Salvo, Cynthia L. Selfe, & Franny Howes. Multimodality in motion: Disability and kairotic spaces. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 18(1). https://kairos.technorhetoric.net/18.1/coverweb/yergeau-et-al/pages/mod/index.html

Korzynski, Pawel, Mazurek, Grzegorz, Krzypkowska, Pamela, & Kurasinski, Artur. (2023). Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT. Entrepreneurial Business and Economics Review, 11(3), 25–37. doi:10.15678/EBER.2023.110302

Kress, Gunther. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22, 5–22.

Lancaster, Thomas. (2023). Artificial intelligence, text generation tools, and ChatGPT—Does digital watermarking offer a solution? International Journal for Educational Integrity, 19(10). doi:10.1007/s40979-023-00131-6

Lo, Leo S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. Journal of Academic Librarianship, 49, 102720. doi:10.1016/j.acalib.2023.102720

McDowell, Zachary J., & Vetter, Matthew A. (2020, July–September). It takes a village to combat a fake news army: Wikipedia's community and policies for information literacy. Social Media + Society. doi:10.1177/2056305120937309

McIlwain, Charlton D. (2020). Black software: The internet and racial justice, from the Afronet to Black Lives Matter. Oxford University Press.

McKee, Heidi, & Porter, James E. (2020). Ethics for AI writing: The importance of rhetorical context. In Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20), New York, NY, USA. doi:10.1145/3375627.3375811

Morris, Meredith Ringel. (2020, June). AI and accessibility: A discussion of ethical considerations. Communications of the ACM, 63(6), 35–37.

National Institute of Standards and Technology (NIST). (2023, May 4). Technical AI standards. Artificial Intelligence. https://www.nist.gov/artificial-intelligence/technical-ai-standards

Palmquist, Michael. (2010). Tracing the development of digital tools for writers and writing teachers. In Ollie O. Oviedo, Joyce R. Walker, & Byron Hawk (Eds.), Digital tools in composition studies: Critical dimensions and implications. Hampton Press.

Porter, James, & Sullivan, Patricia. (1997). Opening spaces: Writing technologies and critical research practices. Ablex Publishing.

Purdy, James P., & Walker, Joyce R. (2012). Scholarship on the move: A rhetorical analysis of scholarly activity in digital spaces. In Debra Journet, Cheryl E. Ball, & Ryan Trauman (Eds.), The new work of composing. Computers and Composition Digital Press/Utah State University Press. https://ccdigitalpress.org/book/nwc/chapters/purdy-walker/index.html

Reilly, Colleen A. (2011). Teaching Wikipedia as a mirrored technology. First Monday, 16(1–3). https://firstmonday.org/ojs/index.php/fm/article/download/2824/2746

Securiti Research Team. (2023, September 21). Navigating generative AI privacy: Challenges and safeguarding tips. Securiti Knowledge Center. https://securiti.ai/generative-ai-privacy/

Selfe, Cynthia L. (1999). Technology and literacy: A story about the perils of not paying attention. College Composition and Communication, 50(3), 411–436.

Sheppard, Jennifer. (2009). The rhetorical work of multimedia production practices: It's more than just technical skill. Computers and Composition, 26, 122–131.

Sorapure, Madeleine. (2006). Text, image, code, comment: Writing in Flash. Computers and Composition, 23, 412–429.

Stevens, Christy R. (2016). Citation generators, OWL, and the persistence of error-ridden references: An assessment for learning approach to citation errors. The Journal of Academic Librarianship, 42, 712–718. doi:10.1016/j.acalib.2016.07.003

Sura, Thomas. (2015). Infrastructure and wiki pedagogy: A multi-case study. Computers and Composition, 37, 14–30.

Vetter, Matthew A., McDowell, Zachary J., & Stewart, Mahala. (2019). From opportunities to outcomes: The Wikipedia-based writing assignment. Computers and Composition, 52, 53–64.

White House. (2023, October 30). Fact sheet: President Biden issues executive order on safe, secure, and trustworthy artificial intelligence. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Wood, Tara. (2017). Cripping time in the college composition classroom. College Composition and Communication, 69(2), 260–286. https://www.jstor.org/stable/44783615

Wysocki, Anne Frances. (2001). Impossibly distinct: On form/content and word/image in two pieces of computer-based interactive multimedia. Computers and Composition, 18, 137–162.

Yancey, Kathleen Blake. (2004). Made not only in words: Composition in a new key. College Composition and Communication, 56(2), 297–328.

Yancey, Kathleen Blake, & McElroy, Stephen J. (Eds.). (2017). Assembling composition. NCTE.

Yergeau, M. Remi, Brewer, Elizabeth, Kerschbaum, Stephanie L., Oswal, Sushil, Price, Margaret, Salvo, Michael J., Selfe, Cynthia L., & Howes, Franny. (2013). Multimodality in motion: Disability and kairotic spaces. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 18(1). https://kairos.technorhetoric.net/18.1/coverweb/yergeau-et-al/pages/access.html