
The Construction of Authorship and Writing in Journal and Publisher AI Policy Statements
James P. Purdy Duquesne University
Introduction
To say ChatGPT and similar generative artificial intelligence (AI) chatbots have captured the attention of academia is a vast understatement. Indeed, since the Microsoft-funded U.S. company OpenAI released ChatGPT to the public on November 30, 2022, popular postsecondary education publications like Inside Higher Ed and The Chronicle of Higher Education have, as of this writing, published an article or editorial about ChatGPT in nearly every issue. Not since Wikipedia has there been such panic about a digital technology’s potentially negative effect on education, especially on writing. ChatGPT has even been called a “plague” comparable to COVID-19, a disease that killed millions, with its release characterized as a “superspreader event” (Weissman, 2023).
Stakeholders have probed ChatGPT’s impacts on pedagogy (Geher, 2023; Heaven, 2023), cheating and plagiarism (Cotton et al., 2023; Dehouche, 2021), privacy (Cuthbertson, 2023; Satariano, 2023), labor (Chen et al., 2023), and other areas. Given ChatGPT’s accessibility to students, universities have scrambled to update their academic integrity policies (Barnett, 2023), and, in turn, software developers have hurried to create tools for identifying texts written by generative AI (Heikkilä, 2023; Newton, 2023). Some academic journals and publishers likewise have been quick to draft new publication policies or update existing ones in response to generative AI. These policies themselves have yet to receive critical attention, however. This chapter addresses that gap. After briefly identifying affordances and constraints of generative AI like ChatGPT, it situates policy responses to ChatGPT in relation to existing computers and writing scholarship, including Baron (2000), Burns (1983), and Herrington and Moran (2001). The chapter then describes the study method and explains its results, including what ChatGPT itself says about publisher and journal AI policies.
These AI policies merit careful attention for two reasons. First, concerns about generative AI center on its capacity to write prose that reads as if written by a human. We worry about generative AI, in other words, because of its potential to masquerade as human and create intellectual property. Second, all the AI policies analyzed for this study forbid listing AI as an author because generative AI does not meet their definitions of authorship. That is, in arguing generative AI cannot be listed as an author, these policies define what authors, and by extension writing, are and should be. We should care about these definitions because they are at the core of our work as computers and writing scholars. Based on content analysis and close reading of eleven journal and publisher AI policies published within six months of ChatGPT’s public release, this study reveals that while these policies establish authors as needing to be humans, they construct writing as transactional, in James Britton’s (1982) terms, and as a product to be assessed. Missing from these policies is consideration of the intellectual growth and knowledge production that are lost when AI writes the prose that circulates in academic publications.
Understanding Generative AI
Understanding the technology of generative AI like ChatGPT helps us recognize its affordances and constraints and the resultant ways in which it shapes writing process and product. ChatGPT, whose name derives from Generative Pre-trained Transformer, is a large language model (LLM), a type of generative AI that predicts and generates language based on its reading of a massive corpus of texts. An LLM can be used to summarize content in its corpus, generate new content based on its corpus, and translate into other languages present in its corpus. Chatbot LLMs like ChatGPT can “converse” with users by responding to their questions or instructions (Kerner, 2023); a minimal sketch of such an exchange follows Table 1. ChatGPT-3, the free version available at the time of this writing, was trained on a corpus comprising 45 terabytes of data, including multiple datasets as outlined in Table 1 (Cooper, 2021). The exact composition of these datasets is unknown.
Table 1: Datasets in the ChatGPT-3 Training Corpus
Dataset | Definition | Percentage of Training Corpus |
---|---|---|
Common Crawl | Lightly filtered raw web page data, metadata, and text gathered over 8 years of web crawling | 60% |
WebText2 | Text from all outbound Reddit links from posts with 3 or more upvotes | 22% |
Books1 | An online corpus of books | 8% |
Books2 | An online corpus of books | 8% |
Wikipedia | Articles from the English language version | 3% |
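To make this conversational interaction concrete, the following minimal sketch shows one way a chatbot LLM might be prompted programmatically, using OpenAI’s Python library (the v1.x interface). The model name and prompt are illustrative assumptions, not details drawn from any policy studied here.

```python
# A minimal, hypothetical sketch of prompting a chatbot LLM from code,
# using OpenAI's Python library (v1.x interface). The model name and
# prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # a free-tier model contemporaneous with this chapter
    messages=[
        {"role": "user",
         "content": "Summarize, in two sentences, why a journal might "
                    "forbid listing AI as an author."},
    ],
)

# The model returns text predicted from patterns in its training corpus.
print(response.choices[0].message.content)
```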
This expansive corpus results in responses that are sometimes remarkably cogent, informative, and even well written. However, ChatGPT is only as good as the texts in its corpus, and using this corpus as the basis for training can cause several writing problems. First, some of this text is offensive, biased, and flat-out inaccurate. Think about the outcry over offensive Reddit content (Hussain, 2009; Mak, 2018) and the skewed demographic representation of Wikipedia contributors (Bear & Collier, 2016; Gruwell, 2015; Lam et al., 2011). Second, ChatGPT heightens DEI (diversity, equity, and inclusion) concerns that have become more visible in higher education in the last several years. ChatGPT privileges language that follows the rules of its corpus, that is, Westernized English and homogenized language without “accents.” Moreover, the most sophisticated version of ChatGPT as of this writing, ChatGPT-4, requires payment, so it is available only to those who can afford to buy it. Those who pay more can get better (i.e., more human-sounding) writing. Third, ChatGPT contributes to the “fake news” misinformation culture increasingly prevalent in the last 10 years: it confidently circulates misinformation and replicates biased views. This is not to say that ChatGPT cannot be used productively for writing or writing instruction. It is to say that ChatGPT, like all writing tools, is best used with careful consideration of its limitations. The journal and publisher policies examined for this study seek to outline those limitations and ask authors to work within them.
Because computers are built to handle math, generative AI like ChatGPT works by turning language into math. In particular, it attends to probability. As such, generative AI treats humans as pattern-producing machines. In the corpus from which ChatGPT learned, for instance, certain words are more likely to follow from and be grouped with other words, and the answers it returns reflect these associations. Words that appear less frequently in the corpus yield poorer predictions than words used more often. ChatGPT generates language; it does not generate new knowledge. At its most fundamental level, it (re)arranges words that appear in its corpus, as the toy model below illustrates. But it can do so very well.
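To illustrate this word-prediction logic, the toy model below estimates next-word frequencies from a tiny invented corpus and generates text by sampling likely continuations. It is a deliberately simplified sketch of the statistical principle at work, not of ChatGPT’s actual neural architecture; the sample corpus and function name are my own.

```python
import random
from collections import Counter, defaultdict

# Count which words follow which in a tiny, invented corpus. Real LLMs
# learn such associations (over much longer contexts) from terabytes of text.
corpus = (
    "authors take responsibility for the text . "
    "authors take responsibility for the integrity of the text . "
    "authors approve the text ."
).split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=8):
    """Generate words by sampling continuations in proportion to their frequency."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no observed continuation for this word
            break
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("authors"))
# Frequent pairings ("take responsibility") dominate the output; words that
# appear rarely in the corpus yield poorer predictions, as noted above.
```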
AI’s role in writing is nothing new. Even before chatbots like ChatGPT, AI played an increasingly prevalent role in our writing activities. During drafting, for instance, predictive text in word processing programs like Microsoft Word, email programs like Outlook, and texting software and apps suggests what words come next. As I write this chapter, for instance, Word predicts what word I am typing and what words could follow, giving me the opportunity to press Tab to accept its suggestions. GenText AI now offers a Microsoft Word add-in that automatically generates, summarizes, and proofreads text. Moreover, once a text is drafted, programs like Grammarly offer automated writing corrections and suggestions based on acontextual grammatical rules and word associations. And for years, the squiggly colored lines of Word and other word processing programs have alerted writers to potential spelling and grammar errors and offered corrections at the click of a mouse. In this way, proofreading and editing have increasingly become the purview of AI. But with generative AI like ChatGPT, this influence has expanded to shape activities of writing processes prior to proofreading and editing, including invention, research, and drafting. This earlier intervention is what fuels much of the concern about ChatGPT. While textual arrangement and delivery have for years been outsourced, invention rarely has—until now.
The AI Conversation in Language and Literacy
Computers and writing scholars were quick to offer critical analyses of ChatGPT soon after its release. In the May 2023 issue of Computers and Composition, for instance, Salena Sampson Anderson explained the value and limits of using metaphors such as tool and collaborator to frame students’ interaction with ChatGPT. However, while ChatGPT has become the face of generative AI that relies on LLMs, studies of other forms of AI in language and literacy scholarship are nothing new, even outside computers and writing. Before ChatGPT’s release, AI’s role in language and literacy instruction was already receiving heightened attention. Xinyi Huang et al. (2023), for instance, conducted a bibliometric analysis of 516 papers on language education published between 2000 and 2019 and found that attention to AI increased during this period. They discovered most articles in their corpus addressed AI tools that facilitate automated assessment of writing and tutoring systems for writing and reading.
Like the papers in Huang et al.’s (2023) corpus, scholarship in computers and writing has also studied AI tools that facilitate the automated assessment of writing. For example, Anne Herrington and Charles Moran’s (2001) foundational work on software for automated grading of writing raises concerns about authorship and audience. Via their analysis of WritePlacer Plus and Intelligent Essay Assessor, they argue that AI-driven scoring technology threatens teachers’ jobs, changes students’ conception of writing, and “defines writing as an act of formal display, not a rhetorical interaction between a writer and readers.” They profess that AI that grades writing creates a new writing situation: “writers writing to computers” rather than on or with them (pp. 481, 496; italics in original). Generative AI applications like ChatGPT likewise create a new writing situation. With generative AI, however, the computer becomes the creator rather than just the audience. It handles invention, not just delivery and reception. This attention to writing for and by computer algorithms has continued, including in the 2020 special issue on “Composing Algorithms: Writing (with) Rhetorical Machines” (Beveridge et al., 2020), which helpfully reinforces the rhetorical consequences of the algorithms that drive AI (e.g., Crider et al., 2020; Gallagher, 2020), though this scholarship has yet to explicitly explore AI policy.
Scholars in computers and writing, of course, have a long history of studying AI and its role in writing production and evaluation. In fact, in the 1983 inaugural issue of Computers and Composition, Hugh Burns called for composition scholars, especially those designing software programs, to turn to the field of artificial intelligence. He was prescient both in predicting that “natural language processing and intelligent computer-assisted instruction” would be two areas of significant AI advancement and in reminding us that applications of AI in writing programs have “both good and bad consequences the humanistic composition teacher should consider” (p. 3). Forty years ago, Burns anticipated that in solving certain writing problems AI would introduce new ones. For him, this recognition did not mean turning away from AI but rather realizing that AI’s goal need not—and perhaps should not—be replicating the human brain (p. 4). While Burns does not go so far as to argue for policy, he offers views useful in drafting and implementing policy. From his perspective, for instance, the standard by which to judge generative AI’s effectiveness should not be how well it performs the activities of the human brain or replaces human behavior.
Historical responses to new writing technologies provide helpful context for understanding responses to ChatGPT and similar generative AI. As Dennis Baron (2000) reminds us, panicked reactions to new writing technologies are typical. Baron reports that we usually go through a cycle of response that includes concern and distrust before acceptance. He identifies five stages: The new literacy technology first has a “restricted communication function,” available to only a select few; then that technology is used by a larger population to imitate previous literacy technologies. Next, the new technology is used in new ways that influence the older technology it used to imitate. Then opponents argue against these new uses of the technology as problems of fraud and misuse are made evident. Finally, proponents seek to demonstrate the “authenti[city]” and “reliability” of the new technology so it is more widely accepted (pp. 16–17). Especially pertinent for this study is stage four, when opponents bewail misuses of the technology. The policies analyzed for this study suggest that many journals and publishers are at stage four in their approach to ChatGPT at the time of this writing. Baron illustrates this stage by recounting how, when erasers were first added to pencils, teachers worried students would become lazy and sloppy because they had the opportunity to erase mistakes (p. 31). Similarly, teachers now worry students will become lazy and dishonest because they can have ChatGPT generate prose for them. Academic journals and publishers worry scholars will, too.
Perhaps as a result, early responses to ChatGPT parallel initial responses to Wikipedia. As with Wikipedia, much of the concern about ChatGPT has focused on its use to create textual products—that is, as a tool that can write texts for people, especially students, rather than as a tool that people can use in their process of writing. Thus, as with Wikipedia, many initial responses have been to ban ChatGPT. Its use has been banned by entire countries, including China, Cuba, Iran, Italy, North Korea, Russia, and Syria (Martindale, 2023; Satariano, 2023); by school districts, including Los Angeles Unified School District, New York City Public Schools, and Seattle Public Schools (Johnson, 2023; Rosenblatt, 2023); and by employers, including Accenture, Amazon, Bank of America, Citigroup, JPMorgan Chase, and Verizon, which have forbidden their employees from using it (Sharma, 2023; Wodecki, 2023b). Closer to academia, the International Conference on Machine Learning banned papers including AI-generated content from its 2023 conference (Wodecki, 2023a). When this study was conducted, only one of the journal and publisher policies analyzed went so far as to ban the use of ChatGPT or similar generative AI. As I revise this chapter, none does. However, all forbid listing it as an author. Treatment of Wikipedia has largely changed over time, especially in the transition from banning it to recognizing it as a potentially beneficial part of writing practices in ways beyond just a source to cite (Cummings, 2009; Purdy, 2009). Treatment of ChatGPT and other generative AI may follow the same path as stakeholders recognize the futility of banning it and the need to devise best practices and thoughtful policy.
Analyzing Submission Policies
For the study, I analyzed the content of 11 scholarly publisher and journal submission policies on the use of ChatGPT and other artificial intelligence. As this is a preliminary study, I chose depth over breadth, focusing on a smaller set of policies. Table 2 lists the policies I read and the online locations where I accessed them.
Table 2: ChatGPT and AI Policies Studied
- Association for Computational Linguistics (ACL) Conference
https://2023.aclweb.org/blog/ACL-2023-policy/
- arXiv
https://info.arxiv.org/help/moderation/index.html#policy-for-authors-use-of-generative-ai-language-tools
- Computers and Composition (C&C)
https://www.elsevier.com/journals/computers-and-composition/8755-4615/guide-for-authors
- Elsevier
https://www.elsevier.com/about/policies/publishing-ethics/the-use-of-ai-and-ai-assisted-writing-technologies-in-scientific-writing
- Journal of the American Medical Association (JAMA)
https://jamanetwork.com/journals/jama/pages/instructions-for-authors
- Nature
https://www.nature.com/articles/d41586-023-00107-z
- Oxford University Press
https://academic.oup.com/pages/authoring/journals/preparing_your_manuscript/ethics#Authorship
- Proceedings of the National Academy of Sciences (PNAS)
https://www.pnas.org/post/update/pnas-policy-for-chatgpt-generative-ai
- Science Journals
https://www.science.org/content/page/science-journals-editorial-policies
- Taylor & Francis
https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/
- World Association of Medical Editors (WAME)
https://wame.org/page3.php?id=106
These ChatGPT and AI policies were often embedded in larger author guidelines or submission instructions; I attended specifically to the sections that address AI and ChatGPT. This was a convenience sample made up of policies published in the months following ChatGPT's release in late November 2022. Because I was interested in publishers' and journals' initial responses, the study corpus comprises policies I discovered through web searches from January to May 2023. I selected policies from a range of disciplines to ascertain approaches to authorship across academic fields. However, the sample is not statistically representative or large enough to support conclusions about scholarly publishing writ large or about particular academic disciplines. Still, the findings provide insight into ways in which scholarly journals and publishers (implicitly) construct authorship and writing in their policies on generative AI and ChatGPT.
In addition to paraphrasing how each policy defines authorship, I coded for whether each policy allows or forbids listing generative AI as an author and why. I also coded each policy for whether it allows or forbids inclusion of AI-generated text in submissions, under what conditions, and how such use is to be acknowledged. As a researcher interested in ways in which these policies discuss writing, I classified the policies’ references to writing according to James Britton’s (1982) taxonomy of writing. According to Britton, writing can be classified as transactional, expressive, or poetic, with each type distinguished by its purpose (and by the writer’s role as participant or spectator).1 Transactional writing, for example, communicates information and can be informative, regulative, or persuasive. Expressive writing facilitates learning and construction of the self; it connects the abstract to personal experience. Finally, poetic writing presents writing as art, an object to be enjoyed for its aesthetic properties (p. 155). The policies analyzed for this study privilege the transactional, and, while perhaps unsurprising given the purpose of academic journals and publishers, this framing is limited.
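To make this coding scheme concrete, the sketch below renders it as a simple data structure, populated with one policy (Elsevier) using values reported later in this chapter. The field names are my illustrative reconstruction, not the actual instrument used for the study.

```python
from dataclasses import dataclass

# An illustrative rendering of the coding scheme described above. Field
# names are hypothetical; the Elsevier values reflect findings reported
# in this chapter.
@dataclass
class PolicyCode:
    venue: str
    ai_as_author: bool      # may generative AI be listed as an author?
    ai_text_allowed: bool   # may AI-generated text appear in submissions?
    acknowledgment: str     # how use of AI must be disclosed
    britton_function: str   # "transactional", "expressive", or "poetic"

elsevier = PolicyCode(
    venue="Elsevier",
    ai_as_author=False,
    ai_text_allowed=True,
    acknowledgment=(
        "Statement at the end of the manuscript, immediately above the "
        "references, entitled 'Declaration of AI and AI-assisted "
        "technologies in the writing process'"
    ),
    britton_function="transactional",
)
```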
Using these lenses for analysis, I found that the policies limit authorship to people, allow inclusion of some AI-generated content under certain conditions and citation guidelines, and frame writing as a textual product performing transactional functions.
AI as Author
All policies analyzed for this study agree that AI should not be listed as an author of an academic publication. All base this decision on how they define authorship. These policies assert that authors are people defined by three characteristics: authors possess integrity, they can assume responsibility for content accuracy, and they can be held accountable for research-writing decisions. Because AI chatbots like ChatGPT cannot fulfill these criteria, the policies argue, they cannot be authors.
The first characteristic is that authors are honest. The policies associate authorial activities with a desire for a text to have “integrity,” a term used by six policies: Elsevier, JAMA, Nature, PNAS, Taylor & Francis, and WAME. They define authorship as being “accurate” (e.g., Elsevier), “original” (e.g., Science), and “aware” (e.g., ACL). These policies suggest that authors are people who care about the quality of the texts they write.
The other two characteristics these policies associate with authorship extend from this idea. Because authors have integrity, they take responsibility for the quality of the text and can be punished for failing to do so. The second characteristic is that authors assume accountability for the accuracy of textual material. In other words, according to these policies, not only do authors desire to behave with integrity, but they also actively take responsibility for doing so. For example, arXiv explains that authors must be able to take “full responsibility” for textual content. Elsevier likewise indicates that authors have the responsibility to ensure the “accuracy or integrity” and originality of their work. Following Elsevier’s policy, Computers and Composition, which is published by Elsevier, contends that “authors are ultimately responsible and accountable for the contents of the work.” Similarly, JAMA affirms that authors must be able to “take responsibility for the integrity of the content” generated by other tools. ACL explains that authors are people who have “cognition, perception, agency, and awareness” (p. 7). PNAS agrees, defining authorship by the ability to take “responsibility for the paper.”
The third characteristic follows from the second. Several policies assert that authors not only are people who can be held responsible for their decisions but also are people who can be punished for failing to fulfill that responsibility. For Elsevier, being an author requires the “ability to approve” the text. That is, authors must have the capability to make a judgment about the suitability of the text for publication. According to Nature, taking “responsibility for the content and integrity of scientific papers” includes taking “legal responsibility” for those decisions. In other words, authors must be entities who can suffer legal consequences for including inaccurate content or violating sanctioned source use practices or standards of integrity. Similarly, PNAS explains that authors must be able to “[b]e held accountable for the integrity of the data reported.” For Taylor & Francis, being an author means being “accountable for the originality, validity and integrity” of a publication. The ability to be held accountable starts with the ability to consent to publication and ends with suffering punishment for publishing flawed content, which these policies assert ChatGPT cannot do.
Though not all publisher and journal AI policies explicitly mention all three characteristics, these characteristics are deeply intertwined in the policies. According to these policies, because authors have integrity, they take responsibility for the accuracy and originality of their writing and can be punished for violations. As WAME puts it, authors ultimately (must) have the ability to “understand” what it means to be an author. Taken together, these policies suggest being an author requires a level of metacognition. Authors must know they are authors and understand the ramifications of their authorial decisions.
Inclusion of Generated Content
Similar to their consistent position on generative AI not qualifying as an author, the policies analyzed for this study widely agree about when including AI-generated material is acceptable. At the time of this research, all but one policy allowed AI-generated content in submitted texts, provided that its use was cited appropriately. Only Science Journals forbade the use of AI when writing an article for its journals, declaring that authors must be capable of producing “original” work, which generative AI chatbots like ChatGPT do not produce. As I revise this chapter, however, Science, like the other publication venues, permits authors to submit articles with content from generative AI if that use is cited appropriately.
While not all policies specified how uses of AI should be cited, those that did offered guidelines for appropriate citation. These guidelines entail explicitly acknowledging use of generative AI, usually in the methods section or acknowledgements of an article or chapter. Table 3 provides particulars from the policies studied.
Table 3: How the Policies Studied Direct Authors to Acknowledge Use of AI
Publisher / Journal | How to Acknowledge Use of AI |
---|---|
Association for Computational Linguistics (ACL) Conference | “[E]laborate on the scope and nature of their use.” |
arXiv | Report use in ways “consistent with subject standards for methodology.” |
Computers and Composition (C&C) | Disclose use by “adding a statement at the end of their manuscript in the core manuscript file, before the References” titled “Declaration of Generative AI and AI assisted technologies in the writing process.” |
Elsevier | “[I]nsert a statement at the end of their manuscript, immediately above the references, entitled ‘Declaration of AI and AI-assisted technologies in the writing process.’” |
Journal of the American Medical Association (JAMA) | Provide a “clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer.” |
Nature | Document use “properly” in the methods section or elsewhere, if the text has no methods section. |
Oxford University Press | “[D]isclose” use in the methods or acknowledgements section and in the cover letter to the editors. |
Proceedings of the National Academy of Sciences (PNAS) | “[C]learly acknowledge” use in the materials and methods section or elsewhere if no such section is part of the text. |
Science Journals | AI-generated text is not allowed. |
Taylor & Francis | “[A]cknowledge” and “document” use “appropriately.” |
World Association of Medical Editors (WAME) | “[D]eclare this fact and provide full technical specifications of the chatbot used (name, version, model, source) and method of this application in the paper they are submitting (query structure, syntax).” |
Taken together, these policies dictate that readers are to be made aware of exactly what textual content was written by AI, what kind of AI wrote that content, what input led to that content, and why it was generated. Rather than provide specific guidelines about how to do so, Taylor & Francis and arXiv defer to disciplinary conventions for what these citation gestures comprise and where they appear. This approach suggests AI documentation practices may differ across fields as they develop, an important consideration for postsecondary institutions as they work to craft or update their own academic integrity policies in response to generative AI.
Writing as a Product
As part of their policies about generative AI, these publishers and journals offer frames for what writing is and does. Most policies analyzed for this study treat writing as a product. They focus on the final text submitted for publication, with attention to the accuracy (Elsevier, WAME) and correctness (C&C, WAME) of its content. Their concern is that an error generated by ChatGPT or another generative AI, be it an error in language, content, source, or citation, will not be caught and will be included in the final publication. In other words, these policies are motivated by a desire to prevent the published version of a text from being marred by error, and they place the responsibility for avoiding such error on the author (rather than, say, on themselves to check citations).
Two policies, however, mention writing as a process in explaining why ChatGPT cannot be listed as an author. According to Oxford University Press, for instance, authorship entails “significant contribution to the design and execution” of a study. Being an author entails more than writing the final publication; it requires planning and performing the research. Oxford UP’s policy argues that ChatGPT cannot be an author precisely because it contributes only to the textual product. ACL likewise asserts, “participation in the writing process is a requirement for becoming an author” (p. 3). Its explanation for why AI cannot be listed as an author is based on its assertion that authors must contribute to the writing process, not just the product. ACL, moreover, recognizes multiple ways generative AI may be used throughout the writing process, ranging from assistance with “paraphrasing” and “polishing” an author’s “original content” to producing “low-novelty” templated text to conducting literature searches to generating new ideas and new textual material. According to ACL, writing entails multiple activities during which generative AI might intervene, and it favors using AI for the earlier activities in this list rather than the last two.
Computers and Composition, perhaps not surprisingly as the journal in the study corpus most connected with English studies, framed its policy as the most process-oriented. The first sentence of C&C’s policy reads, “The below guidance only refers to the writing process.” C&C declares that “authors should only use these [AI] technologies to improve readability and language.” In this way C&C frames the writing process as the purview of humans and presents AI’s role as editing the text after it has been written. Process is for people; product is for machines.
Writing as Transactional
In addition to addressing writing primarily as a product, the publisher and journal policies analyzed for this study present writing, in Britton’s (1982) terms, as transactional rather than expressive or poetic. Such policies frame writing’s role as delivering information, as fulfilling a transaction between sender and receiver. Their main concern is that generative AI can communicate incorrect or biased information, and they endeavor to ensure that the information communicated is accurate, as well they should. While unsurprising given the purpose of academic journals and publishers, this framing is limited: these policies do not discuss writing as something to be studied for its aesthetic qualities or something that fosters idea development and self-reflection. An exception is WAME (World Association of Medical Editors), which warns that “the mere fact that AI is capable of helping generate erroneous ideas makes it unscientific and unreliable, and hence should have editors worried.”
These policies focus not only on text accuracy but also on text readability. Indeed, the main role they support for generative AI like ChatGPT is to enhance the readability of the text. For example, C&C’s policy explains, “authors should only use these [generative AI] technologies to improve readability and language” (p. 5). While readability connects to issues of style and can thereby connect to aesthetics, these policies address readability in terms of comprehension, of making the text understood by its audience. At its most extreme, this suggested use of generative AI offers style as something that can be outsourced, privileging writing as completing a transaction. As computers and writing scholars know, such a separation is deeply ingrained in web authoring principles (e.g., the separation of content in HTML from presentation in CSS) but is not always easy or possible in practice.
That these are policies for academic journals and publishers clearly leads, in part, to their focus on transactional writing. Literary, creative writing, or other kinds of journals might place more emphasis on the expressive or poetic. Moreover, I, like Britton (1982), may be guilty of separating these functions too artificially. Writing is never just transactional or just expressive or just poetic. Still, these policies perhaps unwittingly perpetuate the problem to which Britton responded: a failure to recognize that writing also has expressive and poetic functions. Misinformation is not the only consequence of ChatGPT writing for people. Consequences also include a limited notion of writing itself and of the ways in which the tools and technologies of writing inevitably shape the text that is produced.
The Association for Writing across the Curriculum (AWAC) published a statement that, though not included in the corpus for this study because it comes from a professional organization rather than a publisher or journal, is noteworthy for responding to ChatGPT by discussing writing differently than most of the journal and publisher policies do. AWAC explains the loss to learning when generative AI writes for people: writing is “a fundamental means to create deep learning and foster cognitive development”; it is “an intellectual activity that is crucial to the cognitive and social development of learners and writers.” In this way, AWAC presents the stakes of generative AI’s intervention in writing differently. Its concern is less the possibility for error in textual products and more the possibility of a loss in learning and knowledge production when people spend less time doing the “intellectual activity” of writing. AWAC and the policies analyzed for this study have somewhat different purposes, of course. AWAC focuses more directly on pedagogy; the journal and publisher policies in the study corpus focus more directly on scholarship. Still, AWAC provides an alternative response to generative AI like ChatGPT that journal and publisher policies might consider. These policies, in other words, might lament less the possibility of getting in trouble for publishing flawed content and more the possibility of generative AI outsourcing the intellectual work of scholarly writing.
Conclusion
This analysis of journal and publisher AI policy statements in response to ChatGPT reveals that we as computers and writing teacher-scholars still have work to do to circulate more broadly the notion that the technology of writing makes meaning. The policy statements analyzed for this study reinforce that writing is an ethical, human activity. They also construct writing primarily as a transactional activity and as a product to be assessed. These policies center on the concern that generative AI like ChatGPT will fabricate incorrect data or introduce errors that human authors will not review, find, and correct—in other words, that the written product will be flawed. This concern is well founded, to be sure.
But that is not the only—or even the most concerning—problem. With a few exceptions, missing from these policies is consideration of the intellectual growth and knowledge production that are lost when AI writes our prose. The policies focus on what happens to the writing we create over what happens when we no longer create our writing.
It is incumbent on us as computers and writing teacher-scholars to seize this opportunity to evangelize what we already know well: writing makes meaning in the world, and that meaning is shaped by the tools and technologies of writing. The process of writing generates new knowledge, and the product of writing shares that knowledge with others. Academic journals and publishers would do well to promote this view in their policies on generative AI. Along with lamenting the possibility of getting in trouble for publishing flawed content, they should lament the possibility of generative AI outsourcing the intellectual work of scholarly writing—for ourselves, our discipline, and our intellectual property.
Furthermore, given their classification of chatbot text as generated by a writing tool rather than a human author, these policies might also reinforce that all writing tools, including but not limited to generative AI, should be identified or cited in a text. For instance, such tools might be referenced as part of methods sections, particularly for scientific writing that conventionally lists the materials used for research. Though members of the computers and writing community often position writing technologies as objects of analysis for their work, they rarely describe explicitly what word processing software, citation managers, image editors, apps, or other writing tools and technologies they used to compose, create, deliver, and circulate their texts. Perhaps they should. Doing so would make such technologies more visible—and reinforce that writing itself is one of many technologies on which textual production depends. Generative AI will now increasingly be added to that list.
This study is limited by its attention to a convenience sample of a small number of publisher and journal policies published shortly after ChatGPT’s public release, so conclusions cannot be drawn about all such policies. Future work could compare later journal and publisher AI policies with the early policies studied here to identify to what extent they have changed and to consider what those changes mean for our evolving understanding of authorship in a world of generative AI. Future work might also study additional policies to determine whether the views identified in this chapter represent prevailing perspectives. Still, this chapter provides a starting place for understanding the policies that regulate academic publication and what they say about the major foci of our subfield: computers, writing, and writers.
References
Anderson, Salena Sampson. (2023). “Places to stand”: Multiple metaphors for framing ChatGPT's corpus. Computers and Composition, 68, 1–13. doi:10.1016/j.compcom.2023.102778
Barnett, Sofia. (2023, January 30). ChatGPT is making universities rethink plagiarism. Wired. https://www.wired.com/story/chatgpt-college-university-plagiarism/
Baron, Dennis. (2000). From pencils to pixels. In Gail E. Hawisher and Cynthia L. Selfe (Eds.), Passions, pedagogies, and 21st-century technologies (pp. 15–33). Utah State UP and National Council of Teachers of English.
Bear, Julia B., & Collier, Benjamin. (2016). Where are the women in Wikipedia? Understanding the different psychological experiences of men and women in Wikipedia. Sex Roles, 74, 254–265. doi:10.1007/s11199-015-0573-y
Beveridge, Aaron, Figueiredo, Sergio C., & Holmes, Steve. (Eds.). (2020). Composing algorithms: Writing (with) rhetorical machines. Computers and Composition, 57.
Britton, James. (1982). Spectator role and the beginnings of writing. In Marty Nystrand (Ed.), What writers know: The language, process, and structure of written discourse (pp. 149–169). Academic.
Burns, Hugh. (1983). A note on composition and artificial intelligence. Computers and Composition, 1, 3–4.
Chen, Lan, Chen, Xi, Wu, Shiyu, Yang, Yaqi, Chang, Meng, & Zhu, Hengshu. (2023, April 20). The future of ChatGPT-enabled labor market: A preliminary study. arXiv. doi:10.48550/arXiv.2304.09823
Cooper, Kindra. (2021, November 1). OpenAI GPT-3: Everything you need to know. Springboard, 1. https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/
Cotton, Debby R. E., Cotton, Peter A., & Shipway, J. Reuben. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. doi:10.1080/14703297.2023.2190148
Crider, Jason, Greene, Jacob, & Morey, Sean. (2020). Digital daimons: Algorithmic rhetorics of augmented reality. Computers and Composition, 57, 1–17. doi:10.1016/j.compcom.2020.102579
Cummings, Robert. (2009). Lazy virtues: Teaching writing in the age of Wikipedia. Vanderbilt UP.
Cuthbertson, Anthony. (2023, April 5). Germany considers ChatGPT ban. Independent. https://www.independent.co.uk/tech/chatgpt-ban-germany-ai-privacy-b2314487.html
Dehouche, Nassim. (2021). Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3). Ethics in Science and Environmental Politics, 21, 17–23. doi:10.3354/esep00195
Gallagher, John R. (2020). The ethics of writing for algorithmic audiences. Computers and Composition, 57, 1–9. doi:10.1016/j.compcom.2020.102583
Geher, Glenn. (2023, January 6). ChatGPT, artificial intelligence, and the future of writing. Psychology Today. https://www.psychologytoday.com/us/blog/darwins-subterranean-world/202301/chatgpt-artificial-intelligence-and-the-future-of-writing
Gruwell, Leigh. (2015). Wikipedia’s politics of exclusion: Gender, epistemology, and feminist rhetorical (in)action. Computers and Composition, 37, 117–131. doi:10.1016/j.compcom.2015.06.009
Heaven, Will Douglas. (2023, April 6). ChatGPT is going to change education, not destroy it. MIT Technology Review. https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/
Heikkilä, Melissa. (2023, February 7). Why detecting AI-generated text is so difficult (and what to do about it). MIT Technology Review. https://www.technologyreview.com/2023/02/07/1067928/why-detecting-ai-generated-text-is-so-difficult-and-what-to-do-about-it/
Herrington, Anne, & Moran, Charles. (2001). What happens when machines read our students’ writing? College English, 63(4), 480–499.
Huang, Xinyi, Zou, Di, Cheng, Gary, Chen, Xieling, & Xie, Haoran. (2023). Trends, research issues and applications of artificial intelligence in language education. Educational Technology & Society, 26(1), 112–131. doi:10.30191/ETS.202301_26(1).0009
Hussain, Suhauna. (2009). Reddit may finally be cracking down on hate. Los Angeles Times. https://enewspaper.latimes.com/infinity/article_share.aspx?guid=3bb2c9e0-8bed-407c-baf4-d1597f253078
Johnson, Arianna. (2023, January 31). ChatGPT in schools: Here’s where it’s banned—And how it could potentially help students. Forbes. https://www.forbes.com/sites/ariannajohnson/2023/01/18/chatgpt-in-schools-heres-where-its-banned-and-how-it-could-potentially-help-students/
Kerner, Sean Michael. (2023, April). Large language model (LLM). TechTarget. https://www.techtarget.com/whatis/definition/large-language-model-LLM
Kinneavy, James E. (1969). The basic aims of discourse. College Composition and Communication, 20(5), 297–304. doi:10.2307/355033
Lam, Shyong K., Uduwage, Anuradha, Dong, Zhenhua, Sen, Shilad, Musicant, David R., Terveen, Loren, & Riedl, John. (2011). WP:clubhouse? An exploration of Wikipedia’s gender imbalance. WikiSym ’11: Proceedings of the Seventh International Symposium on Wikis and Open Collaboration, 1–10. doi:10.1145/2038558.2038560
Mak, Aaron. (2018, April 12). Reddit CEO clarifies that racism isn’t ‘welcome’ on site, even though it’s allowed. Slate. https://slate.com/technology/2018/04/racist-speech-not-banned-on-reddit-ceo-claims.html
Martindale, Jon. (2023, April 12). These are the countries where ChatGPT is currently banned. Digital Trends. https://www.digitaltrends.com/computing/these-countries-chatgpt-banned/
Newton, Derek. (2023, January 24). The big, profitable education race to detect ChatGPT. Forbes. https://www.forbes.com/sites/dereknewton/2023/01/24/the-big-profitable-education-race-to-detect-chatgpt/
Purdy, James P. (2009). When the tenets of composition go public: A study of writing in Wikipedia. College Composition and Communication, 61(2), W351–373. https://www.ncte.org/library/NCTEFiles/Resources/Journals/CCC/0612-dec09/CCC0612When.pdf
Rosenblatt, Kalhan. (2023, January 5). ChatGPT banned from New York City Public Schools’ devices and networks. NBC News. https://www.nbcnews.com/tech/tech-news/new-york-city-public-schools-ban-chatgpt-devices-networks-rcna64446
Satariano, Adam. (2023, March 31). ChatGPT is banned in Italy over privacy concerns. The New York Times. https://www.nytimes.com/2023/03/31/technology/chatgpt-italy-ban.html
Sharma, Mukul. (2023, March 22). ChatGPT ban? Companies formulate new policies to regulate use of artificial intelligence. WION News. https://www.wionews.com/technology/chatgpt-ban-companies-formulate-new-policies-to-regulate-use-of-artificial-intelligence-574401
Weissman, Jeremy. (2023, February 8). ChatGPT is a plague upon education. Inside Higher Ed. https://www.insidehighered.com/views/2023/02/09/chatgpt-plague-upon-education-opinion
Wodecki, Ben. (2023a, January 6). ChatGPT and AI text generator tools banned by ML event. AI Business. https://aibusiness.com/nlp/chatgpt-and-ai-text-generator-tools-banned-by-ml-event
Wodecki, Ben. (2023b, February 24). JPMorgan joins other companies in banning ChatGPT. AI Business. https://aibusiness.com/verticals/some-big-companies-banning-staff-use-of-chatgpt
Journal and Publisher AI Policies in Corpus
Boyd-Graber, Jordan, Okazaki, Naoaki, & Rogers, Anna. (2023). ACL 2023 policy on AI writing assistance. Association for Computational Linguistics (ACL) Conference. https://2023.aclweb.org/blog/ACL-2023-policy/
arXiv. (n.d.). Policy for authors’ use of generative AI language tools. arXiv Content Moderation. https://info.arxiv.org/help/moderation/index.html#policy-for-authors-use-of-generative-ai-language-tools
Computers and Composition. (2023). Declaration of generative AI in scientific writing. Guide for Authors. https://www.elsevier.com/journals/computers-and-composition/8755-4615/guide-for-authors
Elsevier. (2023). The use of AI and AI-assisted writing technologies in scientific writing. https://www.elsevier.com/about/policies/publishing-ethics/the-use-of-ai-and-ai-assisted-writing-technologies-in-scientific-writing
Journal of the American Medical Association. (2023). Authorship criteria and contributions. JAMA Network. https://jamanetwork.com/journals/jama/pages/instructions-for-authors
Oxford University Press. (n.d.). Authorship. Ethics. https://academic.oup.com/pages/authoring/journals/preparing_your_manuscript/ethics#Authorship
Proceedings of the National Academy of Sciences. (2023, February 21). The PNAS Journals outline their policies for ChatGPT and generative AI. https://www.pnas.org/post/update/pnas-policy-for-chatgpt-generative-ai
Science Journals. (2023). Authorship. Science Journals: Editorial Policies. https://www.science.org/content/page/science-journals-editorial-policies
Stokel-Walker, Chris. (2023, January 18). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613, 620–621. doi:10.1038/d41586-023-00107-z
Taylor & Francis. (2023). Taylor & Francis clarifies the responsible use of AI tools in academic content creation. https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/
World Association of Medical Editors. (2023, May 31). Chatbots, generative AI, and scholarly manuscripts. https://wame.org/page3.php?id=106