AI-generated triptych: On the left is a black-and-white drawing of two men facing each other at a table; the man on the right has no hands, and his arm ends in what appears to be a pen. The background of this left-most image is an unreadable manuscript of some kind. In the center is a photorealistic image of a white man with glasses and a white woman discussing a page in front of them. On the right is a blurry image of a flyer with unreadable dark blue text on a beige background, featuring cartoon-like pictures of a man working on a laptop and a sketch of a robot in the margin.

Drafting a Policy for Critical Use of AI Writing Technologies in Higher Education

Daniel Frank and Jennifer K. Johnson, University of California, Santa Barbara

Introduction

In light of increasing calls for clear policy to guide the mediation and use of AI writing tools and large language models (LLMs) such as ChatGPT in higher education and writing studies in particular, we, Dr. Daniel Frank and Dr. Jennifer Johnson—two writing teachers in the UCSB Writing Program—present a dialectical conversation reflecting our evolving thoughts, arguments, and pedagogical approaches to the incorporation of LLM technology in the writing classroom, which culminated in the writing and publishing of the UCSB Writing Program's Policy on LLMs and AI Writing. This policy addresses the conscious and critical use of AI writing technologies in writing and research practices, which is increasingly important as AI tools gain prominence in education. The policy aims to provide transparency, ethical guidance, and support for faculty and students in incorporating LLM tools in their work.

In this chapter, we reflect on the goals, intentions, and critical considerations at play during the policy's drafting process, which included two faculty workshops that yielded a spectrum of perspectives about the challenges and transformative potential of LLMs (Adiguzel et al., 2023; Rudolph et al., 2023; Wu, 2023) and the need for ethical and critical use of AI technologies in the classroom (Halaweh, 2023; Kasneci et al., 2023; Wu, 2023). We offer examples of how we have incorporated this policy and talked about this technology in our classes, facilitated critical discussion about the technology with our students, and guided them in applying AI tools ethically and transparently in their writing.

This chapter ends with an embedded video drawn from a series of brief faculty interviews; short clips highlight their perspectives on the integration of AI writing tools into the classroom.

A Path to Policy

In this section we write about and quote articles concerning the launch of ChatGPT, which precipitated the conversation at the time. Much of the thinking here was shaped in response to the release of OpenAI's ChatGPT, running the GPT-3.5 model, in 2022. Today there is a wide range of other large language models. We invite you to read any reference to ChatGPT as pertaining equally to other LLMs.

Dan:

My research focus and pedagogical grounding were never far out of orbit from the conversations that the ChatGPT classroom question provoked. Both personally and professionally, I've always been interested in the 'new thing.' I was always a computer nerd, and I grew up side-by-side with the internet: when I was a kid, my dad dialed me in to the nascent internet platforms CompuServe and Prodigy, where I read and shared stories on early-stage internet bulletin boards. I learned to imagine, share, code, communicate, and collaboratively play in online MOOs and MUDs—essentially text-based, virtual, online worlds. In my undergraduate career, Myspace and AIM were all the rage. I got to see the internet as we know it take shape right as I was starting off as a writing tutor and learning the basics of composition and teaching. I started building the connections: I saw the internet as a vibrant space for sharing, thinking, collaborating, learning, and writing.

That picture was fleshed out as I worked on my master's thesis and read the work of Mimi Ito (2010), Troy Hicks (2010), Cindy Selfe (2007), and Kathleen Yancey (2009). Together they painted a picture of the novel ways kids were working, learning, and collaborating on the internet. The "net generation" was sharing, creating, and collaborating by making mods for their favorite videogames, remixing YouTube videos, or participating in fan-fiction storytelling communities, and in doing so they were learning, and they were doing so according to values completely alien to the assumptions of traditional kill-and-drill pedagogies. Learning on the internet was bottom-up, communicative, collaborative, passion-driven, and multimodal, and in case after case, students who were ignored by the traditional educational apparatus found their voices, passions, and skills through the endless discourses made possible on the net. My guiding research question became: How do students work, learn, and create on the internet, what values inform that growth, and how can we tap into those values in our own classrooms? My eventual dissertation (2018) combined these pedagogical questions with Seymour Papert's theory of constructionism (1991), which argues that learning happens best in unscripted little ways as students engage in "passionate affinity spaces," environments of constructive play and experimentation on projects that capture their interests and passions. I detailed a pedagogy I called "Microworld Writing," which was inspired by the time I spent writing online with others in the MUDs and MOOs I grew up in. So, all that is to say, I was already keyed in to the intersections of learning, technology, and computer-mediated writing. The boom in AI writing was right in my wheelhouse.

When ChatGPT was released in November 2022, it ignited the discourse among teachers all over the world. What is this technology? What can it do? How will we teach our students if they can just have this "AI" do all of their work for them? Was this the death of the essay? Of homework? Of education altogether? This was a moment that needed a response.

I had been keeping an eye on GPT-powered writing in the months leading up to the release of ChatGPT. From 2020 to 2022, large language model (LLM) writing technology was generally limited to producing blogs and copy for marketing material, confined to a few startups such as Jasper.ai, Copy.ai, and Writesonic. People were also playing directly with GPT models in OpenAI's playground, and there were communities devoted to sharing prompts and settings to make the technology work. OpenAI's GPT-3 model, released in 2020, marked a great advancement in the technology. The only thing I saw holding it back was interface: one had to have a good deal of computer literacy to coax anything valuable out of these models. I noted at the time that if a powerful interface came around that made thinking about and working with this technology much more transparent, it could really take off. I warned a colleague then that AI writing was coming, and that as writing teachers, we were going to have a lot to deal with.

I was right on both counts. It turned out that all GPT-3 really needed to take off culturally was an accessible interface. That interface turned out to be conversational, arriving alongside a moderate advancement of the model from GPT-3 to GPT-3.5. The premise of ChatGPT was quite simple: it allowed people to interface with the GPT-3.5 model through conversation. Indeed, once it became clear what could be produced by simply talking to the bot, news of this technology spread like wildfire. Having carefully followed the discourse from day one, I knew I was in a position to help my colleagues get a sense of this technology, what it meant, and how to approach it.

Takes at that time ranged from amazed to terrified. My own experiences matched some of these early reactions. I remember the feeling of awe as I asked ChatGPT for a full syllabus for a class I wanted to run the next quarter. As paragraphs of writing flowed in, complete with schedules and even a reading list, a chill ran down my spine. After more careful reading, however, I could recognize the simple, over-structured prose that GPT writing tends to produce, and I definitely started to see the cracks and the seams when I tried to look up the reading list it had seemingly miraculously offered. The authors were real and the right names for the field, but the books in the list didn't actually exist. Around this time I had a long conversation with a colleague of mine, Dr. Nathan Riggs, who had been working with and thinking about AI within the humanities for years. "The issue here is semantics vs. syntax," he told me. "This is doing the latter, but it can't do the former. You should think of it as little more than souped-up Mad Libs."

A reading from Ian Bogost (2022) at that time also helped me calibrate my approach. In his article "ChatGPT Is Dumber Than You Think," Bogost put forward the idea that this technology might produce the image of intelligence, but it was just putting together language patterns without a deeper understanding of what it was writing. Bogost found an array of ways in which the writing produced by this technology left much to be desired: while fluent, he argued, the writing was formulaic, prone to errors, and, perhaps worst of all, boring. The key, then, would be in asking our students for more than what the AI could currently give.

This advice was challenged as the technology grew in complexity, accuracy, and flexibility over the following months. But Bogost ended his piece on a closing note that really helped shape my approach to and conceptualization of the tool: he suggested viewing ChatGPT as an aesthetic instrument for manipulating, playing with, and thinking about language, not for answering questions. It can probe the "textual infinity" of digitized information:

GPT and other large language models are aesthetic instruments rather than epistemological ones. Imagine a weird, unholy synthesizer whose buttons sample textual information, style, and semantics. Such a thing is compelling not because it offers answers in the form of text, but because it makes it possible to play text—all the text, almost—like an instrument. (Bogost, 2022)

ChatGPT, he argued, wasn’t an "intelligence"—it was a language playspace, turning the stuff of our discourse into algorithms that can be swapped, experimented with, combined, and remixed. This idea formed the core of how I started to talk about this technology in my classroom.

I sent an email to my colleagues over our listserv. I wanted to give them a primer to help them understand what this tool was and what it wasn't. In the 'What can it do' section, I spoke of excited tweets that showed the technology producing what seemed at the time to be marvels. This was important: this really was something powerful, and new, and exciting, and it couldn't simply be dismissed. But even more importantly, I needed to share the ways in which this technology was limited. It was there that we as teachers would need to carve out our educational approach. The primer initiated a passionate and continuing discussion across the department.

Jennifer:

Unlike Dan, I am not an early adopter of new technology; in fact, friends and relatives have on more than one occasion jokingly accused me of being a bit of a Luddite. When the web and, later, social media first came on the scene, I was initially deeply suspicious of these tools and how they might impact human activity and connection. While I did eventually dabble with Myspace and was pleased when it enabled me to reconnect with old schoolmates, I was hardly a steady user of the platform. It wasn't until the rise of Facebook that I really became invested in the possibilities of social media and began to be entangled by it, albeit still as a relatively casual user.

Yet in early December of 2022, I happened upon Stephen Marche's "The College Essay Is Dead," published in The Atlantic, and my curiosity about generative AI was piqued. I wondered if Marche could really be right and if my career teaching university writing, which I had spent so long preparing for and which I deeply love, was indeed facing an existential threat. Was it really possible that these tools could eclipse writing instruction?

When Dan first started sharing his thoughts with our department about how LLMs such as ChatGPT could (would?) upend both our teaching and our world, I became captivated. It was apparent to me that both Marche and Dan were right about these tools' potential, and I could clearly see that I would need to modify my teaching as quickly as possible to account for them. At the same time, Dan's emails to our faculty and Ian Bogost's "ChatGPT Is Dumber Than You Think" (2022) assured me that, given the way in which the models function—by continuously selecting the next most likely word—they were limited in their capabilities.
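To make that "next most likely word" mechanism concrete, here is a toy sketch in Python. It uses a tiny hand-built bigram table in place of a neural network, so it illustrates only the selection principle, not how GPT systems are actually implemented; the words and counts are invented for the example.

```python
import random

# A toy "language model": for each word, the words that might follow it,
# weighted by how often they appeared in some imagined training text.
bigram_counts = {
    "the":     {"essay": 4, "student": 3, "tool": 2},
    "essay":   {"is": 5, "argues": 2},
    "is":      {"formulaic": 3, "boring": 2, "dead": 1},
    "student": {"writes": 4, "is": 2},
}

def next_word(word):
    """Pick a continuation in proportion to its observed frequency."""
    options = bigram_counts.get(word)
    if not options:
        return None  # no known continuation: stop generating
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, max_len=6):
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g., "the essay is formulaic"
```

Nothing in the table "knows" what an essay is; the program only replays statistical patterns, which is the limitation Bogost and others were pointing to.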

I quickly concluded that the models were constructing texts without giving much attention to audience, purpose, or context, which we in writing studies regularly teach our students are necessary considerations in the development of rhetorically effective prose. Despite some of the worries expressed in the popular press and by colleagues that these tools would enable students to bypass their work entirely, I noted to a colleague that it did not seem likely to me that they could produce texts that would pass muster in our UC classrooms. At the same time, I could see the tools’ potential for inviting students to grapple with rhetorical concerns. My sense of these tools' pedagogical potential became the basis of my thinking about how I might incorporate generative AI in my teaching.

Still, my experience was the opposite of Dan's in that I did not anticipate the sudden entrance of LLMs into everyday life AT ALL. Prior to Dan's emails to our department and the sudden onslaught of articles in the popular press about the perils and promises of this new technology, I had no clue that LLMs were even a thing or that they had been in development. But once they became publicly available in November 2022, I quickly began reading everything that crossed my radar about them and set myself to considering how or to what extent I could use them in my classes. While I was admittedly alarmed by the suggestions that they would run my colleagues and me out of a job and that students would no longer need to learn to write, I mostly focused on the possibilities for learning and engagement that they seemed to offer.

Also unlike Dan, I did not immediately test the tools for myself. In fact, for all the familiarity my reading had given me with ChatGPT, it was several months before I actually tried it out. Once I did, I became even further intrigued, both by its abilities and by its limitations. It became apparent to me that, given the tool's inherent inability to create new ideas or textual constructions, users would have to carefully construct prompts in order for it to produce anything valuable. My initial instinct, therefore, was to try to show my students the ways in which the tools could not produce effective texts—at least not without a lot of time and effort put into both prompt engineering and revising whatever the tools would spit out.

While at this point I was not yet ready to invite my students to embrace ChatGPT in their work for my classes, I was starting to name the elephant in the (class)room, especially after reading Owen Kichizo Terry's "I'm a Student. You Have No Idea How Much We Are Using ChatGPT," published in The Chronicle of Higher Education in May 2023. I showed my classes samples of AI-generated text—including a relatively clunky course syllabus that I had used ChatGPT to create—and invited them to consider to what extent these texts were achieving their rhetorical goals. In every class I teach, I introduce students to the notion of the rhetorical triangle and the relationship between purpose, audience, and writer, and in the spring of 2023 I realized that conversations about LLMs in fact lent themselves to these discussions. Along these same lines, and in anticipation of my students' and my own further engagement with these tools down the road, I also began suggesting that my students test them out for themselves.

When I broached the topic, some of my students readily admitted to using LLMs for their classes, but most indicated that they either had not used them at all or that they had only played around with them for fun. Interestingly, a majority of my first-generation college students told me that they had not even heard of ChatGPT or any of the other large language models, which seemed ironic, given the warning that Liang et al. (2023) and others have given us about this demographic being unfairly accused of using generative AI when they submit effective texts in their coursework.

All of this is to say that when Dan began reaching out to our department to share with us what these tools could and could not (yet) do, I was open and ready for his input, and I found myself wanting to learn and know more. Because I tend to avoid embracing cutting-edge technology, this position was both new to me and somewhat surprising. But I became intrigued by the ways in which this particular tech could spark rich conversations about textual construction and effectiveness, and in turn I became eager to explore it further, both within and outside of the classroom.

Dan:

The primer I created and shared was well received, and my colleagues appreciated the ideas that helped orient them through the contours of this technology. I knew even then that the core assumption of my approach—that we could find pedagogical exigency in the weak spots of GPT output—wasn't future-proof. I had already seen ChatGPT advance in capability and complexity as it moved to the 3.5 model and was under no illusions that the advancement would stop there. Indeed, it didn't; after a few months, OpenAI released its GPT-4 model, which boasted advancements across the board: more complex and fluent writing, better context awareness, fewer hallucinations. In addition, competitors and sibling models joined the race. Microsoft's Bing Chat was revealed to run a version of the GPT-4 model while also folding internet searches into its input and output. Google announced its Bard model (later rebranded as Gemini). Anthropic's Claude also joined the fray. Each bot brought an array of specialties and served to advance the realm of what was possible.

Even so, at this time of writing, I still don't find an unmediated approach—just asking, for instance, for a paragraph or a whole essay without establishing a thorough and rhetorical context—to produce anything that's worth reading, even with steady increases in the technology's capabilities. The pedagogical exigency still exists. An interesting potential pedagogical side effect of the fact that basic, uninteresting, formulaic writing can perhaps now be automated is that we no longer have to settle for it in our classes. We can aim at more. Indeed, over the quarter following ChatGPT's appearance, I found myself able to—needing to—explore higher-level concerns in my classrooms. I spent more time talking about style, pacing, and tone. I spent more time discussing with my students what it means to write, to develop a voice, to hone ideas, to contribute to the conversations and the discourses around them. I was in a position to be pickier and more complex in my feedback as well. For instance, when a student turned in a paper that I suspected relied heavily on ChatGPT, I pointed out what felt to me like a 'clunkiness' in its adjectives and a somewhat overwrought tone. The student appreciated my frank and specific feedback, and his final draft was much better for it: the tone was much more developed. It felt complex, specific, unique, authentic.

I shared that anecdote with a colleague at the Computers and Writing conference last June, and in reply he mused, "Well, we still don’t know to what extent he still used ChatGPT in the final product, but it’s clear that something valuable happened there."

Something valuable: we’re at a point now where we might have to go back to the drawing board on some of our fundamental pedagogical assumptions. We have to look back at the very question of what it means to learn. Where do we draw the lines between expecting our students to hold knowledge, by themselves, and knowing how to find, produce, evaluate, articulate, and/or work with knowledge, utilizing the ranges of tools available to them? What do students need to be able to know, and do, in what contexts? What is the range of skills required to be a critical, communicative, effective, and creative individual in the 21st century? Wu (2023) argues that the rise of generative AI like ChatGPT necessitates re-examining fundamental assumptions about teaching and learning. As Wu states, "it is important to acknowledge the ongoing transformations raised by ChatGPT, which is rapidly revolutionizing the process of learning and teaching. With its quiet yet profound impact, Generative AI is subtly influencing the trajectory of education’s future" (p. 6). Mogavi et al. (2023) concur that educators must reconsider learning goals and objectives in light of AI tools like ChatGPT, stating "As students increasingly rely on AI tools for support, educators must ensure that learning outcomes harmonize with this evolving landscape" (p. 46). Chan and Hu (2023) agree that the rise of generative AI necessitates rethinking policies, curricula and teaching approaches, arguing that "higher education institutions should consider rethinking their policy, curricula and teaching approaches to better prepare students for a future where GenAI technologies are prevalent" (p. 14).

These questions aren't optional. It's not enough to simply push the technology aside, because that approach fails to consider the flexibility of the tool and the range of its outputs. While interaction with this tool could be minimal, and a student could ask, for example, for an entire essay, it needs to be understood that this is just one (ineffective) way of interacting with the technology. If we are guided by Bogost's closing idea of the tool as an algorithmic language playspace, we can realize that the tool can be used at nearly any level of rhetorical and authorial consideration: through inputs and outputs, students can interrogate and develop writing across genres, tones, and voices, at the level of the paragraph, the sentence, or even word by word. Such an approach is only possible, however, if both teachers and students come to understand that the tools are not actually "artificial intelligences," no matter how impressive initial experimentation with them might be. There is no deeper awareness beneath the output of the tool, no critical consideration or rhetorical intent: that has to come from the student. That can still be—still needs to be—exercised and developed by the student.

Second, detection doesn't work. Such an approach fails to understand that "AI writing" is not a single identifiable entity. While default, unmediated language output began to reveal certain recognizable characteristics, such as over-structured prose and a predilection for summarizing sentences and paragraphs, it needs to be understood that these tools can work with language as clay, molding it across a range of genres, tones, and purposes. Any framework designed to catch the common patterns of AI writing would also—or instead—catch the overuse of standard language structures and conventions. Thus, there will inevitably be false positives as student writing is flagged as "cheating" or "plagiarism," and research has revealed not only that this is inevitable, but that English language learners, who tend to produce this highly structured prose, are the most vulnerable to this approach (Liang et al., 2023). Sadasivan et al. (2023) add that "as language models become more sophisticated and better at emulating human text, the performance of even the best-possible detector decreases" (p. 11). For advanced enough language models, "even the best detector can only perform marginally better than a random classifier" (p. 20).
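To see why those false positives are structural rather than incidental, consider a toy detector. Real detectors typically score how predictable a text looks to a language model; the crude stand-in below simply rewards common words, which is enough to reproduce the failure mode described above. The scoring function and threshold are invented for this sketch and do not represent any actual detection product.

```python
# Toy illustration of why predictability-based AI detection misfires.
# A real detector scores text with a large language model; this crude
# stand-in just rewards very common words. (Hypothetical scoring, not
# any actual detector's algorithm.)

COMMON = {"the", "is", "a", "of", "to", "in", "and", "this", "that"}

def predictability(text):
    words = text.lower().replace(".", "").split()
    return sum(w in COMMON for w in words) / len(words) if words else 0.0

formulaic_human = "This essay is about the topic. The topic is important to the world."
idiosyncratic_human = "Grandma's kitchen smelled like cardamom and burnt arguments."

# A learner writing careful, conventional English scores as "predictable"
# and gets flagged (a false positive), while unusual prose passes.
for sample in (formulaic_human, idiosyncratic_human):
    flagged = predictability(sample) > 0.4  # hypothetical threshold
    print(flagged, repr(sample))
```

The highly conventional sentence trips the threshold even though a student wrote it, while the quirky one sails through; and as models get better at emulating human text, the gap such scores rely on keeps shrinking, which is exactly the trend Sadasivan et al. describe.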

Third, dismissal isn't effective either. The release of ChatGPT marked the beginning of widespread adoption of LLM tools. Ethan Mollick argued forcefully in March 2023 that covert use of the tool was already much more common than people might have thought. Mollick suggested that the obvious cases of "AI writing" evidenced only lazy or unedited interaction with the tool, and that much more carefully AI-constructed language was appearing all around, across all fields and in every class. This technology was here, it was being used, and simply ignoring it wouldn't make that go away. In light of this, my argument became clear: we need to address this technology, let students know what the tool can and can't do, make clear that there is a range of ways to interact with it that fall across a spectrum of both ethics and effectiveness, and teach students to be mindful and critical when engaging with it. None of that can happen if the tool is lumped in with other forms of "cheating" and "plagiarism" and swept into the shadows.

If it is unavoidable that GPT technology becomes part of the writing process, then it must also become part of the teaching process: we have to engage with it. Fortunately, discussing, working with, learning about, and critically assessing the technology can itself be excellent pedagogy, and GPT technology can bring much to the classroom, both in the critical and rhetorical discussion of how it works and in the evaluation of what it produces (see Chan & Hu, 2023; Domenech, 2023; Meyer et al., 2023; Mogavi et al., 2023; Mollick & Mollick, 2022; Rahman & Watanobe, 2023; Rudolph et al., 2023; Sharples, 2023).

With these points in mind, in April 2023 I delivered a presentation to my colleagues to forward and discuss these important points and areas of consideration. The presentation addressed two perspectives needed in education: what teachers need to understand about AI writing tools, and what students must grasp to use them effectively. I wanted to push teachers beyond simple detection-and-prohibition approaches and help them see that AI tools like LLMs represent a paradigm shift in writing technology. The presentation focused on how AI can serve as a valuable "language toy" for developing meta-writing skills, while acknowledging its growing role in professional and academic contexts. For students, my focus was on developing critical awareness of AI's capabilities and limitations, understanding its proper place in the writing process, and maintaining authentic authorship while leveraging these new tools responsibly. The presentation points are scripted here.

Jennifer:

The presentation that Dan cites above resonated deeply with me, and I responded not only by rethinking my pedagogical approach but also by recognizing that our program needed an AI policy statement in order to provide clarity and promote best practices for students and faculty alike. One of my responsibilities in our department is training our new TAs to teach our first-year writing course, and I was keenly aware that, as new teachers of writing, the TAs in particular were going to need some guidance about how to deal with this new technology in their classes. I approached Dan and asked him if he would be open to working together on developing such a statement and suggested we convene an ad hoc committee to draft it. His instinct was that, with our Department Chair's blessing, we could move more quickly and nimbly by drafting it ourselves and then seeking feedback on the draft from the faculty in our department.

Dan began drafting, and together we shaped a first draft of a potential "AI Writing Policy," which sought to give both teachers and students a range of important ideas to keep in mind when thinking through their approaches to the technology, either in their classes or in their work. We wanted to provide guidance, steer teachers clear of knee-jerk reactions to the technology, help teachers think through the possible advantages and risks, and give them language to share with their students. Mindful of the range of faculty responses, we wanted to arm our colleagues with the most essential concepts and then create space for them to carve their own paths in terms of their classroom policies.

Once we were satisfied with our rough draft, we called a meeting so that we as a program could discuss and consider it. Faculty responses to the draft were, in general, highly engaged, constructive, and progressive. The hour went by quickly as we grappled with important questions of how to talk about this technology in the classroom and how to encourage students to think about it critically, as a way to help them develop their own voices and thinking rather than treating the tool as a crutch or a shortcut. In terms of the policy itself, it was suggested that this range of ideas for both students and teachers might be better codified through a series of major key points forming the pillars of the approach, which could then be unpacked through smaller pieces of advice for both teachers and students. We readily agreed and subsequently revised the document to incorporate this change and other key points that arose from the conversation.

UCSB Writing Program AI Policy

After much back and forth, the language of the final draft of the Writing Program AI Policy read as follows:

The UCSB Writing Program recognizes the swift growth and widespread use of artificial intelligence (AI) writing technology such as large language models (LLMs) and chatbots. As writing and rhetoric specialists committed to preparing students to write for academic, professional, and civic engagement, we emphasize the importance of rhetorical, communicative, and continual engagement with developing writing throughout the process of composition. Given the expanding role that large language models will undoubtedly play in our students' lives, we encourage highly mediated, critically-aware, and transparent use of AI writing technology, focusing on four key policy points:

  1. Integrating AI as one of many supportive feedback tools: We advocate for the use of AI writing technology situated within a range of collaboration and communication classroom activities, tools, and resources, to complement students' critical thinking and creative expression through continual feedback, suggestions, and insights, along with regular drafting, peer review, and feedback from instructors.
  2. Promoting academic integrity and honesty: We expect students to maintain academic integrity and honesty while using AI writing technology, acknowledging any and all assistance received from these tools. To foster a culture of responsible AI usage, we encourage teachers to:
    • Understand that AI language tools involve a range of uses across a spectrum of authorial agency and effort. Communicate clear guidelines and expectations regarding the use of AI writing technology in the classroom; be transparent about which assignments, and which parts of the writing process, may or may not include the use of AI writing tools; and stress the need for proper, clear, and continual attribution of any use of AI writing tools.
    • Instill in students the importance of outside research for double-checking all information produced by AI writing tools, understanding their tendency to "hallucinate" incorrect or fabricated information. Students must research, revise, verify, and take ownership of all output mediated by AI writing tools.
  3. Fostering critical thinking about AI writing technology: We aim to instill in students an awareness of the critical implications of using AI writing technology, such as potential biases and fairness concerns. We encourage teachers to stimulate classroom discussions on AI ethics, including potential biases in AI output and data privacy. Impress upon students their responsibility to nurture and protect their individual authorial voice, ideas, and intellectual property as they critically assess the output of any AI writing tool.
  4. Exercising caution with AI detection tools: We emphasize being aware of the significant challenges and risks associated with the use of AI detection tools. We encourage teachers to rely on their expertise, judgment, and a diverse range of assessment strategies to accurately evaluate and grade students' work.
    • Understand that AI detection tools are not always accurate, are easily fooled, and are prone to producing "false positives" which may involve severe and undeserved consequences to students.
    • Be aware of the potential risks associated with AI content detectors and other AI tools used for assessing student assignments, such as the possibility of student content being stored, accessed, or used for AI development and training purposes.

To ensure the responsible incorporation of AI writing technology in the classroom and its ethical use, we have provided a range of strategies and guidelines for both teachers and students. These encompass suggestions for teachers who wish to incorporate critical engagement with AI writing technology, such as ChatGPT, into their writing courses, as well as guidelines for students using AI writing technology in their coursework.

Suggestions for teachers who wish to incorporate critical engagement with AI writing technology, such as ChatGPT, into writing courses:

  • Inform students of AI's wide range of potential applications that fall along a spectrum of authorship. AI might be prompted to serve as co-author, editor, formatter, paraphraser, phrase-level thesaurus, word-level thesaurus, grammar aid, peer brainstorming partner, audience member, tutor, etc. Which activities might be permitted at which stages of the writing process?
  • Promote experiments with examples of rhetorical strategies by prompting the AI to incorporate authorial voices, organizational strategies, alliteration, the use of logical appeals, tempo, hooks, fragments, internal rhymes, etc. Help students identify and evaluate the application of these rhetorical moves.
  • Stimulate discussions about writing processes, strategies, and ethics through discussions of AI writing technology. Work with your students to generate diverse examples, compare and contrast them with student work, and examine their strengths and weaknesses collaboratively.
  • Encourage student creativity and curiosity by leveraging AI writing technology to create prompts, topics, or questions for exploration. Challenge students to interrogate how AI writing technology can help them compose pieces across various genres, styles, and perspectives.
  • Invite students to utilize AI to generate text in specific genres in order to recognize and identify genre conventions and reflect upon the role of audience, purpose, and context in developing rhetorically effective prose.
  • Encourage students to compare AI-generated text with human-generated text to see how individual agency, voice, and ethos impact text.
  • Highlight the necessity of integrating AI writing tools into the research process with caution, given that LLMs can "hallucinate" false facts, statements, or sources. Urge students to cross-check AI-generated information and develop critical appraisal skills to maintain the credibility and precision of their work.
  • Examine and address potential biases and fairness concerns that may arise from AI writing technology, including the perpetuation of stereotypes or the exclusion of specific perspectives. Promote critical thinking and discussions to recognize and counteract biases in AI-generated content.

Guidelines for students using AI writing technology, such as ChatGPT, in writing courses:

  1. Seek Guidance from Your Professor: Until or unless the University initiates a policy regarding the use of these tools, professors may have varying policies regarding the use of AI writing technology in the classroom, so it is important to defer to your professor's guidance. Be sure to ask your professor about their policy on using AI writing tools like ChatGPT, and adhere to their requirements and expectations when incorporating these tools in your coursework.
  2. Acknowledge the Use of AI: If you incorporate ideas, text, or inspiration from AI writing tools in your work, be sure to acknowledge their use in your author's notes or attributions.
  3. Think Critically: When using AI writing tools like ChatGPT, remember that they are not perfect and may not always provide accurate or relevant information. They are prone to "hallucinating" convincing, yet incorrect, information. It is essential to critically evaluate the output and cross-check any facts, claims, or sources provided by the tool. Remember that these tools don’t "know" things; they just return the word patterns they were trained on.
  4. Protect your Rhetorical Sovereignty: While AI writing tools can be valuable in providing inspiration or generating ideas, it is crucial to ensure that your own voice, creativity, and innovation are not surrendered to the AI. Use these tools as a starting point or a supplement, but not a replacement, for your own thinking and writing.

Resources and Reading

  1. ChatGPT Resources for Faculty – University Center for Teaching and Learning (pitt.edu)
  2. Academic experts offer advice on ChatGPT (insidehighered.com)
  3. Don’t Ban ChatGPT in Schools. Teach With It. (nytimes.com)
  4. A Teacher's Prompt Guide to ChatGPT aligned with 'What Works Best' (usergeneratededucation.wordpress.com)
  5. My class required AI. Here's what I've learned so far. (oneusefulthing.org)
  6. AI Text Detectors

This policy was born from weeks of discussion, back-and-forth emails, and Google Doc comments. Its development coincided with the Writing Program's grappling with the very conception of what this technology is and what it means to us as writers, pedagogues, and teachers.

Taking it to the Classroom

Jennifer:

In addition to spurring my involvement in the development of the policy statement, Dan's presentation also marked a turning point in my pedagogical approach to the rise of LLMs: while I was initially committed to showing students how these models would not meet their needs, my thinking quickly evolved to embrace what Dan had been peddling all along. As of summer 2023, I began encouraging my students to use ChatGPT or similar tools at any or all stages of their writing processes, so long as they commit to later reflecting on how they used the tools, what they yielded, what revisions they did or did not make to AI-generated text, and so on. I've come to believe that the real learning happens in students' articulation of these questions, and that as they write about their experiences with ensuring the effectiveness of their texts, they are honing their rhetorical skills and their understanding of their writing and what makes it work. In this process, they are becoming better and more engaged writers.

I have embraced two key principles about students' use of AI text generators. First, I have accepted that our students will be entering a workforce where these tools will be readily available and where they will be expected to use them effectively. It stands to reason, then, that I would be remiss not to encourage them to begin using the tools now, while they are still in school. Second, I have realized that in order for these tools to be used effectively, users must develop solid prompt engineering skills, which may not come as naturally as the more intuitive Google search has to most people. As such, students need opportunities to develop these skills, and those opportunities should be built into their writing classes.

It is ironic to me that the chat mechanism that enabled ChatGPT to explode in popularity—thus inciting panic in many educators—is also the means of enabling students' critical thinking and engagement with writing in new and fruitful ways. Whereas previous technologies such as browser search bars require users simply to input key words or phrases, ChatGPT works best when users engage it in a dialogue. Doing so effectively requires critical engagement, as users must consider how to prompt it to yield something valuable and then carefully evaluate that output to determine how rhetorically effective it might be, given the rhetorical situation. This process is exactly the sort of engagement I hope my students will participate in, as I want them to develop an ability to determine what does and does not work in a given text and to be able to articulate why. In this way, LLMs like ChatGPT have the capacity to support my teaching goals rather than to detract from them. As I recently overheard someone say at a conference, "In the age of LLMs, writing will likely become rewriting." While both the veracity and the efficacy of this statement can be debated, it does seem clear that, at least for now, LLMs require much more engagement and authorial intent to yield results than does, say, a Google search.

And while some students may intuitively recognize and be prepared to engage in a fruitful dialogue with tools like ChatGPT, others may need support in learning how to prompt these tools to produce useful outputs. If my role is to prepare students for the writing they will do personally, professionally, and civically once they leave the university, then arguably the advent of tools such as ChatGPT has expanded my responsibility to help students prepare to do this work effectively in a society that will undoubtedly feature AI tools and expect those entering the workforce to utilize and engage with them in productive ways.

Moreover, as someone who has taught upper-division business writing courses for the past 20-plus years, I have realized that ChatGPT and tools like it can help solve a conundrum I have long wrestled with in those courses. My business writing courses have been the site of dual but conflicting messages: on the one hand, students should have a right to their own languages, as the National Council of Teachers of English argued in 1974, while on the other hand, business writing requires adherence to Standard American English conventions. Until recently, I had three not-so-great avenues for handling these conflicting principles: (1) I could give these students a poor grade for their inability to generate texts with "flawless" Standard American English; (2) I could offer them extra help (read: I could work with them line by line to show them the grammatical issues in their texts) in their efforts to meet these standards; or (3) I could send them to the writing center in the hopes that the tutors there could help them identify patterns of error and help them edit their texts. LLM applications give me a fourth, and I now think far better, option: inviting students to run their prose through the technology for help with grammar and style conventions. These models thus harness the potential for increasing equity, as students—particularly those for whom English is not their first language—can utilize them to "clean up" their prose and make it conform to academic and professional expectations.
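As a concrete illustration of that fourth option, the sketch below sends a student's draft sentence to a chat model under instructions that stay at the grammar-and-style end of the authorship spectrum described in our policy. It is a minimal example that assumes the OpenAI Python client and uses a placeholder model name; any chat-capable LLM could be substituted, and the instruction wording is illustrative rather than program policy.

```python
# Minimal sketch: constrain the model to the "grammar aid" role.
# Assumes the OpenAI Python client (pip install openai) with an API key
# in the OPENAI_API_KEY environment variable; the model name below is a
# placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

EDITOR_INSTRUCTION = (
    "You are a copyeditor. Correct grammar, punctuation, and Standard "
    "American English conventions only. Do not add ideas, change the "
    "argument, or alter the writer's voice. After the edited text, list "
    "each change you made and the convention behind it."
)

draft = "The quarterly reports shows that sales have went up since march."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": EDITOR_INSTRUCTION},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```

Asking for the list of changes matters pedagogically: like a writing center session, it surfaces patterns of error rather than silently erasing them.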

Dan:

There is such interesting pedagogical potential in that. Every student can now have a personal tutor, if they know how to ask for it. But knowing what to ask for, and how, is the key here. Students can ask an LLM to rewrite, restructure, or modify a paragraph, a sentence, or even a single word. Or they can take this one step further and ask for multiple revisions. Or they can ask for multiple revisions alongside a paragraph that reflects on what choices were made in the construction of each revision and why. Each approach falls at a different point on a spectrum of student agency, reflection, and potential for growth. Our position as teachers, then, is to push students along that spectrum to get them to really interact with the tool, think through the choices at play, and critically engage. In my classes, I present a version of the same information I produced for my colleagues in the AI Writing Primer and the Policy. These are the three golden rules I drill in each class:

Here are a few guidelines to help you make the most of AI writing tools while maintaining your own voice and rhetorical agency:

Think Critically: AI writing tools such as ChatGPT do not "know" things; they just produce language patterns. They are prone to "hallucinating" what may be convincing, yet often incorrect and/or uncitable, information. It is essential to critically evaluate the output and cross-check any facts, claims, or sources provided by the tool. In general, you shouldn't rely exclusively on these tools for research or content. You need to personally provide the intent, the ideas, and the research.

Acknowledge the Use of AI: If you incorporate ideas, text, or inspiration from AI writing tools in your work, you must acknowledge that use in your author's notes or attributions. How did you use it? To what extent? In what part of the writing process? How did it help? This will help you develop your reflection and metacognition, and it will promote a culture of reflective, responsible, transparent AI usage.

Protect your Rhetorical Sovereignty: It is crucial to ensure that your own voice, creativity, and innovation are not surrendered to the AI. Remember that rhetorical utterances are powerful: a statement can shape our view of reality. If the tool creates a sentence, you will be influenced by it, and you may never know what network of thinking could have been created if you had built the sentence yourself. We write to learn and we write to think; don't let these tools control your thinking.

After frontloading the areas of critical concern for them, I try to guide students into thinking through the technology conversationally, iteratively, and rhetorically. I tell my students the following:

ChatGPT tends to produce repetitive, formulaic text. It won't necessarily innovate or surprise you. It also can't do real research—it may even fabricate information. It doesn't actually "know" anything on its own. It just mimics patterns. However, ChatGPT can help you brainstorm ideas, play with language, and iterate on drafts. Ask it for multiple revisions to improve responses. Give it explicit instructions and source material to produce better results. Treat it as a tool for generating language, not facts. Verify anything it says through outside research.

Don't stop at just one input and output: ChatGPT is a conversation. Go back and forth. Iterate with it. Use ChatGPT to experiment with different voices, genres, styles, hooks, and argument flows. Ask it for word choice suggestions and metacommentary on your writing. Just don't let it override your own thinking and goals. Make sure to attribute any language you get from ChatGPT. The key is using ChatGPT as a mediated tool to augment your skills, not replace them. Our discussions will focus on using AI ethically and effectively to support your writing process.
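One way to make that back-and-forth concrete is to keep the whole exchange in a running message history, so that each request builds on the last. The sketch below is a minimal illustration assuming the OpenAI Python client and a placeholder model name; the prompts simply mirror the advice above about requesting multiple revisions along with a reflection on the choices behind each one.

```python
# Minimal sketch of iterative revision as a conversation: every turn is
# appended to the history, so the model sees the full exchange so far.
# Assumes the OpenAI Python client and an OPENAI_API_KEY environment
# variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system",
            "content": "You are a writing tutor. Never invent facts."}]

def turn(user_text, model="gpt-4o-mini"):  # placeholder model name
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model=model, messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

draft = "Social media connects people but also it makes people lonely."
print(turn("Give three revisions of this sentence in different tones, and "
           "after each one explain the choices you made: " + draft))
print(turn("Make the second revision less formal and explain what changed."))
```

The point is not the code but the habit it models: each follow-up pushes the student further along the spectrum of agency and reflection described above.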

In my classes I've run multiple discussions with my students about this tool, asking them about their experiences with it and running through its critical points, cautions, and potential uses. I then break the students into groups and ask them to play with the tool themselves, creating and then discussing, evaluating, and revising (either through prompt iteration or by hand) the writing that it produces.

Looking Ahead

As the technology continues to develop at breakneck speed, our own and our colleagues' experiences with it continue to evolve. To reveal some of the range of conversations, takes, and developing approaches, we include here a video of interview clips from some of our Writing Program colleagues detailing their thoughts, approaches, and experiences to date with large language models such as ChatGPT in their classrooms. While it's clear that there is hardly a consensus among our department's faculty in regard to these tools, we are pleased to see the conversation continue to unfold and are encouraged by the careful thought evident in our colleagues' responses.

Video Transcript

James Donelan: I've been experiencing ChatGPT and other LLM tools in the classroom mainly as a negative influence. I teach a lot of skills at a lot of levels having to do with writing, from brainstorming and outlining to drafting and editing, and these are skills that I think need practice and exercise to develop. When I speak to students about what they do with ChatGPT, when they're trying to do something honest, what they tell me is that they're using it to get some ideas, or they're using it to help create an outline, or they're using it for proofreading, on the other end, but they're really doing the actual writing. Well, the actual writing is from start to finish, and skills that are unused do not develop. People make a lot of comparisons between ChatGPT and the introduction of calculators into math classes. And this is something, actually, the math teachers were pretty careful about. They managed to gradually introduce calculators into classrooms. They knew when to use them and when not to use them. They were pretty clear about what skills they were trying to develop at which stage, and whether those skills were forming.
James: This introduction of ChatGPT seems to me a little bit haphazard. What exactly the product is, whether it's of the right quality, whether it represents the thought of an actual human being, or represents a reflection of reality or genuine research, all these questions are up in the air, and yet it's everywhere. Students are using it; professors are using it. A more gradual and more thoughtful introduction would have been welcome, but that's not how things work these days.
James: As far as my own experience as an educator, I am trying to address the situation consciously, but so far, I haven't seen much upside. I can't think of a particular thing that the student can do with an LLM tool that they can't do better and more consciously under other circumstances. I realize that sounds very anti-tech, which I'm not, but this particular tool I don't think is ready for prime time.
Paul Rogers: My experience with ChatGPT in the classroom has been positive. I've tried to have an open discussion with students about ChatGPT. I take the major assignments that I do, the prompts, and I push them through ChatGPT in front of them, so they can see what ChatGPT generates, and I certainly try to have open discussions with them about what they know and surface their background knowledge.
Paul: I'm also kind of looking for the professional guidance and sharing the professional guidance from various input sources like the APA, or journals, or the Screenwriters Guild, or other kinds of professional guidance that's being offered around how to use ChatGPT, and I think that's been interesting for the students.
Paul: As far as the ethical boundaries and guidelines, what I try to do is suggest to the students that we're not looking to try to get as close to the ethical line of plagiarism and see what we can get away with; what we're trying to do is actually create a big gap between that, so there's never a question that we would ever be accused, ever, of plagiarizing or presenting ChatGPT as if it was our own writing.
Paul: I think that, finally, I'll just say that ChatGPT as a brainstorming partner has some real possibilities. We do an assignment in Writing 107WC where I have students brainstorm the types of content that they could create in order to meet the needs of individual users within particular communities, whether those communities are ultimate frisbee, or people studying abroad for the first time, or future doctors, or whatever, and I want them to see that there is some value in ChatGPT as a brainstorming tool.
Paul: But ultimately, I think that ChatGPT is here to stay, and I think it's really important that we're having this conversation. I find ChatGPT to be a little bit of a bully; kind of pedantic, it's so sure of itself, and offering me this material that, you know, I'm never going to be able to create text as fast as ChatGPT does, but I find the writing itself to be somewhat lifeless. So, I think that it's a wave that we have to go through, and I really want to thank the leaders of the writing program for facilitating this discussion. I'm learning a lot. So thank you very much.
Christine d'Anca: So as more and more academics began incorporating ChatGPT into the classroom, I began thinking of how I would as well. Honestly, at first, I was hesitant, not because I didn't think it could work, but because I didn't know how. I didn't even know where to begin. Nevertheless, I was determined to show my students that using this technology was not going to get them what they desired.
Christine: I recalled Ben Rafoth's article "Why Visit Your Campus Writing Center?" (Writing Spaces, Volume 1, 2010), in which he argues that students will ultimately do whatever it takes to get what they need in order to receive a better grade. So I thought that if only I could demonstrate to them that ChatGPT was not going to provide them with sufficient materials to earn good grades, they would stop using it and the problem would solve itself.
Christine: I was on the right track but had not yet realized I was going about it wrong. As I began playing around with different potential lesson plans, I was inspired by Sidney Dobrin's "Talking about Generative AI: A Guide for Educators" (Broadview Press, 2023). I came up with a mini lesson that would have my students deconstruct an AI-generated response to a prompt in order to reverse engineer the writing process that is not transparent in the final result, because a chatbot does not show its work or produce rough drafts in the process of coming up with its answer. Then we could discuss the different rhetorical shortcomings, after which I'd have students revise the text, incorporating the different elements that would strengthen it, while also having them conduct a metacognitive reflection on their process. This would get them to learn and implement the various writing and literary conventions I teach while also becoming better acquainted with the underlying mechanisms of generative AI.
Christine: I was very pleased with the idea, but then decided to go a step further. So I decided to first have my students use ChatGPT to generate an outline for a paper prompt and see what we could do with it. While the outline produced was seemingly comprehensive and very, very lengthy, my students were able to immediately identify a large problem with it: the outline provided only basic ideas about what the different sections of the paper should cover, without actually giving any details as to what those ideas should be. So, for example, it would say something like, "identify the main arguments and evidence the author presents." OK. So what are those arguments? What are those main ideas? So I asked ChatGPT to get more specific. It generated a new outline. While it rearranged some of the points, the examples it provided were still very generic.
Christine: My students were initially upset that it didn't actually give them any of the important information they could incorporate into their papers. Yet it soon became clear that this is not in fact a problem but rather a solution. It allows them to use the technology to help organize the various components of the paper while doing all of the heavy lifting themselves. Thus, the outline becomes a guide for them to work with the text and find the necessary information themselves: identify the arguments, the uses of evidence, et cetera. In other words, ChatGPT provided the perfect starting point, without doing any of the actual work for the students. It's no different than working with a template like the ones I've given them in class, which basically makes the process ethical, and also beneficial, as from the multiple points created by the outline they can pick and choose those that address the prompt before fleshing them out with details from the text.
Christine: So, I'm still playing around with all of these ideas, and I look forward to actually seeing how my students interact with these processes over the academic year, and learn from these experiences in order to produce better assignments in the future.
Chris Dean: So, the question is, what's your experience been with LLM tools such as ChatGPT in the classroom so far? So last year, particularly in Writing Two, we actually did a little bit of work with it around a prompt that we were using; I didn't use any student work because for me, that's a big no-go, I don't want student work to be trained on by ChatGPT, but my prompt, that's fine, and the responses we got back were, predictably, very ChatGPT-like: largely general, pulling from web sources, all the sorts of things that the prompt actually tells students not to do. What was very interesting, though, was my students asking it, based on the prompt, to try to create a 4-to-6-page paper. And this was an argumentative piece in Writing One at the time, and we got ChatGPT at the time to really kind of hallucinate like crazy when it was pushed on argumentative topics, which I thought was pretty interesting.
Chris: This quarter, what I'm doing is trying to get each and every class that I have to think about ChatGPT, first of all, and then to create syllabus language together (we're in the process of doing that next week) around how we agree we should use ChatGPT right now. The big things are, you know, we're going to cite everything, we're not going to use it for final drafts, and we're going to agree to use it, both they and I, based upon some of the things that we want to say. We're also going to have, particularly in my Writing 109 ED class, since it's about the teaching of writing and education more generally, some actual experiments with ChatGPT to see how it's used, because it's already being used, particularly in K through 12, by a lot of teachers and certainly a lot of students, and we have everything from blanket bans to people really trying to push the technology to see what it can do.
Chris: So it's definitely gonna be a part of the class, probably about three or four times in 109 ED and a couple of times in Writing One and Writing Two, but the big thing, I think, is the co-construction of syllabus language and a conversation about what ChatGPT actually is and isn't, and how we're going to agree to ethical standards for how we all use it.
Katie Baillargeon: So my experience with ChatGPT was that once I started fiddling with it, I realized, in retrospect, that some of my students had clearly been using it, and had been using it for some of the larger assignments, assignments that I had actually kicked back as being incomplete, not responding to the prompt, and problematic enough that they needed to be rewritten. And so I then thought, OK, what am I going to do about this? I had two ideas in mind with that "what am I going to do?" Number one was that it's a tool that's available to them, one that can save time when used responsibly to create a rhetorically effective piece, so I can't really blame them for wanting to use it.
Katie: And second of all, back then, when I didn't know or think that they would have been using ChatGPT to draft their work, I just assumed that it was an early-stage draft, that the student had run out of time and submitted it instead of a final piece. So, with those things in mind, and knowing as well that my friends and family who work in non-academic sectors use ChatGPT to help them draft their own pieces of writing (and it's always the draft stage, right? This isn't the final thing; it's something to start with, to kind of boost them to another level so that they can save time), I thought, well, is it not then my position as a writing instructor at a university to help students figure out, "OK, how can I use this tool responsibly to create rhetorically effective pieces?" So that's what I do in my classes: in class on any given day with a particular genre, like a literature review, it's, OK, you've got your own list of 20-plus sources with abstracts, and you've already started to think about connections between these works; now ask ChatGPT to write a literature review on your research topic. And, all right, what do we have to discard? Because there's a lot to discard; we can't use much of it. And then what are the kernels of things that we can build upon and improve upon with what I know? So there's that human element to it. It does save a little bit of time, it gives some help and some ideas for drafting, but it is not the end product at all. And I feel like that is, hopefully, the best way I can use it, for now, until it changes again. But yeah, those are my experiences.
Victoria Houser: Hi, my name is Victoria Houser, and I'm an assistant teaching professor in the Writing Program at the University of California, Santa Barbara. I've been using ChatGPT in my classrooms in a few different ways. I haven't used it yet this quarter, but I have used it in the past to help me do things like generate rubrics, revise project prompts, and generate questions for freewriting responses.
Victoria: Everything that I've asked ChatGPT to do comes from a place of understanding what that tool is: before we use it, while we're using it, and after we use it. So the discussion I have with my students is centered around this: how do we engage with this technology as human beings who have nuance and bring all kinds of unique attributes to our writing? What do we do with a tool like this? The first step I took was to generate a rubric for an assignment in class with my students. We looked at what ChatGPT brought up, and there were lots of things where my students said, "I would like to see this thing not on there," or "I would like to see this thing included." And so we were able to ask ChatGPT to make those changes for us. That discussion led us to think through together how ChatGPT is really a tool that requires a certain level of critical reflection and rhetorical thinking, right? Thinking about the ethics around it, and how we use it, and why we use it, and what it does for us as writers. This is, of course, still an evolving and ongoing conversation, but I look forward to seeing how my students pick this up. And one of the directions I would love to see this go is to kind of demystify what ChatGPT is for students, right? And for educators as well: it isn't necessarily this giant, evil, plagiarism-churning device, right?
Victoria: It is something that, like all technologies, we can engage with critically and reflect on what it means for us, what it does for us, and what we do in the context of the space itself. So I look forward to seeing the future conversations that come from reflection such as this.

There is still much to figure out. The conversation is by no means over, and the answers are far from set in stone. Indeed, much of the approach heard in both this video and in our reflections relies on the fact that, without hands-on mediation and reflection, the writing these tools produce won't be good enough to do the job. This might not always be the case. What we must promote to students, then, is the value of their own reflection and voice, and the importance of taking ownership over the choices involved in interacting with these tools. Inherent in this (and this came up several times in faculty discussions) is the question of motivation. Why is a student writing? What are the goals? Does the student have a purpose in mind? Do they have a desire to develop knowledge, expertise, an individual and powerful voice? Or is the student going through the motions in order to get the grade and move on? Unfortunately, the grade-based, transactional structure of the traditional educational apparatus tends to promote the latter rather than the former. There very well may be a point down the line when these technologies are advanced enough to make this transactional structure untenable. If we get to that point, we will all have a lot of work to do, and a lot to rethink. But forgive a hint of optimism here when we suggest that perhaps we might restructure for the better.

References

Adiguzel, Tufan, Kaya, Mehmet H., & Cansu, Fatih K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(3), ep429.

Bogost, Ian. (2022, December 7). ChatGPT is dumber than you think. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/

Chan, Cecilia K. Y., & Hu, Wenjie. (2023). Students' voices on generative AI: Perceptions, benefits, and challenges in higher education (arXiv:2305.00290). arXiv. https://doi.org/10.48550/arXiv.2305.00290

Conference on College Composition and Communication. (1974). Students' right to their own language. College Composition and Communication, 25(3), 1–32. https://cdn.ncte.org/nctefiles/groups/cccc/newsrtol.pdf

Domenech, Josep. (2023). ChatGPT in the classroom: Friend or foe? Proceedings of the 9th International Conference on Higher Education Advances (HEAd ’23), 339–347. https://doi.org/10.4995/HEAd23.2023.16179

Frank, Daniel. (2018). Microworld writing: Making spaces for collaboration, construction, creativity, and community in the composition classroom [PhD thesis]. Clemson University.

Halaweh, Mohanad. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2). https://doi.org/10.30935/cedtech/13036

Hicks, Troy, DeVoss, Dànielle N., & Eidman-Aadahl, Elyse. (2010). Because digital writing matters: Improving student writing in online and multimedia environments. John Wiley & Sons.

Itō, Mizuko. (2010). Hanging out, messing around, and geeking out: Kids living and learning with new media. MIT Press.

Kasneci, Enkelejda, Seßler, Katrin, Küchemann, Stefan, Bannert, Maria, Dementieva, Daryna, Fischer, Frank, Gasser, Urs, Groh, Georg, Günnemann, Stephan, Hüllermeier, Eyke, Krusche, Stefan, Kutyniok, Gitta, Michaeli, Tilman, & Nerdel, Claudia. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. EdArXiv. https://doi.org/10.35542/osf.io/5er8f

Liang, Weixin, Yuksekgonul, Mert, Mao, Yining, Wu, Eric, & Zou, James. (2023). GPT detectors are biased against non-native English writers (arXiv:2304.02819). arXiv. https://doi.org/10.48550/arXiv.2304.02819

Marche, Stephen. (2022, December 6). The college essay is dead. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/

Meyer, Jesse G., Urbanowicz, Ryan J., Martin, Patrick C. N., O'Connor, Karen, Li, Ruowang, Peng, Pei-Chen, Bright, Tiffany J., Tatonetti, Nicholas, Won, Kyoung J., Gonzalez-Hernandez, Graciela, & Moore, Jason H. (2023). ChatGPT and large language models in academia: Opportunities and challenges. BioData Mining, 16(1), 20. https://doi.org/10.1186/s13040-023-00339-9

Mogavi, Reza H., Deng, Chao, Kim, Justin J., Zhou, Pengyuan, Kwon, Young D., Metwally, Ahmed H. S., Tlili, Ahmed, Bassanelli, Simone, Bucchiarone, Antonio, Gujar, Sujit, Nacke, Lennart E., & Hui, Pan. (2023). Exploring user perspectives on ChatGPT: Applications, perceptions, and implications for AI-integrated education (arXiv:2305.13114). arXiv. http://arxiv.org/abs/2305.13114

Mollick, Ethan R., & Mollick, Lilach. (2022, December 13). New modes of learning enabled by AI chatbots: Three methods and assignments (SSRN Scholarly Paper 4300783). https://papers.ssrn.com/abstract=4300783

Mollick, Ethan R., & Mollick, Lilach. (2023). Assigning AI: Seven approaches for students, with prompts. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4475995

Papert, Seymour, & Harel, Idit. (1991). Situating constructionism. Constructionism, 36, 1–11.

Rahman, Md. Mostafizer, & Watanobe, Yutaka. (2023). ChatGPT for education and research: Opportunities, threats, and strategies. Applied Sciences, 13(9), 5783. https://doi.org/10.3390/app13095783

Rudolph, Jürgen, Tan, Samson, & Tan, Shannon. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1).

Sadasivan, Vinu S., Kumar, Aounon, Balasubramanian, Sriram, Wang, Wenxiao, & Feizi, Soheil. (2023). Can AI-generated text be reliably detected? (arXiv:2303.11156). arXiv. https://doi.org/10.48550/arXiv.2303.11156

Selfe, Cynthia L. (2007). Multimodal composition: Resources for teachers. Hampton Press.

Sharples, Mike. (2023). Towards social generative AI for education: Theory, practices and ethics (arXiv:2306.10063). arXiv. https://doi.org/10.48550/ARXIV.2306.10063

Terry, Owen K. (2023, May 12). I’m a student. You have no idea how much we’re using ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt

Wu, Yi. (2023, July). Integrating generative AI in education: How ChatGPT brings challenges for future learning and teaching. Journal of Advanced Research in Education, 2(4), Article 4.

Yancey, Kathleen B. (2009). Writing in the 21st century: A report from the National Council of Teachers of English. National Council of Teachers of English. https://literacy.wonecks.net/2009/03/16/writing-in-the-21st-century-a-report-from-the-national-council-of-teachers-of-english/