Drafting a Policy for Critical Use of AI

Daniel Frank and Jennifer K. Johnson

Presentation To Faculty (Talk Given 4/13/2023)

What Teachers Need to Know:

GPT can be used to cheat. But "cheating" needs to be defined. There is a vast range of rhetorical possibilities with this technology across a spectrum of student effort, agency, and engagement.

Having the conversation start and end at detection and banning aborts critical consideration of the technology and the roles it can play within the writing process.

  • It sets up a cat and mouse game where students just try to get ChatGPT content past the detectors.
  • In these cases, students aren't primed to think carefully or honestly about the technology, and teachers aren't engaging with the writing either: missed learning opportunities from every direction.
  • It's also a Sisyphean task. The technologies continue to grow in complexity and ubiquity, and AI detection has never been reliable.

GPT can be amazing at helping students develop awareness of writing from the meta-layer: it lets them experiment with prompts, verbs, "says" vs. "does," genre, and remix. This is a language toy, and language toys are good for kids.

This is a paradigm-changing technology that is becoming explosively ubiquitous and is already part of expected 21st-century literacies. Jobs will expect experience with prompting and fluency in AI writing technologies.

  • The tech is growing beyond generating 'content' and into reshaping how we interface and interact with computers.

What Students Need to Know:

Students need to think carefully about how AI writing can integrate into the writing process, and to understand that it isn't a magic bullet that can do everything for them.

They need to understand GPT's limitations (hallucinations, formulaic output, etc.).

They need to understand GPT's risks (data privacy, loss of agency).

They need to take ownership of, authorship of, and responsibility for the writing they produce.