ChatGPT Is Not Your Friend

Mark C. Marino

Conclusion

AI can elicit quite a bit of fear, especially when it seems it can perform our duties better than we can. I offer these activities as opportunities to do what humans still do best: to mess around, to test boundaries, not as quality-assurance testers but as unorthodox, unruly organisms who resist rules and rarely follow directions. To quote one of our players when we presented him with the directions for our latest netprov: "Those are your rules." He was determined to play by his own. Viva humanity.

While some courses are centered on tools and others on skills, I have had the most success with courses centered on students. Looking back at the assignments that were most successful, I see exercises that invited students to play. Whether in netprovs or Turing tests, students never cease to surprise me.

For in the tool-centered course, we imagine universal instruments that treat everyone the same. But as we unpack the white supremacist (heteronormative, etc.) training of ChatGPT and other LLMs, we see these machines for what they are: ways to whitewash the writing process. Just like standardized language enforcement, they tend to steamroll students' individual voices. And as students incorporate more AI generation into their writing process, they are in danger of silencing themselves, of taking themselves out of the equation.

Critical, creative, and playful uses of this technology, through ready-made exercises that everyone can jump into, offer ways for students to take on AI together and to know it more fully, guided by those three words being systematically scrubbed from our universities: diversity, equity, and inclusion. Without those words, our language and culture become mere assembly lines for an unjust future with no humans empowered to critique it.