Since this has been around for a while, there are actually dozens of great sources to read for a bit of context. I'll limit it to 5 suggestions here.
"Better Language Models and Their Implications," OpenAI blog, February 14, 2019
OpenAI is the creator of the GPT models. Scroll down the page to see a few examples of articles generated by GPT-2. You'll notice some flaws, and note that it took several tries for each of these to come out readable.
AI Weirdness blog; updated regularly
This excellent blog by researcher Janelle Shane is full of examples of AI models going haywire in the most amusing ways possible. It just might put your mind at ease!
"A robot wrote this entire article. Are you scared yet, human?" GPT-3, The Guardian, September 8, 2020
This op-ed was generated by GPT-3, and assures you that it comes in peace.
"Wikipedia bots" entry on Wikipedia; updated regularly
We all know that people use Wikipedia regularly, and we all know why we advise students against relying on it. This brief entry acknowledges the use of bots for rote maintenance tasks and points to Lsjbot as an example of a bot used for article generation.
"Higher Ed, Meet GPT-3: We Will Never Be The Same!" Ray Schroeder for Inside Higher Ed, August 24, 2022
The brief overview of GPT-3 in this column also gives some perspective on its potential impact on higher education. One statement that's food for thought is the author's assertion that "[m]any of the basic assumptions of education are brought into question by the advent of this level of human-computer interaction."