ChatGPT Wrote This Article and then Totally Stole My Job!
e-Literate 2023-02-02
As I outlined recently in my “e-Literate’s Changing Themes for Changing Times” post, I am shifting my coverage somewhat. I’ll be developing and calling out tags I use for these themes so that you can go to an archive page on each one. This one will be listed under the “AI/ML,” “third-wave EdTech,” and “future of work” tags.
I’ve been fascinated by the rapid progression of ChatGPT article fads:
- Look at this weird thing that writes stuff!
- I asked ChatGPT a question—and here’s what it answered!!
- I asked ChatGPT to write this article—and it totally did!!!
- Students could use ChatGPT to write essays!!!! End of the world or totally awesome?????
- I asked ChatGPT for suggestions about preventing students from using ChatGPT to cheat—and it gave me five great suggestions (and five terrible ones)!!!!!!
Waitaminute. Let’s back up.
Students finding easy ways to cheat is not exactly a new thing. Remember, “to Chegg” is a verb now. Let’s back up to fad #3. Writers are asking ChatGPT to write their articles, publishing those articles, and then advertising that the articles published under their by-line were written in 30 seconds by a machine.
Do they want to get replaced by an algorithm?
It seems to me we’re thinking about the problem that these algorithms present in the wrong way.
At the moment, ChatGPT is a toy
Language-generating algorithms like ChatGPT and their image-generating cousins are toys in both good and bad ways. In a good way, they invite people to play. Anyone, whether or not they know anything about programming, can explore the possibilities and limitations of a certain kind of cutting-edge AI. It’s fun. And, like all good games, you learn by playing. You can learn what they can and can’t do well, how to write a good prompt, and so on. They make AI more accessible and, in a certain way, more understandable.
They’re toys in the bad sense because they weren’t designed for any particular purpose and tend to do weird things for any given task. For example, I tried to get DALL-E 2—ChatGPT’s image-generating sibling—to create a retro comic book cover depicting ChatGPT as a scary vampire. I tried half a dozen different prompts. The best result I got was with “An evil vampire on the cover of a retro comic book called ‘ChatGPT’”. Here’s the best image it came up with:
“An evil vampire on the cover of a retro comic book called ‘ChatGPT’” as interpreted by DALL-E 2

The algorithm does evil comic-book vampires pretty well. Spelling a word that was explicitly spelled for it in the prompt? Not so much.
Remember, a large language model like the one behind DALL-E 2 and ChatGPT doesn’t understand anything. It takes a bag of words and stitches together pieces it found on the internet to construct something that seems like an appropriate response.
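To build some intuition for what that means, here is a minimal toy sketch in Python (my illustration, nothing like OpenAI’s actual code, and using a made-up three-sentence “corpus”) of the core mechanic: choose each next word based purely on which words tended to follow it in the training text. Real models are vastly larger and more sophisticated, but the principle of pattern continuation without comprehension is the same.

```python
import random
from collections import defaultdict

# Toy "training data." Real models train on a large slice of the internet.
corpus = (
    "the vampire rose from the grave "
    "the vampire stalked the night "
    "the night was dark and the grave was cold"
).split()

# Record which words follow which (a bigram table).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Continue a prompt by repeatedly sampling a plausible next word.
    No meaning is involved, only observed word-to-word patterns."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # never saw this word followed by anything; nothing to say
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
# e.g. "the vampire stalked the night was dark and the"
```

The output looks plausible word by word, which is exactly why these systems can also be confidently wrong.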
ChatGPT, for example, is notoriously bad at math. Because it doesn’t calculate. That’s not what it is designed to do. Its developers have been working on improving this limitation. So I asked it how good it is at math:
Me: How good are you at math?
ChatGPT: I am able to perform basic arithmetic operations, such as addition, subtraction, multiplication and division. My training data includes mathematical concepts and I have been trained to recognize and respond to arithmetic questions. However, it is important to note that I may make mistakes in complex or unconventional mathematical computations.
Me: What is 2794 + 83948?
ChatGPT: 87,742
That answer is incorrect. The correct sum is 86,742.
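For contrast, the deterministic computation ChatGPT is being asked to imitate is a one-liner in any programming language:

```python
# A calculator computes; a language model predicts plausible-looking digits.
print(2794 + 83948)  # 86742, correct every time by construction
```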
So ChatGPT, the miracle of modern technology destined to destroy all education, is not as “smart” as a calculator. Note that you can ask Google or your Apple search prompt the same question and reliably get a correct answer. OpenAI (the makers of ChatGPT) are likely trying to combine much simpler but mathematically reliable algorithms with the GPT-3 algorithm that underlies ChatGPT and DALL-E 2. The fact that they haven’t solved this problem yet suggests it is non-trivial. These different types of algorithms can be challenging to mesh. Many of the more impressive applications of artificial intelligence and machine learning are made possible by an ensemble of different algorithms working together, some of which fit together more naturally than others.
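One common pattern for that kind of ensemble is to route the cases a reliable tool can handle to that tool and reserve the language model for everything else. Here is a hedged sketch of the idea in Python; the routing rule, the toy calculator, and the `language_model` stand-in are all my inventions for illustration, not a description of OpenAI’s actual architecture.

```python
import re

def calculator(expression: str) -> str:
    """Deterministic arithmetic: reliable, but only for what it recognizes."""
    a, op, b = re.fullmatch(r"(\d+)\s*([+\-*])\s*(\d+)", expression).groups()
    ops = {"+": lambda x, y: x + y,
           "-": lambda x, y: x - y,
           "*": lambda x, y: x * y}
    return str(ops[op](int(a), int(b)))

def language_model(prompt: str) -> str:
    """Stand-in for a GPT-style model: fluent, but not guaranteed correct."""
    return f"[model-generated answer to: {prompt!r}]"

def answer(prompt: str) -> str:
    # If the prompt contains something that looks like arithmetic, hand it
    # to the tool that cannot be wrong; otherwise fall back to the model.
    match = re.search(r"\d+\s*[+\-*]\s*\d+", prompt)
    if match:
        return calculator(match.group())
    return language_model(prompt)

print(answer("What is 2794 + 83948?"))         # -> 86742
print(answer("Write me a retro comic blurb"))  # -> model-generated text
```

The fragile part is the routing itself: deciding which prompts are really arithmetic is easy for a regex in this toy and genuinely hard in the general case, which is one reason meshing these algorithms is non-trivial.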
Don’t get me wrong; the math problem will be solved. The OpenAI engineers will crack it just as they are now pretty good at preventing ChatGPT from making stock predictions or answering questions about current events when its data set hasn’t been updated since 2021. But it’s a game of whack-a-mole. Because you can ask ChatGPT anything, people do ask it anything. The creators are learning a lot about the questions people ask and what can go wrong with the answers. This new knowledge will help them design more specific solutions. But a general-purpose prompt tool like ChatGPT will be hard to make good at solving any one particular problem.
I’m not convinced that ChatGPT, as it exists today, represents a big leap forward in essay cheating. It has length limitations, has to be fact-checked, can’t produce references, and its quality of reasoning and argumentation varies wildly. Students would learn more by trying to fix the problems with a ChatGPT-generated draft than they would by going to a traditional essay mill.
Short answer questions are a different matter. ChatGPT is already dangerous in this area. But again, students can already “Chegg” those.
Yes, but…
Could somebody write a better program specifically for writing school essays? Or magazine articles? Yes. That work is already underway.
So what do we do about the essay cheating problem? Let’s start with the two most common answers. First, we can develop algorithms that detect prose written by other algorithms. That work, too, is already underway. So we’ll have yet another flavor of the cheating/anti-cheating arms race that benefits nobody except the arms dealers. The anti-cheating tools may be necessary as one element of a holistic strategy, but they are not the ultimate answer.
Second, we can develop essay-writing prompts and processes that are hard for the algorithms to respond to. This would be useful, partly because it would be good for educators to rethink their stale old assignments and teaching practices anyway. But it’s a lot of often uncompensated work for which the educators have not been trained. And it ends up being another arms race because the algorithms will keep changing.
We miss the point if we respond to language-generating AI as a static threat that might become more sophisticated over time but won’t fundamentally change. ChatGPT is just a friendly way for us to develop intuitions about how one family of these algorithms works at the moment. You’re wrong if you think it is a one-time shock to the system. We’re just at the beginning. The pace of AI progress is accelerating. It is not just going to get incrementally better. It is going to radically change in capabilities at a rapid pace. It will continue to have limitations, but they will be different limitations.
So what do we do?
How about talking to the students?
When adaptive learning hit peak hype, a glib response to teacher hysteria started making the rounds: “If you [teachers] can be replaced by a computer, then you probably should be.”
Doesn’t that apply…um…generally?
If all students learn is how to use ChatGPT to write their essays, why wouldn’t their hypothetical future employer use ChatGPT instead of hiring them? Why would students spend $30K, $40K, $50K, or more a year to practice demonstrating that a free-to-use piece of software does their best work for them? Students need to learn the work these tools can do so they can also understand the work the tools can’t do. Because that is the work the students could get paid for. Technology will make some jobs obsolete, leave others untouched, change some, and create new ones. These categories will continue to evolve for the foreseeable future.
At a time when students are more conscious than ever about the price-to-value ratio of a college education, they ought to be open to the argument that they will only make a decent living at jobs they can do better than the machine. So they should learn those skills. Why learn to write better? So you can learn to think more creatively and communicate that creativity precisely. Those are skills where the primates still have the advantage.
Once we engage students openly and honestly on that point, we will start building a social contract that discourages cheating, as well as the foundational understanding we need for rethinking the curriculum—not just to keep from falling too far behind the tech but to help students get out in front of it. The current limitations of these AI toys demonstrate both the potential and the dangers. If you want to apply the technology to any particular domain, whether it’s math, writing advertising copy, or something else, you need to understand how the software works and how the human expertise and social or business processes work. Whole echelons of new careers will be created to solve these problems. We will need thinkers who can communicate. Learning how to formulate one’s own thoughts in writing is an excellent way to learn both skills.
Fighting the tech won’t solve the problem or even prevent it from getting worse. Neither will ignoring it. We have to engage with it. And by “we,” I include the students. After all, it’s their futures at risk here.
(Disclaimer: This blog post was written by ChatGPT.)
(I’m kidding, of course.)
(I am able to perform basic humor operations, such as generating dirty limericks and “your momma is so ugly” jokes. My training data includes humorous concepts, and I have been trained to recognize and respond to knock-knock questions. However, it is important to note that I may make mistakes in complex or unconventional humor.)