Michael K. Shaub, May 4th, 2023
I received a reassuring administrative email this morning with the following message:
“We are concluding our first full semester since generative artificial intelligence (generative AI, including ChatGPT) entered the popular consciousness and made considerable inroads into the educational landscape. We acknowledge the concerns people have, but we also understand the many new pedagogical innovations generative AI can potentially offer. Our challenge is safeguarding against the negatives and capitalizing on the positives. We will continue working with and for our faculty to do both, and we will engage with the Faculty Senate to develop a syllabus statement and other appropriate adjustments as we adapt to this new tool in our academic environment.”
Speaking here from the parapet of the fortress that is academic honesty, I have a one-word response: Hahahahahahaha! Nothing gives more reassurance to faculty members trying to maintain a level playing field for students than a forthcoming Zoom seminar on what generative AI can do—and a syllabus statement. "Our challenge is safeguarding against the negatives" is equivalent to saying that our challenge with fraternities is to reduce the consumption of beer. It is in the DNA of these technological developments that people will use them for advantage, even if their originators intended them for progress.
I know that I sound unduly negative. Please understand that I teach professional skepticism as an Auditing and Accounting Ethics professor. Like an auditor, I consider it my job to be professionally skeptical about these types of developments, especially given that I have some expertise in preventing and detecting cheating in a university setting. As I would tell my students, "If you don't say something, who will?"
Will ChatGPT lead to pedagogical innovations? Of course it will. So did COVID. As professors, we will learn to use the tools available to us, and there is the real possibility that this will lead to greater productivity for students, and perhaps for professors. Will it lead to greater creativity? That is an open question, but it is more likely to lead to the next TikTok than the next Immanuel Kant or Adam Smith.
But before it does that, I can tell you from the front lines what it will certainly lead to, and that is a massive increase in undetected cheating. Since I study fraud, I know that it is impossible to accurately estimate the amount of undetected fraud. But try googling "undetected cheating" and you will find that the top hits concern cheating in online video games. Even those living in a fantasy world want to do whatever they can to make themselves look better than they are, and entire businesses exist to support that. Think about the pressure on college students to do the same in the real world as they compete for scholarships and jobs. Many of our students measure themselves by a number, not by how they got it. Free riding in groups, for example, has grown astronomically, based on my observations from my students' journaling assignments. And almost no student has the moral courage to confront it; the honest ones do all the work so as not to threaten their grades, and they leave it largely unreported so as not to be a snitch.
ChatGPT is not just a running start toward a better essay. It is a substitute for the generative idea process that makes a great mind. It is degenerative in the sense that it undermines the growth that comes from spending time reflecting, ostensibly to free up more time to produce a product. Because, in the end, if you are simply being measured by a number, all you are doing is producing a product. And if you don’t hurt anyone doing so, what is the moral problem involved?
I have written before about Chegg (NYSE: CHGG), a publicly traded company whose platform exists ostensibly to provide online tutoring. Everyone on a college campus knows that the company exists solely to facilitate student cheating. Does the market know that? In a recent public disclosure, Chegg indicated that it had seen a slowdown in new users in March because of the adoption of ChatGPT. In the overnight hours after that announcement, the stock dropped 37 percent. The simplest explanation is that the market recognizes ChatGPT as a direct substitute for Chegg for cheating. In addition, ChatGPT is freely accessible and does not require a subscription. It is now time for Chegg to show, as people would politely say, product differentiation to support its share price. But cheating is a growth market. I would not be surprised to see Chegg's stock price stabilize.
The COVID world showed universities that were late adopters how easy it is to run a program online. The margins for these programs are attractive and, once you make the technology investment, there are not the same pressures to maintain a certain level of enrollment as there are with, say, a traditional MBA program. So, we are seeing an explosion of new online programs at all levels and across disciplines, and an increasing dependence on them to grow revenues for the university.
One thing these programs lack is any serious ability to constrain cheating, just as powerful tools come on board to enable it. Will the ability to detect generative AI catch up? Again, that is an open question. But universities, after making some early noise, have allowed Chegg to operate largely in peace. The threat to Chegg's existence is not a reinforcement of honesty, but a cheaper and easier way to go lower ethically.
I may just be a meme, an old man shaking his fist at a cloud. But my observation is that there has never been more error in the measure that is GPA. The proliferation of online programs will only accelerate that. And even the faculty who care deeply about honesty are disheartened. You cannot develop an honor council process robust enough to handle this.
At some point, all that will be left to do is what many modern corporations and accounting firms do—shake your head and factor in the cost of scandal. And that point is closer than you think.
Categories: Bottom Line Ethics