I've recently seen complaints from students online (across Reddit, ChatGPT, and Blind) saying they had been falsely accused of using generative AI when writing essays at their schools and universities. After seeing several of these, I decided to look into ZeroGPT (currently the leading tool used by academic organizations to crack down on generative AI cheating), and what I found was more than a little concerning.
Falsely Accused
Imagine you are an undergrad business major, looking forward to finishing out your senior year and preparing to take your first steps into the real world. After turning in an essay comparing and contrasting different software licensing models, you are informed that a new university tool has determined your essay was AI generated. Because of this, you are asked to stand in front of the university ethics committee and account for your misconduct.
The only problem is that you didn't use generative AI tools to create the essay. In fact, you didn't use AI tools during the writing process at all. But at this point, it is your word against that of a cutting-edge AI model which says, unequivocally, that you are guilty. No matter what you say, there is a good chance you will be expelled and your career will be destroyed before it has even begun.
Perhaps not so reliable?
Sadly, this is actually happening to students at this very moment. One of the leading tools for detecting AI-generated content is ZeroGPT. The problem is that this tool touts itself as a reliable solution for detecting AI-generated text.
Its creators claim to have rigorously tested the system, but simple experimentation quickly demonstrates just how unreliable the model can be. For example, when I pasted the Bill of Rights (the first 10 amendments to the US Constitution) into ZeroGPT, the result indicated that the text was 100% created by generative AI (screenshot included below).
With that kind of result, there are only two possible explanations. Either James Madison (who authored the Bill of Rights) was actually a time-traveling AI system, seeking to influence foundational government documents to alter the course of history...
...or, the ZeroGPT model is highly unreliable. While the first possibility is intriguing, I think most people would agree that a broken model is a more likely explanation.
Even the first paragraph of this blog post does not sit well with the ZeroGPT model, which suggests that it "may include parts generated by AI", with 44.51% of the content flagged.
So ZeroGPT is clearly broken. In truth, you can't entirely fault the creators for struggling to build an effective model for this purpose. The task is monumentally difficult: you are essentially asking a classification model to determine whether text was created by humans, or by highly advanced AI systems designed specifically to emulate human-written text. And ultimately, even the machine-written text is derivative of human writing.
Given the vast variability in human writing styles, this task is already proving nearly insurmountable (at least to perform consistently and reliably). And unfortunately, it will only become more difficult as Large Language Models grow in complexity and diversity.
The ZeroGPT model seems to evaluate unbiased, non-opinionated text as more likely to be AI generated, and opinionated articles as more likely to be human written (presumably because the leading language models do not offer opinions). Even more concerning, from an academic perspective, the model appears to conflate well-written, concise English with a higher confidence that the text was AI generated. As such, it is not hard to see how some highly proficient students could become the unfortunate collateral damage of an academic crackdown.
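ZeroGPT has not published how its detector actually works, but a common approach in this space is to measure how predictable a passage is to a language model and flag highly predictable text as machine-written (GPTZero, for instance, has publicly described scoring "perplexity"). The sketch below is a minimal illustration of that general idea, not ZeroGPT's actual method; the GPT-2 backbone and the threshold value are assumptions chosen purely for demonstration. It shows why formal, well-edited prose like the Bill of Rights can score as "AI": it is exactly the kind of text a language model finds most predictable.

```python
# Minimal sketch of a perplexity-based AI-text heuristic.
# NOT ZeroGPT's actual method (which is unpublished); the GPT-2
# backbone and the threshold below are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' GPT-2 is by the text; lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean
        # cross-entropy loss over its next-token predictions.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 40.0) -> bool:
    # Crude decision rule: highly predictable text gets flagged as AI.
    # Formal legal prose and concise, conventional academic writing
    # tend to score low perplexity, which is exactly the
    # false-positive failure mode described above.
    return perplexity(text) < threshold
```

Under a rule like this, the student who writes cleanly and conventionally is, by construction, the one most likely to be flagged.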
So regardless of your preferred analogy…
- There is no putting the genie back in the lamp.
- Pandora’s box cannot be re-sealed.
- The toothpaste ain’t going back in the tube.
…there is no going back to a simpler time when academic dishonesty could be reliably detected. Lacking other reasonable alternatives, the academic community is going to need to adapt and accept (or embrace) the use of generative AI in academic writing.
This is not a moral argument for or against the use of such tools. I do not have a strong opinion as to whether their use should be considered academic dishonesty. Rather, the academic community needs to accept these tools because any attempt to detect and penalize those who use them will result in a crazed witch hunt that inevitably targets and persecutes the wrong people.