
ChatGPT and the Academic Dishonesty Problem

I've recently seen complaints from students online (across Reddit, ChatGPT, and Blind) saying that they had been falsely accused of using generative AI to write essays at their schools and universities. After seeing several of these, I decided to look into ZeroGPT (the top tool currently being used by academic organizations to crack down on generative AI cheating), and what I found was more than a little concerning.

Falsely Accused

Imagine you are an undergrad business major, looking forward to finishing out your senior year and preparing to take your first steps into the real world. After turning in an essay comparing and contrasting different software licensing models, you are informed that a new university tool has determined that your essay was AI generated. Because of this, you are asked to stand in front of the university ethics committee and account for your misconduct.

The only problem is that you didn’t use generative AI tools to create the essay. In fact, you didn’t use AI tools at any point in the writing process. But at this point, it is your word against that of a cutting-edge AI model which says, unequivocally, that you are guilty. No matter what you say, there is a good chance that you will be expelled and your career destroyed before it has even begun.

Perhaps Not So Reliable?

Sadly, this is actually happening to students at this very moment. One of the leading tools for detecting AI-generated content is ZeroGPT. The problem is that, while this tool touts itself as a reliable solution for detecting AI-generated text, it is anything but.

Its creators claim to have rigorously tested the system, but simple experimentation quickly demonstrates just how unreliable the model can be. For example, when I pasted the Bill of Rights (the first 10 amendments to the US Constitution) into ZeroGPT, the result indicated that the text was 100% created by generative AI (screenshot included below).

With that kind of result, there are only two possible explanations. Either James Madison (who authored the Bill of Rights) was actually a time-traveling AI system, seeking to influence foundational government documents to alter the course of history...


...or, the ZeroGPT model is highly unreliable. While the first possibility is intriguing, I think most people would agree that a broken model is a more likely explanation.

Even the first paragraph of this blog post does not sit well with the ZeroGPT model, which suggests that it "may include parts generated by AI" and flags 44.51% of the content.
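For anyone who wants to reproduce these tests programmatically rather than through the web form, here is a minimal sketch that submits text to ZeroGPT for scoring. Note that ZeroGPT does not publish a stable API contract: the endpoint URL and payload field below are assumptions inferred from the public site, so verify them against your browser's network traffic before relying on them.

# Minimal reproduction sketch. ASSUMPTION: the endpoint and payload
# format below are inferred from the zerogpt.com web form and are not
# a documented, guaranteed API.
import requests

DETECT_URL = "https://api.zerogpt.com/api/detect/detectText"  # assumed endpoint

def check_text(text: str) -> dict:
    """Submit text for AI-detection scoring and return the raw JSON response."""
    response = requests.post(DETECT_URL, json={"input_text": text}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    with open("bill_of_rights.txt") as f:
        result = check_text(f.read())
    # The response is expected to include the percentage of text flagged as AI.
    print(result)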

So ZeroGPT is clearly broken. In truth, you can’t even fault the creators for struggling to build an effective model for this task, because the task is monumentally difficult. You are essentially asking a classification model to determine whether text was created by a human, or by a highly advanced AI system designed specifically to emulate human-written text. Ultimately, even the machine-written text is derivative of human writing.

Given the vast variability in human writing styles, this task is already proving nearly insurmountable (at least to perform consistently and reliably). And unfortunately, it will only become more difficult as the complexity and diversity of Large Language Models increase.

The ZeroGPT model seems to evaluate unbiased, non-opinionated text as more likely to be AI generated, and opinionated articles as more likely to be human generated (presumably because the leading language models are designed not to offer opinions). Even more concerning from an academic perspective, the model seems to conflate well-written, concise English with a higher confidence that the text was AI generated. As such, it is not hard to see how some highly proficient students could become the unfortunate collateral damage of an academic crackdown.
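ZeroGPT has not published how its classifier actually works, but detectors in this family commonly score text by its perplexity under a reference language model: the more predictable the text, the more "machine-like" it is judged to be. The toy sketch below (using GPT-2 via the Hugging Face transformers library, with an arbitrary threshold of my own choosing; this illustrates the general technique, not ZeroGPT's actual method) shows why such a heuristic punishes polished writing: concise, formulaic prose is highly predictable and scores low, while casual, idiosyncratic prose scores high.

# Toy illustration of perplexity-based AI detection. This is NOT
# ZeroGPT's actual algorithm (which is unpublished); the threshold is
# arbitrary and for demonstration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the perplexity of `text` under GPT-2 (lower = more predictable)."""
    encodings = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the average
        # cross-entropy loss over the sequence; exp(loss) is perplexity.
        loss = model(encodings.input_ids, labels=encodings.input_ids).loss
    return float(torch.exp(loss))

THRESHOLD = 50.0  # arbitrary illustrative cutoff, not a calibrated value

samples = [
    "Congress shall make no law respecting an establishment of religion.",
    "my cat has basically claimed the couch now lol, we just rent space from him",
]
for sample in samples:
    score = perplexity(sample)
    verdict = "flagged as AI" if score < THRESHOLD else "looks human"
    print(f"{score:8.1f}  {verdict}  | {sample}")

Low perplexity is evidence of predictability, not of authorship, which is exactly why a highly proficient writer producing clean, textbook prose can trip a detector built this way.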

So regardless of your preferred analogy…

  • There is no putting the genie back in the lamp.
  • Pandora’s box cannot be re-sealed.
  • The toothpaste ain’t going back in the tube.

…there is no going back to a simpler time when academic dishonesty could be reliably detected. In the absence of other reasonable alternatives, the academic community is going to need to adapt and accept (or embrace) the use of generative AI for academic writing.

This is not a moral argument for or against the use of such tools; I do not have a strong opinion as to whether their use should be considered academic dishonesty. Rather, the academic community needs to accept these tools because any attempt to detect and penalize their use will result in a crazed witch hunt, one that will inevitably target and persecute the wrong people.
