Professors are using ChatGPT detector tools to accuse students of cheating. But what if the software is wrong?

When William Quarterman signed on to his student web portal to check the results of his history exam, he was shocked to see a cheating accusation from his professor attached to it.

His professor had used artificial intelligence detection software, including a tool called GPTZero, to check whether the college senior had tapped AI to give his take-home midterm exam a boost, after noticing that his exam answers “(bore) little resemblance to the questions,” according to school records Quarterman provided to USA TODAY.

The professor was right, according to the software.

She gave him a failing grade and a referral to the University of California, Davis’ Office of Student Support and Judicial Affairs for academic dishonesty.

Quarterman denied he had any help from AI but was asked to speak with the university’s honor court in an experience he said caused him to have “full-blown panic attacks.” He eventually was cleared of the accusations.

A student on the Indiana University campus in October 2021 in Bloomington.

Higher education officials across the nation are struggling to address how to uncover cheating and avoid making false accusations of cheating as students more frequently use AI for their assignments and AI-driven detection software proliferates.

Many companies developing plagiarism detection software claim they can detect when students use AI to complete coursework, while acknowledging that the software is sometimes wrong.

Education technology experts said educators should be wary of the quickly evolving nature of cheating detection software.

Universities should steer clear of elevating these cases to disciplinary action right now, said Richard Culatta, CEO of the International Society for Technology in Education. Instead, educators can ask a student to show their work before accusing someone of using AI for an assignment.

“If universities think they’re going to try to catch it in the act, they’re going to be overwhelmed with mediation,” he said. “But we should implement guidelines: How are we citing information that comes from AI?”

Duke student fans cheer before an NCAA college basketball game against North Carolina on Feb. 4.

In another publicized case, a Washington Post technology columnist found that Turnitin’s new AI detection tool falsely flagged several papers written by California high schoolers as AI-generated.

Melissa Lutz Blouin, a UC Davis spokeswoman, said school officials are helping professors “understand how AI tools can support student learning, as well as their potential misuse.” Stacy Fahrenthold, the professor who questioned Quarterman, declined to comment on the case, citing student privacy laws.

A student accused of cheating, then found innocent

To appeal his professor’s accusations to university officials, Quarterman shared a Google document history of his exam writing that showed proof he didn’t use AI and a slew of research on the fallibility of GPTZero and other AI detection tools, according to school records.

In a March 16 letter to the university appealing the professor’s accusation – provided to USA TODAY by Quarterman’s father – the student said that in his professor’s feedback on his exam, Fahrenthold wrote in late February: “William, unfortunately it appears as though this exam is plagiarized. The answer to Q3, in particular, is drawn from ChatGPT or similar AI software, and accordingly, drawn from a variety of internet sources without attribution or citation. The consequences for submitting plagiarized work in this course is a grade of 0/20, and a citation to OSSJA for the issue of academic integrity.”

About a month after the accusations, on March 24, the university dropped its case against Quarterman. In a separate letter provided to USA TODAY by Quarterman’s father, Marilyn Derby, an associate director with the University’s Office of Student Support and Judicial Affairs, wrote to Quarterman: “After talking with you, talking with your instructor, and doing my own research into indicators of AI-generated text, I believe you most likely wrote the text you submitted for your midterm. In fact, we have no reliable evidence to the contrary.”

Derby said the university is reviewing several similar reports.

“At the time your instructor submitted the report, we were just beginning our learning process of how to differentiate AI-generated text from human-generated text. We had a number of professors who submitted reports based on the output of GPTZero. As we learned of the fallibility of these tools, we shared information with instructors,” she wrote. “It’s clear that it will be an ongoing challenge to stay current with the implications of this technology.”

The university is advising professors to use “a variety of tools, along with our own analysis of the student’s work, to reach a preponderance of evidence rather than relying on a single tool,” and the school will be evaluating Turnitin AI detection’s reliability, said Lutz Blouin, the UC Davis spokeswoman.

How reliable is AI detection?

Quarterman and his family have become activists against schools using AI detection to find cheaters. Quarterman’s sister has compiled a hefty database of written works passed through several AI detection platforms, many of which show false positives.

“Obviously, there’s a broader issue here,” said his father, John Quarterman.

The creators behind AI detection tools developed by companies including OpenAI, Turnitin and GPTZero have warned educators about possible inaccuracies of the software.

“We really don’t want anyone making definitive academic decisions out of our detector,” said Edward Tian, creator of the AI detection tool GPTZero. “The nature of AI-generated content is changing constantly.”

Tian said GPTZero is pivoting from its former artificial intelligence detection model, and its next version will not be detecting AI “but highlighting what’s most human.”

The logo for OpenAI, the maker of ChatGPT

The logo for OpenAI, the maker of ChatGPT

OpenAI also warned users its AI detection tool isn’t fully reliable when the company first released it in late January.

Turnitin Chief Product Officer Annie Chechitelli also advised the 10,700 colleges and universities and 2.1 million educators who have access to the company’s AI writing detection capability through existing licenses to be aware of its drawbacks.

Because the rate of false positives is not zero, educators should use “their professional judgment, knowledge of your students, and the specific context surrounding the assignment” before outright accusing a student of cheating, Chechitelli wrote in a blog post in March.
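Even a small false-positive rate adds up at scale. As a rough, hypothetical illustration (the rate and essay volume below are assumptions for the sake of the arithmetic, not figures published by Turnitin):

```python
# Hypothetical base-rate illustration: these numbers are assumptions,
# not Turnitin's published accuracy figures.
false_positive_rate = 0.01   # assume 1% of human-written essays get flagged
essays_submitted = 100_000   # assume one large university system's essays per term

# Expected number of honest students wrongly flagged as AI users
expected_false_flags = int(essays_submitted * false_positive_rate)
print(expected_false_flags)  # prints 1000
```

Even under these modest assumptions, a detector that is “99% accurate” on human writing would wrongly implicate a thousand students per term, which is why Chechitelli and others urge human judgment before any accusation.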

Should professors and students be worried?

Education technology experts are adamant that schools need to embrace AI. And educators should learn to work with it rather than banning or fearing it, especially as it quickly evolves, they told USA TODAY.

Many schools already have codes of conduct that prohibit cheating and plagiarism. Another way schools can create further boundaries around the use of AI is by creating digital contracts for students to sign so they are aware of proper use, Culatta said.

And, Culatta said, schools and educators could develop policies around citing AI when appropriate, make assessments more rigorous to avoid the possibilities of plagiarism, and determine the right questions to ask when a student is suspected of cheating.

Looking back, Quarterman said, students should write their assignments in Google documents, as he did, or in another word processor that tracks revision history, so they have evidence if they end up in the same situation.

“Be ready to be accused of and being fingered as using AI.”

Contact Kayla Jimenez at [email protected]. Follow her on Twitter at @kaylajjimenez.

This article originally appeared on USA TODAY: How AI detection tool spawned a false cheating case at UC Davis