By Evelyn Lucado
Opinion Editor
The unprecedented rise of artificial intelligence is rapidly changing the landscape of higher education, leaving both professors and students at an impasse.
On one hand, the use of AI has led to a sharp increase in incidents of academic dishonesty. On the other, the AI detectors some professors use to fight dishonesty are unreliable and often produce false positives.
While there are avenues for AI to be beneficial in the classroom, first there must be balance and communication between students and professors regarding their use of AI.
According to The Chronicle of Higher Education, many colleges are struggling to find solutions to this problem. While some professors have adopted AI into their curriculum and classroom, others have not.
AI has been around for a long time, but generative AI and large language models such as ChatGPT and Grammarly have changed the game and disrupted classrooms.
“When I used to read good writing like that, my heart would leap with joy. The students are getting it! I’d think. Now my heart sinks because I know those sentences/paragraphs/whole essays are probably computer-generated,” professor Lisa Liberman wrote in The Chronicle of Higher Education.
Relying on AI to finish assignments undermines the purpose of attending class. When students turn to AI for their work, they forfeit the opportunity to engage in the learning process and miss out on valuable educational experiences.
Using AI for assignments also means that students miss out on the essential skill-building and critical thinking that completing the work themselves would develop.
Some professors have turned to AI detection software in an attempt to combat dishonesty. However, these programs are not always reliable and run the risk of false positives. An accusation of academic dishonesty holds the potential to be detrimental to a student’s college career.
As explored in a recent article from The Washington Post, students have become fearful of a false positive. When students cheat using AI, there is no primary document to point to for evidence of dishonesty and plagiarism, which makes proving guilt or innocence difficult.
Inevitably, there will be students who cheat using AI, and there will be false positives that flag students who did not. Even so, the risk of wrongful accusations can be diminished with honesty, open communication and proper use of AI detection software.
According to Turnitin, the rate of false positives given by their AI detection software is less than one percent. However, even a small margin of error leaves room for innocent students to be punished when they did nothing wrong.
As written on Turnitin’s official website, “We’d like to emphasize that Turnitin does not make a determination of misconduct even in the space of text similarity; rather, we provide data for educators to make an informed decision based on their academic and institutional policies.”
As AI detection scores are not infallible, they should be treated as an indication of possible dishonesty rather than a verdict.
When professors using AI detection software are aware and honest regarding the possibility of false positives, it opens the door for discussion and trust with students.
In turn, students need to be honest about their use of AI. Students should follow their professor’s policies regarding the use of AI in the completion of coursework.
According to The Washington Post, there are ways to combat a false accusation. Writing programs such as Microsoft Word and Google Docs have version history functions that track changes made to a file, which can help prove the originality of an assignment.
Both professors and students share the responsibility of creating open and honest lines of communication. Adjusting to the new world of AI in the classroom means building trust and using AI in accordance with university policies.
Photo courtesy of Wikimedia Commons.
Photo Caption: AI is rapidly changing the world of education.