Should Educators Use AI Detection Tools?
Last week, I read the same answer to an essay question from three students in a row. Well, not exactly the same answer, but darn close. The three answers used the same language, the same number of sentences, and the same general idea. I'm guessing that happened because the students used generative artificial intelligence to write these short answers. I had added essay questions to all my quizzes to ensure students demonstrated their understanding of the material in their own words. Now I realize those essay questions no longer achieve that goal, so it's time to rethink.
One option when you suspect students are using AI to create their work is to use an AI detection tool like Turnitin's AI Content Detector (currently included with a Turnitin subscription) or GPTZero, which was developed to sniff out the use of ChatGPT. This blog post explores whether these detection tools actually solve the problem.
A little recent history in our own backyard is helpful. Last year, Maricopa County Community College District (MCCCD) assembled an Artificial Intelligence (AI) workgroup to provide thought leadership, review current policies, and offer guidance on using AI across MCCCD. The workgroup delved into the research and published Syllabus Statements Regarding Generative Artificial Intelligence (AI) in Spring 2023. This year, it tackled the topic of AI detection tools and produced an Artificial Intelligence and Academic Integrity presentation that offers faculty guidance on using those tools.
The guidance offered in this presentation is to use AI detection tools as only one component of examining student work for authenticity, not as a yes-or-no answer to whether the student used AI on the assessment. AI detection tools aren't perfect, and like cancer screenings or PCR tests, they can produce false positives. A false positive happens when the detection tool says the student used generative AI, but they really didn't. AI detectors can only give a statistical likelihood that the text was AI-generated. To make things even more confusing, writing assistance tools like Grammarly use AI, so text polished with them can test positive for AI use. The research also tells us that false positive rates are higher for non-native English speakers.
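To see why a positive flag can still be wrong surprisingly often, here is a quick back-of-the-envelope sketch. The numbers are made up for illustration (no detector publishes its rates this way, and real rates vary by tool and by student population); the point is only that a "likelihood" is not a verdict:

```python
# Illustrative only: made-up numbers, not from any real detection tool.
# Shows why a "flagged as AI" result is a likelihood, not a verdict.

def chance_flag_is_correct(base_rate, true_positive_rate, false_positive_rate):
    """Probability a flagged essay was actually AI-written (Bayes' theorem)."""
    flagged_and_ai = base_rate * true_positive_rate
    flagged_and_human = (1 - base_rate) * false_positive_rate
    return flagged_and_ai / (flagged_and_ai + flagged_and_human)

# Suppose 10% of submissions are AI-written, the detector catches 90% of them,
# and it falsely flags 5% of human-written work.
print(chance_flag_is_correct(0.10, 0.90, 0.05))  # ~0.67: roughly 1 in 3 flags would be wrong
```

Even with those fairly generous assumptions, about a third of flagged essays would belong to students who did nothing wrong, which is exactly why the workgroup advises against treating the tool's output as an answer.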
Due to these limitations, we must give students the benefit of the doubt, ask them to describe their process, clarify class expectations, and discuss consequences. It is wise to treat an AI detection tool's result as only one data point in assessing student work and to use other methods to make a determination before approaching a student. The MCCCD workgroup offers these suggestions for other ways to detect AI-generated content:
- Compare the work to prior assignments (tone, voice, diction, vocabulary)
- Observe inconsistent font and formatting (indicating inelegant copy and paste)
- Converse with the student about the content to see if they are familiar with the vocabulary or concepts contained in their work
- Notice stock AI language: “As a large language model, I am not able to form my own opinions. However, I can provide you with information and perspectives from a variety of sources to help you form your own opinion.” (Sometimes students copy this part too!)
Do you have wisdom to share on this topic? Please comment below!