When it comes to applying artificial intelligence (AI) to our daily lives, the possibilities can seem frighteningly endless. Until legislation regulates its use, AI has as much potential for scams and plagiarism as it does for efficiency. If you're concerned that the new tech will become indistinguishable from human-made content, you may be comforted by a new technique teachers are trying in order to combat ChatGPT and other generative AI.
Duke University PhD student Chris Howell is taking a proactive approach to how his undergrad students interact with artificial intelligence. It's rare for an instructor to require their students to use ChatGPT -- but that's exactly what Howell did.
Howell asked his students to “generate an essay using a prompt I gave them,” and then their job was to “grade it.” The results? “All 63 essays had hallucinated information, fake quotes, fake sources, or real sources misunderstood and mischaracterized.”
But the “biggest takeaway,” Howell says, is that “the students learned that (ChatGPT) isn’t fully reliable. Before (the assignment), many of them were under the impression it was always right … Probably 50% of them were unaware that (ChatGPT) could do this.”
His students’ reactions also raise concerns for the future. “All of (the students) expressed fears … about mental atrophy and the possibility for misinformation (and) fake news.” Some students’ perspectives paint a dark picture of AI used without the oversight of a critical mind.
One student said of the project, "I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is."