Schools should make pupils do some of their coursework “in class under direct supervision” amid cheating fears after the emergence of artificial intelligence (AI) systems, exam boards have said.
The Joint Council for Qualifications (JCQ), which represents the UK’s major exam boards, has published guidance for teachers and assessors on “protecting the integrity of qualifications” in the context of AI use.
Schools should make pupils aware of the risks of using AI and the possible consequences of using it “inappropriately” in assessment, the guidance says.
It adds: “Students who misuse AI such that the work they submit for assessment is not their own will have committed malpractice, in accordance with JCQ regulations, and may attract severe sanctions.”
ChatGPT is a form of generative AI that has come to prominence in recent months, after a version was released to the public last year.
It can respond to questions in a human-like manner and understand the context of follow-up queries, much as in a human conversation, and it can compose essays on request – sparking fears it could be used by pupils and students to complete assignments.
The guidance for teachers and assessors, which highlights a number of AI chatbots including ChatGPT, suggests “allocating time for sufficient portions of work to be done in class under direct supervision to allow the teacher to authenticate each student’s whole work with confidence”.
It comes as MPs on the Commons Science and Technology Committee were warned of the risks that ChatGPT poses to the assessment system.
Daisy Christodoulou, director of education at No More Marking, told MPs on Wednesday: “I do think that ChatGPT has huge implications for continuous assessment and coursework. I think it’s very hard to see how that continues.”
She added: “It is capable of producing original, very hard to detect, relatively high quality responses to any kind of question. They won’t be perfect, but they’ll be good enough for a lot of students.
“I think that means that uncontrolled assessments – where you’re not sure how the student has produced that work – become very, very problematic.
“I do think we have to be thinking that there is a value in having those traditional exams.”
It comes after Ofqual’s chief regulator Jo Saxton said ChatGPT had made traditional exam conditions “more important than ever”.
Speaking to headteachers at a conference earlier this month, Dr Saxton said that, following the emergence of AI systems, she would make pupils complete their coursework and essays under exam conditions if she were a school leader.
Ms Christodoulou, who used to be head of assessment at Ark Schools, added that she had heard of teachers using ChatGPT to create questions for students at the end of lessons.
She warned MPs: “This is where it gets more problematic. The more open you make it and the more you give it more latitude, it does make a lot of errors, it makes an awful lot of errors.”
On ChatGPT, Ms Christodoulou said: “It’s very good at basically cheating, giving students essays that are not their own. It is very good at that and it can do that at scale and rapidly.
“For the more socially useful things in education, such as creating questions and marking, it’s not as good as we’d hoped.”
Meanwhile, Education Secretary Gillian Keegan has said that AI has the “power to transform” teachers’ day-to-day work.
Speaking at the Bett show in London on Wednesday, Ms Keegan suggested the use of AI in education could significantly reduce tasks that “drain teachers’ time” – such as lesson planning and marking.
A document setting out the Department for Education’s (DfE) position on the use of generative AI – including ChatGPT or Google Bard – in the education sector calls on schools, colleges and universities to continue “to take reasonable steps where applicable to prevent malpractice”.
It adds: “Schools and colleges may wish to review homework policies, to consider the approach to homework and other forms of unsupervised study as necessary to account for the availability of generative AI.”