Schools are “bewildered” by advances in AI and do not trust the companies behind the tech to provide adequate regulation, headteachers have warned.
Leading figures from the UK’s education sector said systems like OpenAI’s ChatGPT and Google’s Bard were developing “far too quickly” and that guidance on how classrooms should adapt was not keeping pace.
In a letter to The Times newspaper, signed by more than 60 education figures, the signatories said ministers had not proved “capable or willing” to provide the “guidance and counsel” schools need.
They wrote: “We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools.
“Neither in the past has government shown itself capable or willing to do so.”
They added: “The truth is that AI is moving far too quickly for government or parliament alone to provide the real-time advice that schools need.”
The headteachers behind the letter, led by Epsom College’s Sir Anthony Seldon, said they plan to set up their own “cross-sector body” of teachers from their schools, guided by digital and AI experts, to provide advice on which AI developments could be beneficial or damaging.
The body would work to ensure systems like ChatGPT serve the interests of pupils rather than those of tech companies.
Some workplaces, schools, and universities in other countries have already banned generative AI like ChatGPT.
While these tools have wowed users with their ability to pass exams, fix computer bugs, and write speeches, they have also been shown to generate incorrect or offensive answers.
The letter in The Times comes after AI pioneer Professor Stuart Russell warned “the stakes couldn’t be higher” as governments grapple with how best to approach regulation.
He said: “How do you maintain power over entities more powerful than you – forever?”
“If you don’t have an answer, then stop doing the research. It’s as simple as that.”