It was a scenario worthy of a TV sitcom: In making a case to the state board of education for limits on cellphones in Alaska schools, state education commissioner and former Anchorage School District superintendent Deena Bishop leaned heavily on an AI text generator — and failed to remove the fabricated citations it added to support her arguments. If she were a high school student, Bishop would have received an F on the assignment and a stern lecture about doing her own work. Embarrassingly, our top education executive is undereducated on the proper use of AI, and we shouldn’t send our students into the world similarly unequipped.
The AI debacle was doubly unfortunate because it distracted from two more worthwhile discussions that we should be having around education and technology — first, the topic Bishop enlisted AI to help her tackle: limits on cellphones in schools. It’s ironic that Bishop’s AI helper had to fabricate its citations, because there is ample real-world data indicating that limits on cellphones in schools are beneficial to student success and social-emotional well-being. Banning the use of cellphones on school grounds is strongly correlated with higher math scores and is broadly supported by teachers who witness the distracting effects of phones on their pupils. The state board of education shouldn’t let Bishop’s misstep distract it from the serious issue at hand — and the potential to reverse some of the distractions that have crept into the classroom.
The other unfortunate aspect of Bishop’s citation-fabrication faux pas is that it displays a lack of maturity in the ways we use artificial intelligence — even at the highest levels of our government. Although the temptation has been strong, particularly among educators, to institute a blanket ban on the use of AI in schoolwork, this is not a technology that is going away; on the contrary, we can expect it to become more deeply embedded in our day-to-day lives in the years to come.
With that in mind, the solution cannot be to impose some sort of monastic moratorium on the technology, but rather to integrate it thoughtfully into the curriculum and teach students how to use it responsibly. In the face of such a game-changing development, the impulse to panic is powerful, and — especially in schools — we’re wary of doing things differently than the way we ourselves were taught. But just as calculators didn’t give rise to students who couldn’t do math, the advent of language- and image-generation tools, deployed wisely, won’t result in students being unable to think critically.
It’s incumbent on us, as parents and educators, to work out ways that AI can be a valuable teaching tool rather than a crutch used solely to save time and reduce effort. Consider, as just one example, how enlisting a chatbot as a partner in a Socratic dialogue about a lesson topic could lead students to insights that wouldn’t otherwise be possible, given the constraints on a teacher’s time in any given class period.
The road between where we are now and the point at which AI will be seamlessly integrated into our society will surely be a bumpy one, but it will only be bumpier if we don’t focus on using our technological tools correctly. We should be thoughtful about the ways we employ AI to help us, ensuring that we’re not pawning off our work on it but rather using it to expand our own horizons, synthesize data we might not otherwise have considered, and provide a springboard for solving our problems creatively, a valuable human skill.
And, whether the person using AI is a student creating an outline for an essay or an education commissioner looking to brief the state school board on policy, we would be wise to double-check what it tells us, lest we end up embarrassed by our naive trust that the friendly machine spitting out suggestions would never lead us astray. After all, who among us has never been told by our GPS driving assistant to turn down a road that didn’t exist?