Last November, when ChatGPT launched, many schools felt like they had been hit by an asteroid.
In the middle of an academic year, without warning, teachers were forced to confront a new, alien-seeming technology that allowed students to write college-level essays, solve complex problems, and pass standardized tests.
Some schools responded — unwisely, I argued at the time — by banning ChatGPT and similar tools. But those bans don’t work, because students can simply use the tools at home on their own phones and computers. And as the school year has progressed and more apps have incorporated generative AI — a category that includes tools like ChatGPT, Bing, Bard, and others — many schools have quietly rolled back their restrictions.
So far this school year, I’ve spoken with many K-12 teachers, school administrators, and university faculty members about their views on AI today. There is a lot of confusion and panic, but also a lot of curiosity and excitement. Above all, teachers want to know: How can we use this stuff to help students learn, instead of just trying to catch them cheating?
I’m a tech columnist, not a teacher, and I don’t have all the answers, especially when it comes to the long-term effects of AI in education. But I can offer some basic short-term advice to schools trying to figure out how to deal with AI this semester.
First, I encourage teachers, especially in high schools and universities, to assume that 100 percent of their students are using ChatGPT and other generative AI tools on every assignment and lesson, unless they are being supervised inside a school building.
That may not be entirely true at most schools. Some students don’t use AI because they have ethical objections to it, because it isn’t useful for their specific assignments, because they lack access to the tools, or because they fear getting caught.
However, the assumption that everyone is using AI outside the classroom may be closer to the truth than many teachers realize. (“You have no idea how much we’re using ChatGPT,” read the headline of a recent essay by a Columbia University student in The Chronicle of Higher Education.) And it’s a useful shortcut for teachers trying to figure out how to adapt their methods. Why assign a take-home test, or a paper on “Jane Eyre,” if everyone in the class — except perhaps the strictest rule-followers — will use AI to complete it? If you know that ChatGPT is as ubiquitous among your students as Instagram and Snapchat, why wouldn’t you switch to in-class tests, handwritten essays, and group assignments?
Second, schools should stop relying on AI detection programs to catch cheaters. There are dozens of these tools on the market now, all claiming to detect AI-generated writing, and none of them work reliably well. They generate lots of false positives, and they are easily fooled by techniques like paraphrasing. Don’t believe me? Just ask OpenAI, the maker of ChatGPT, which this year discontinued its own AI text-detection tool because of its “low rate of accuracy.”
In the future, AI companies may be able to label their models’ outputs to make them easier to detect — a practice known as “watermarking” — or better AI detection tools may emerge. But for now, most AI-generated text should be considered undetectable, and schools should invest their time (and their technology budgets) elsewhere.
My third piece of advice — and the one that will get me the most angry emails — is that teachers should focus less on warning students about the shortcomings of generative AI and more on figuring out what the technology does well.
Last year, many schools tried to scare students off by warning that tools like ChatGPT were unreliable, often gave wrong answers, and produced generic prose. Those criticisms, while true of early AI chatbots, are less true of the updated models, and savvy students are figuring out how to get better results by giving the models more sophisticated prompts.
As a result, students at many schools are ahead of their instructors in understanding what generative AI can do when used well. Warnings about flawed AI systems issued last year may ring hollow this year, now that GPT-4 is capable of earning passing grades at Harvard.
Alex Kotran, chief executive of the AI Education Project, a nonprofit that helps schools adopt AI, told me that teachers should spend time using generative AI themselves, to appreciate how capable it has become — and how quickly it is improving.
“For most people, ChatGPT is still a party trick,” he said. “If you don’t really appreciate how profound this tool is, you’re not going to take all the other steps that are necessary.”
There are resources for educators who want to get up to speed on AI in a hurry. Mr. Kotran’s organization, along with the International Society for Technology in Education, provides a series of AI-focused lesson plans for teachers. Some teachers have also begun compiling resources for their colleagues, such as a website developed by faculty at Gettysburg College that offers practical AI advice.
However, in my experience, there is no substitute for hands-on practice. That’s why I suggest that teachers start experimenting with ChatGPT and other generative AI tools themselves, with the goal of becoming as fluent in the technology as many of their students already are.
My final piece of advice for schools intimidated by generative AI is this: Treat this year — the first full academic year after ChatGPT’s release — as a learning experience, and don’t expect to get everything right.
There are many ways AI could help reshape classrooms. Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, believes the technology will lead more teachers to adopt a “flipped classroom” model — where students absorb material outside of class and practice it in class — which has the advantage of being more resistant to AI cheating. Other teachers I spoke with said they were experimenting with turning generative AI into a classroom collaborator, or into a way for students to practice skills at home with the help of a personalized AI tutor.
Some of these experiments won’t work. Others will. That’s fine. We are all still getting our bearings with this strange new technology, and the occasional stumble is to be expected.
But students need guidance in navigating generative AI, and schools that treat it as a passing fad — or an enemy to be defeated — will miss an opportunity to help them.
“A lot of things are going to break,” Mr. Mollick said. “That’s why we have to decide what we’re doing, rather than just fighting a rear-guard action against AI.”
Kevin Roose is a technology columnist and the author of “Futureproof: 9 Rules for Humans in the Age of Automation.”