Last year, my regular co-authors – John Morris (Auburn), Marty Mayer (UNC Pembroke) and Rob Kenter (Center for Policing Equity) – and I published a little book on policymaking. The book took a year or so to gestate, as books will, but the dictates of academia also point to a real advantage of doing one: books spawn articles, and articles spawn fame and glory for you.
Sure enough, after the book hit the stands, we jumped into extending, updating and expanding its reach. We recently presented one of these papers at the Southern Political Science Association’s annual meeting, and another is in the works. To pick up sorely needed extra fuel, we added another author, Joe Aistrup (also of Auburn), to the mix. We were arguing, editing each other, slashing away – having a fine, messy, riotous time of it. But it’s exhausting – especially when you are also teaching a full class schedule.
I’ve never used ChatGPT. It’s an Artificial Intelligence (AI) program that allegedly produces first-rate essays when you plug in a few terms. It does your writing for you, if you are too busy (or lazy) to do it yourself. The painful hours and days and Zoom meetings and conferencing of the last year make me wonder: Could we have saved all this trouble and simply had a computer grind it all out in a few seconds?
AI has hit the big time, with companies investing like crazy, stock prices going up and faculty all over the nation in a panic not seen since journals went digital. The fear, of course, is cheating – that students will make a beeline to the apps, dump their homework into an algorithm, and watch Netflix until it’s done.
Lawrence Shapiro, writing in the Washington Post, suggested that since AI is essentially a tool, perhaps it should be explored as such. “Given that chatbots are not going to fade away, my students might as well learn how to refine their products for whatever uses the future holds,” he wrote. “… let’s devise ways to make chatbots work for all of us.”
The problem is that intelligent as these apps may be, they are artificial, and they lack the one thing absolutely necessary in any sort of academic writing: curiosity. They’re robots, not people, and they do what they’re asked to do. But they have no consciousness, and therefore no inquisitiveness. This rules them out as a threat in political science, I think.
Anyone who observes American politics with any kind of a keen eye and is not the least bit curious as to why things work the way they do is a machine. Computers cannot have imagination – and believe me, understanding the political ways of the USA requires that in big doses.
AI is certainly capable of writing – but it’s what’s written that is the biggest tell. It cannot hypothesize without human help, nor can it weigh in on data without instigation and directions. It seems to be most successful when given a clear prompt to consult and compare several established ideas. It fails, however, at coming up with anything truly original of its own. Coming up with a new illumination as to why Santos was elected, or a new explanation of what got Napoleon out of bed in the morning, is safely beyond the reach of AI – when it is called away from the already known or already suggested, AI fails. Solving puzzles is easy for a chip-mind – creating them is another matter.

As long as students are learning policy analysis and seeking imaginative, creative solutions to the erratic human problems of real people, I think we’re clear of the AI threat.
When it comes to politics, however, enslaving a computer to do your homework is about as useful as employing a Roomba.
Bruce Anderson is the Dr. Sarah D. and L. Kirk McKay Jr. Endowed Chair in American History, Government, and Civics and Miller Distinguished Professor of Political Science at Florida Southern College. He is also a columnist for The Ledger.