Inevitability is a common rhetorical ploy in education. To Thomas Edison, film would inevitably replace textbooks. In the 1960s, B.F. Skinner said that teaching machines were inevitably going to replace teachers. In 2012, massive open online courses (MOOCs) were inevitably going to take over higher education. Pick your initiative, and you can almost always find someone who channels the Borg: resistance is futile.
The Borg trope is wrong for many reasons. Few things are anywhere close to inevitable other than the heat death of the universe billions of years from now, or our personal mortality. Inevitability is often rhetorical cover for what advocates cannot defend on the merits, especially in education. When someone tells me that resistance is futile, I look for clues to other dynamics: who bears the risk, who is actively shifting that risk onto others, what might magnify inequalities, and where is the money?
As a historian of education, I find that some of the specifics resonate with other times and places, and in my independent study for undergraduates, I teach students how to look for what I call issue genealogies and persistent tensions. But you do not have to be a historian or have deep historical training to act like a good journalist and keep in mind a reporter’s 5 W’s: who, what, when, where, why… and how. Keep an eye, an ear, and a nose alert for long cons, including the long cons that credulous peers might be wishing away.
Today, large language models have plenty of Borg-like advocates. I know of a number of niche applications for large language models; generating cases to study is a perfect use of a text-confabulation technology, for example. But the specifics matter, especially with issues such as security risks and how students and teachers are most likely to use a tool. I will let others poke holes in the various financial machinations behind LLM companies. But we can look for bagmen among the vendors trying to sell “AI” to schools and colleges, parents and teachers and researchers and students. And we can look for the Borgias, the modern equivalents of the scheming Italian family, in the charismatic-seeming con artists who seek to become the royalty du jour of AI in education.
Where can risks shift? Terms of service, long-term contracts, and shell companies are all ways to shift operating risks from vultures to victims, in the business sense. But there are other risks, and the opportunity costs of adopting any bleeding-edge tech are high: with no evaluation of the benefits of most current LLM-based applications, there is no guarantee that the time you spend learning a tool will provide any benefit… while other ways of spending that time have many well-documented benefits. Platform and company instability is another risk: professional development spent on a specific tool may be wasted if the company evaporates in 12 months.
Fads exist, especially in education technology. Do not believe the Borg-like rhetoric; each of us has the capacity to be skeptical and look for the bagmen and the Borgias.