In my social media feeds recently, I’ve seen debate over the decision by the American Federation of Teachers to partner with Microsoft, OpenAI, and Anthropic to open a National Academy for AI Instruction. The July 8 press release describes the goal as “a national model for AI-integrated curriculum and teaching that puts educators in the driver’s seat.” Many AFT members are upset with what they see as an ideological caving to the hype over an inherently bad technology. I’ve also seen other educators applauding the AFT for taking the initiative in an area where there has been too little active involvement by either of the national teachers unions.
We should not be surprised that either the AFT or the NEA would create a partnership like this, as the impulse draws on a long history in ed tech: teachers and their organizations should be in control. This is partly but not entirely tied to union activism; the NEA held to that principle before it began to act as a union in the late 1960s. In my archival research on the history of educational broadcasting, I have found plenty of newsletter pieces and other written material about the importance of teacher direction and control over instructional television.
With one important exception, discussed below, this history suggests that with generative AI the impulse will still be to put control in the hands of teachers. We should not be surprised that there are strong opinions among teachers (and university faculty) about the proper approach to take, including whether it is ethical to use generative AI in a classroom at all. If you are skeptical about both the appropriateness and the practical challenges of using generative AI, as with any educational technology, you will be teaching next to classrooms where colleagues both agree and disagree with you. And vice versa.
In addition, where districts or principals take strong positions, there will be teachers who oppose them, on both practical and principled grounds. Where there is heavy-handed imposition of generative AI, one should expect failure on multiple levels.
I see the AFT’s decision in that historical context: It is one plausible interpretation of the dictum, let teachers be in charge.
The issue that I see as an exception to that historical pattern is privacy. In both K-12 and higher education, there should be much greater awareness of the risks to student privacy, largely because of recent history; even so, I don't expect every school, college, or university to set appropriate guidelines.
For the most part, until what some folks call web 2.0, teachers' ability to make significant classroom decisions about educational technology meant the consequences were limited to classrooms, a ground-level version of educational decentralization in which such decisions rarely rose above the district level. (Television is a partial exception, and in the early era of educational TV, only Alabama created a statewide network. See Larry Cuban and Victoria Cain's books for more about teachers' uses of television for instruction.)
With venture capital-driven start-up culture targeting education in this century, individual teachers and districts were able to quickly make commitments that had long-term consequences for families. In many cases, start-ups recruited individual teachers to try out a service, and adoption by either individual teachers or districts could quickly lead to privacy violations as well as commitment to a technology whose affordances had some ideas of education baked in. ((I wouldn't call them philosophies of education, but maybe I'll approve crypto-philosophy. Pun/reference intended.))
With the head-spinning discussion and decision-making around large language models, districts now have several decades of experience from which leaders should know some of the issues to watch, such as the use of student work to train future models. At my university, some models come with contractual prohibitions on using interactions with students and staff for training purposes. This builds on corporations' insistence that their proprietary data likewise not be used for training. However, district and state leadership are likely to take widely varying approaches not only to contracting but to setting guardrails for individual schools and teachers. ((I fully expect that thousands of teachers and faculty have tried to use large language models on student work in ways that violate student privacy. That is a risk inherent in a history of institutions that left considerable de facto decision-making to teachers.))