It's popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely “oracular AI”. Tools like ChatGPT are one manifestation of this: a form of question-and-answer system that can return answers that will soon seem superhuman in breadth of content and flexibility of style. I suspect most educators don't think much about this framework of AI as oracle, but we should, because it both explains a lot about the current hype cycle around large language models and can help us gain critical footing on where to go next.
The generative “AI” hype cycle has been at its peak for the past month or so, and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises); and most of the obvious consequences and outcomes get overlooked.
Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that's the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations on a human-in-the-loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)
A key question for any human-in-the-loop system is that of agency. Who's the architect and who's the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I'm not sure that's the case. And I'm not sure it's always easy to tell the difference.
Inspired by and forked from kettle11's world builder prompt for ChatGPT, this is a bare-bones adaptation meant to show how low the lift can be for creating “personalized AI”. It relies on two fundamental teacher hacks for expanding a conversation: 1. play devil's advocate, and 2. ask for more specifics.
Try it, adapt it, and see what you think. (Full prompt below the break. Just paste it into ChatGPT and go from there.)
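For anyone who'd rather wire those same two hacks into code than paste a prompt into the ChatGPT window, here's a minimal sketch, assuming the OpenAI Python library (pre-1.0 interface) and an API key in the environment. The model name, the prompt wording, and the word cap are illustrative placeholders, not the full prompt that follows below the break.

```python
# Hypothetical sketch: the two "teacher hacks" baked into a system prompt.
# Assumes the `openai` package (pre-1.0 interface) and OPENAI_API_KEY are set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SYSTEM_PROMPT = """You are a discussion partner for a student.
After every student reply:
1. Play devil's advocate: raise one reasonable objection to what they said.
2. Ask for more specifics: request a concrete example or detail.
Keep each turn under 150 words."""

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    user_turn = input("Student: ")
    if not user_turn:
        break
    messages.append({"role": "user", "content": user_turn})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model would do
        messages=messages,
    )
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    print("Tutor:", reply)
```

The point isn't the code; it's that the entire “personalization” lives in a few lines of instruction text that any teacher could rewrite.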
A recent opinion piece in WaPo by journalist Markham Heid tackles the ChatGPT teacher freakout by proposing handwritten essays as a way to blunt the inauthenticity threat posed by our emerging AI super-lords. I've seen the requisite pushback on this piece around accessibility, but I think the bulk of the criticism (at least what I've seen) still misses the most important point. If we treat writing assignments as transactional, then tools like ChatGPT (or the emerging assisted-writing players, whether SudoWrite or Lex, etc.) may seem like an existential threat. Generative AI may well kill off most transactional writing, and not just in education; I suspect boilerplate longform writing will increasingly be a matter of text completion. I have no problem with that. But writing as part of pedagogy doesn't have to be, and probably shouldn't be, solely transactional. It should be dialogic, and as such it should always involve deep engagement with the medium along with the message. ChatGPT just makes urgent what might otherwise have been too easy to ignore.
Recently I was leading a meeting with a group of very young designers who were presenting a low-fi version of an idea for part of our product. It was gamified. It had delightful animations and heavy-lift technological fixes for the problem at hand. It was a version of an app and interactions that one sees over and over: make it competitive, make students award each other likes or fires or hot streaks (or whatever you want to call them), and that will overcome the problem (the perceived problem) of no one actually wanting to do that whole learning thing.
So much edtech marketing tries to sell the idea of “engagement”; I've written before about why I find that word so pernicious. I'm still bothered by the way that selling “engagement” through technology makes it seem like what teachers do is inherently not engaging (e.g. “boring” lectures, plain old non-technologized classrooms). But the more damaging part of buying into the marketer's story, that technology's goal is “engagement”, comes from the way such framing distracts from the more valuable, and undervalued, part of teaching and learning: reflection. I would put it starkly: knowledge and the act of knowing come not from engagement but from reflection percolating and punctuated over time.
We need more forgetful educational technologies. The default mode is always to record and preserve first and deal with data issues after that. Privacy policies are not sufficient. We need intentional forgetting in edtech. Here's why.
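To make “intentional forgetting” concrete, here's a minimal sketch of what a forgetful-by-default store could look like: every record carries an expiry, and stale data is deleted rather than preserved. The names and the 30-day window are illustrative assumptions, not any particular product's API.

```python
# Sketch of "forgetful by default": records expire unless deliberately kept.
import time
from dataclasses import dataclass, field

THIRTY_DAYS = 30 * 24 * 60 * 60  # default retention window, in seconds

@dataclass
class ForgetfulStore:
    ttl_seconds: int = THIRTY_DAYS
    _records: dict = field(default_factory=dict)

    def put(self, key, value):
        # Record the write time so expiry can be enforced on every read.
        self._records[key] = (value, time.time())

    def get(self, key, default=None):
        value, written_at = self._records.get(key, (None, None))
        if written_at is None:
            return default
        if time.time() - written_at > self.ttl_seconds:
            # Forget rather than preserve: expired data is deleted on access.
            del self._records[key]
            return default
        return value

store = ForgetfulStore()
store.put("quiz_attempt:42", {"score": 7, "of": 10})
print(store.get("quiz_attempt:42"))  # present now, gone once the window elapses
```

The design choice is the inversion: retention, not deletion, is the thing that has to be justified and configured.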
I started writing this blog 6+ months ago, when I was headed in a professional direction a bit different from the one I'm in now. Let's say my worldview was a bit more open-source-ish and not particularly commercial or profit-minded. Since then I've moved into greater contact with the business of edtech, so to speak. One useful feature of writing in the current format, and under the current heading of “minimalist” edtech, is that it's given me a chance to think through the tension between my teacher brain, which tends to want to serve students and teachers, and the reality of various edtech business models and trends. I don't mean to imply that edtech companies are bad actors in relation to some sort of pedagogical purity that only teachers possess; it's not that at all. But there is a tension there, a difference in what stakeholders may value or may find compelling.
More specifically, if asked, “What's the value prop for X edtech product or Y technology?”, how far apart would teacher brain and business brain be?