Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that's the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations on a human-in-the-loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)
A key question for any human-in-the-loop system is that of agency. Who's the architect and who's the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I'm not sure that's the case. And I'm not sure it's always easy to tell the difference.
Inspired by and forked from kettle11's world builder prompt for ChatGPT, this is a bare-bones adaptation meant to show how low the lift can be for creating “personalized AI”. It relies on two fundamental teacher hacks for expanding a conversation: (1) playing devil's advocate and (2) asking for more specifics.
Try it, adapt it, and see what you think. (Full prompt below the break. Just paste it into ChatGPT and go from there.)
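If you'd rather wire the same mechanic up programmatically than paste into the ChatGPT window, here is a minimal sketch of what that looks like. To be clear about what's assumed: this is not the prompt from this post (that's below the break); the system prompt wording and the model name are placeholders of my own, and the only real thing here is the OpenAI Python SDK call.

```python
# A minimal sketch, not the prompt from this post: wiring the two
# "teacher hacks" (devil's advocacy, asking for specifics) into a
# system prompt via the OpenAI Python SDK.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder system prompt illustrating the two conversation-expanding moves.
SYSTEM_PROMPT = (
    "You are a discussion partner for a student. In every reply, do two "
    "things: (1) play devil's advocate against whatever the student just "
    "asserted, and (2) ask the student for more specifics before moving on."
)

def reply(student_message: str) -> str:
    """Send one student turn and return the model's response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(reply("I think engagement is the main goal of edtech."))
```

Pasting a system prompt like that straight into a fresh ChatGPT conversation does the same work with no code at all, which is rather the point about how low the lift is.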
Recently I was leading a meeting with a group of very young designers who were presenting a low-fi version of an idea for part of our product. It was gamified. It had delightful animations and heavy-lift technological fixes for the problem at hand. It was the kind of app, with the kind of interactions, that one sees over and over. Make it competitive, make students award each other likes or fires or hot streaks (or whatever you want to call them), and that will overcome the (perceived) problem of no one actually wanting to do that whole learning thing.
So much edtech marketing tries to sell the idea of “engagement”; I've written before about why I find that word so pernicious. I'm still bothered by the way that selling “engagement” through technology makes it seem like what teachers do is inherently not engaging (e.g. the “boring” lecture, the plain old non-technologized classroom). But the more damaging part of buying into the marketer's story, that technology's goal is “engagement”, is the way such framing distracts from the more valuable, and undervalued, part of teaching and learning: reflection. I would put it starkly: knowledge and the act of knowing come not from engagement but from reflection, percolating and punctuated over time.
A good friend of mine admitted that he was a pretty piss-poor teacher on Zoom. He is, in the classroom, an excellent teacher, in no small part due to a charismatic persona that slides from serious to amused and from hard to soft with ease. It would be easy to imagine that he's just being tough on himself, but I think he's actually kind of right. He's not great on Zoom. Something about his instincts and habits doesn't translate quite right, and his inability to sense the physical cues of students distracts and frustrates him.
There is some sort of mismatch there, some difficulty in translating a teaching persona through the screen.