All the chatter has been about AI for some time now, but I'm thinking a lot about first principles, in part by way of AI.
AI tools have proliferated over the past year, and the advice trends toward embracing the maximalism of it all. It is an age of new tools and new features being released every week. This puts us into a reactive mode, feeling like we need to keep up with the pace of everything hitting the airwaves. What feature did OpenAI release this week? What new tool is everyone talking about?
All of this pushes us into a tool-searching mode, asking about features and capabilities and making it easy to lose sight of goals and process.
Meaning: what's missing from the noise around generative AI
I hate hype cycles in tech. They are just noise and distraction, full of over-promise and under-delivery. Conventional wisdom coalesces around the poles of evangelism or sober warning, and a whole lot of middle ground goes untrodden. Sometimes the waning of the hype cycle makes space for more pragmatic talk, but not always. And in the case of generative AI, it seems that the major players and an enabling VC ecosystem have no interest in letting the hype cycle ebb. They would rather leave us all in a state of existential angst while they disrupt their way toward the inevitable rise of their version of AI, tuned to funneling profits down their ever-capacious maw. (It is remarkable to me how quickly OpenAI has turned its founding mission to build AI “alignment” into a cloak of plausible deniability around making ClosedAI for financial world domination, but that is a topic for another time...)
It's popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely “oracular AI”. Tools like ChatGPT are one manifestation of this: a form of question-and-answer system that can return answers that will soon seem superhuman in breadth of content and flexibility of style. I suspect most educators don't think much about this framework of AI as oracle, but we should, because it explains a lot about the current hype cycle around large language models and can help us gain critical footing on where to go next.
The generative “AI” hype cycle has been at its peak for the past month or so, and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because they always do); and most of the obvious consequences and outcomes get overlooked.
Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that's the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations on a human-in-the-loop system. (ChatGPT offers a particularly frictionless experience for interacting with the underlying language model.)
A key question for any human-in-the-loop system is that of agency. Who's the architect and who's the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I'm not sure that's the case. And I'm not sure it's always easy to tell the difference.
Inspired by and forked from kettle11's world builder prompt for ChatGPT, this is a bare-bones adaptation to show how low the lift can be for creating “personalized AI”. It relies on two fundamental teacher hacks for expanding conversation: 1. playing devil's advocate, and 2. asking for more specifics.
Try it, adapt it, and see what you think. (Full prompt below the break. Just paste it into ChatGPT and go from there.)
This BBC piece about the origins of the de-cluttered household caught my eye: https://www.bbc.com/culture/article/20230103-the-historical-origins-of-the-de-cluttered-home
It's a swift and effective overview of architectural minimalism and the cyclical waxing and waning of fashion for de-cluttered interiors. The pendulum has swung toward maximalism and eclecticism for a while now, and perhaps there are hints that it is starting to swing back. I suspect the article presents too linear a summary, as there always seem to be holdouts that linger until suddenly becoming “in” again when the pendulum swings back. But this piece got me thinking about how much minimalism is cyclical in areas outside its home base of architecture and design.
A recent opinion piece in WaPo by journalist Markham Heid tackles the ChatGPT teacher freakout by proposing handwritten essays as a way to blunt the inauthenticity threat posed by our emerging AI super-lords. I've seen the requisite pushback on this piece around accessibility, but I think the bulk of the criticism (at least what I've seen) still misses the most important point. If we treat writing assignments as transactional, then tools like ChatGPT (or the emerging assisted-writing players, whether SudoWrite or Lex, etc.) may seem like an existential threat. Generative AI may well kill off most transactional writing, and not just in education; I suspect boilerplate longform writing will increasingly be a matter of text completion. I have no problem with that. But writing as part of pedagogy doesn't have to be, and probably shouldn't be, solely transactional. It should be dialogic, and as such, should always involve deep engagement with the medium along with the message. ChatGPT just makes urgent what might otherwise have been too easy to ignore.
My new year's resolution: more writing. Because otherwise the bots win. Or, rather, otherwise the bots won't have enough fodder to generate ways for students to cheat? Not sure, but I think I need to practice writing like a human.
Apparently there's been a lot happening on the AI[^1] front that kind of got people talking these past few months. In predictable fashion, some teachers are stoked, others are freaked out, and most aren't quite sure what to do about OpenAI's big reveal that a massive language model can be coaxed to write a passably decent essay with little effort or significant know-how.
Recently I was leading a meeting with a group of very young designers presenting a low-fi version of an idea for part of our product. It was gamified. It had delightful animations and heavy-lift technological fixes for the problem at hand. It was the kind of app, with the kind of interactions, that one sees over and over: make it competitive, make students award each other likes or fires or hot streaks (or whatever you want to call them), and that will overcome the (perceived) problem of no one actually wanting to do that whole learning thing.