Minimalist EdTech

ChatGPT

It's popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely “oracular AI”. Tools like ChatGPT are one manifestation of this: a form of question-and-answer system that can return answers that will soon seem superhuman in breadth of content and flexibility of style. I suspect most educators don't think much about this framework of AI as oracle, but we should, because it explains a lot about the current hype cycle around large language models and can help us gain critical footing on where to go next.

Read more...

The garbage pile of generative "AI"

The generative “AI” hype cycle has been at its peak for the past month or so, and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises); and most of the obvious consequences and outcomes get overlooked.

Read more...

human in the loop, made with DALL-E

Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that's the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations on a human in the loop system. (ChatGPT offers a particularly frictionless experience for interacting with the underlying language model.)

A key question for any human in the loop system is that of agency. Who's the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I'm not sure that's the case. And I'm not sure it's always easy to tell the difference.

Read more...

Inspired by and forked from kettle11's world builder prompt for ChatGPT, this is a bare-bones adaptation to show how low the lift can be for creating “personalized AI”. It relies on two fundamental teacher hacks for expanding a conversation: (1) playing devil's advocate and (2) asking for more specifics.

Try it, adapt it, and see what you think. (Full prompt below the break. Just paste it into ChatGPT and go from there.)

Some notes at the bottom.

Read more...

A recent opinion piece in WaPo by journalist Markham Heid tackles the ChatGPT teacher freakout by proposing handwritten essays as a way to blunt the inauthenticity threat posed by our emerging AI super-lords. I've seen the requisite pushback on this piece around accessibility, but I think the bulk of the criticism (at least what I've seen) still misses the most important point. If we treat writing assignments as transactional, then tools like ChatGPT (or the emerging assisted-writing players, whether SudoWrite or Lex, etc.) may seem like an existential threat. Generative AI may well kill off most transactional writing, and not just in education; I suspect boilerplate longform writing will increasingly be a matter of text completion. I have no problem with that. But writing as part of pedagogy doesn't have to be, and probably shouldn't be, solely transactional. It should be dialogic, and as such it should always involve deep engagement with the medium along with the message. ChatGPT just makes urgent what might otherwise have been too easy to ignore.

Read more...