Humans in the Loop and Agency
Any new technology or tool, no matter how shiny and new, can help students experiment with how technology mediates thought. I suspect that's the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations on a human-in-the-loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)
A key question for any human-in-the-loop system is that of agency. Who is the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I'm not sure that's the case. And I'm not sure it's always easy to tell the difference.
One obvious reason this is not the case with ChatGPT specifically is that OpenAI's interest in making ChatGPT available is very different from how the public perceives and adopts it. To the public, it's a viral event, a display of the promise and/or peril of recent NLP revolutions. But OpenAI is fairly clear in their fine print that they are making this publicly available in order to refine the model, test for vulnerabilities, gather validated training data, and, I would imagine, get a sense for potential markets. It is no different from any other big tech service insofar as the human in the loop is a commodity more than an agent. We have perhaps grown complacent about this relationship to our technology, in which our ruts of use and trails of data provide value back to the companies making those tools, but it is particularly important when thinking through educational value. ChatGPT is a slick implementation of developing language models, and everything people pump into it is crowdsourced panning for gold delivered into the waiting data vaults of OpenAI. (For a harsher critique of the Effective Altruism ideology that may be part of OpenAI's corporate DNA, see https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/)
Set that all aside for a moment. If we take the core human-in-the-loop interaction of prompting the language model and receiving a probabilistic path through the high-dimensional mix of weights, a path which looks to human eyes like coherent sentences and ideas, where exactly is the agency? We supply a beginning, from which subsequent probabilities can be calculated. Though that feels like control, I wonder how long before the machine dictates our behavior instead. As with email or texting or phones, how long before we have changed our way of thinking in order to think in terms of prompts?
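To make that loop concrete, here is a toy sketch of the basic autoregressive move, with an invented five-word vocabulary and made-up probabilities standing in for a real model's billions of weights. The prompt seeds a context; the machine then repeatedly samples a next word from probabilities conditioned on whatever came before.

```python
import random

# Toy stand-in for a language model: given the words so far, return a
# probability for each candidate next word. A real model derives these
# from billions of learned weights; the numbers here are invented.
def next_word_probs(context):
    vocab = ["learning", "requires", "agency", "prompts", "."]
    seed = hash(context[-1]) % 5  # pretend the last word shifts the distribution
    weights = [(i + seed) % 5 + 1 for i in range(len(vocab))]
    total = sum(weights)
    return {word: w / total for word, w in zip(vocab, weights)}

# We supply a beginning (the prompt), and the loop does the rest.
context = ["education", "in", "the", "age", "of"]
for _ in range(10):
    probs = next_word_probs(context)
    next_word = random.choices(list(probs), weights=list(probs.values()))[0]
    context.append(next_word)

print(" ".join(context))
```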
For example, one particularly effective method with ChatGPT in its current incarnation is to start by giving it a scenario or role (e.g. “You are a psychologist and the following is a conversation with a patient”) and then feed in a fair amount of content followed by a question, as in the sketch below. (I gave a more elaborate example of this scenario setting earlier here.) That context setting allows the model to home in on more appropriate paths, matching style and content more closely to our human expectations. I expect that working with these tools over time will nudge people into patterns of expression that remain natural language but are subtly stylized in ways that work most effectively for querying machines. As was the case for search, the habit of looking things up reinforces particular ways of thinking through keywords (of assuming that everything is keywordable) and particular ways of asking questions.
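For the curious, here is roughly what that scenario setting looks like outside the chat window, sketched with OpenAI's Python client. The model name and the transcript are placeholders of my own; the point is only the shape of the exchange: a role, a pile of context, then a question.

```python
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[
        # The scenario or role, which narrows the paths the model considers.
        {"role": "system",
         "content": "You are a psychologist and the following is a conversation with a patient."},
        # A fair amount of content, followed by the actual question.
        {"role": "user",
         "content": "Patient: ...transcript pasted here...\n\n"
                    "What themes in this conversation deserve follow-up?"},
    ],
)

print(response.choices[0].message.content)
```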
Most of the conversation around generative tools has been about control more than agency. As a set of tools whose functioning is, to a certain extent, still unauditable and whose creation relies on datasets so massive as to overwhelm most people's data literacy, generative AI is a black box for the majority of users. So teachers worry: How do we control this? Who controls this? How do we know what is happening? That is perhaps no different from most high-tech devices or software. For education, however, the stakes are different.
Learning requires that students gain a sense of agency in the world. Effective learning builds on growing agency, the ability to exercise one's will and see the results. That is, in one sense, the journey of education: gradually gaining some purchase on ideas, language, concepts, tools, and one's environment. That growth requires a clear sense of who is in control and works best amidst intellectual and emotional security, but there's more to it. We often talk about that as the freedom to fail (and learn from those failures). Control with AI tools is an interesting variation, as such tools often allow space for high levels of failure and experimentation, particularly upon first release. ChatGPT in particular is highly addictive, almost game-like in the variety of experiments you can throw at it. But with whom does the agency lie? Is feeding the machine actual agency?
Hence my concern. Human-in-the-loop systems can provide a false sense of agency. Most prominently, perhaps, systems like Mechanical Turk are production-level human-in-the-loop systems that reduce interaction to the hand motions of agency without the substantive choice or will behind them. But those particular kinds of tools aren't meant for human learning. They are purely transactional, labor for pay. AI-driven education, on the other hand, labeled with seemingly human-centric terms like “personalized learning”, will be human-in-the-loop systems. The pressing question is not whether these systems actually deliver personalized learning; the more important question is how human agency is rewarded and incorporated. Will students be cogs or creators? And will it be obvious to students where they stand in the loop?