Meaning: what's missing from the noise around generative AI
I hate hype cycles in tech. They are noise and distraction, full of over-promise and under-delivery. Conventional wisdom coalesces around the poles of evangelism or sober warning, and a whole lot of middle ground goes untrodden. Sometimes the waning of the hype cycle makes space for more pragmatic talk, but not always. And in the case of generative AI, it seems that the major players and an enabling VC ecosystem have no interest in letting the hype cycle ebb. They would rather leave us all in a state of existential angst while they disrupt their way toward the inevitable rise of their version of AI, tuned to funneling profits down their ever-capacious maw. (It is remarkable to me how quickly OpenAI has turned its founding mission to build AI “alignment” into a cloak of plausible deniability around making ClosedAI for financial world domination, but that is a topic for another time...)
Most hype cycles (looking at you, web3) are all noise and little substance. There's more substance here, as LLMs and generative tools are genuinely powerful technologies that demonstrate interesting results and open up many avenues for further exploration. They are also inherently problematic, from first principles to the economics of production and modes of consumption, as Bender et al. demonstrate repeatedly (see, e.g., https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html or, today, Stochastic Parrots Day). My fear is always that even though the criticisms are justified and completely correct, it won't matter, because the pushers of this technology have already flooded the zone. The tech is good enough in enough cases to get itself slathered onto any product where text or image lives right now, a sloppy veneer of some capability to do... something, even if that something is just to make the current GPT-x collide with my-own-stuff™ in a way that feels different and new and exciting. This is, after all, how hype cycles and tech evangelism work. The new thing is a quick graft onto the existing thing, but then the chatter frames it as something transformational. It will change education forever! Personalized tutors! Personalized quizzes! Train teachers with fake generated students! And then we'll have democratized education! And it will be good and everyone will learn in ways they haven't before, because we all know the problem. The problem is that teaching is inefficient. Optimize! AIs will imitate and then replace, and the children will smile and be brilliant. And the gods will rest...
(Side note from Stochastic Parrots Day in semi-real time: Mark Riedl just made a similar point about how the technology of LLMs is inherently backward-looking but the marketing and news framing around them now is forward-looking, characterizing everything in terms of intent or human-like mental capacities.)
Hype cycles are wish cycles. In tech they exploit the distance between the technical promise and the demonstration of current capability. The current product only needs to show some seed of something amazing, and then the hype vision drags our gaze along lines which we humans color in with vivid imaginings of possibility around perceived problems. Down the road, if any portion of that imagining comes true, it is deemed a success, and thus doubters or dissenters are discredited. After all, these things are coming, we are told; it's only a matter of time. That is almost certainly true. The problem is just that little detail of what the “thing” is and when that thing shows up. Neither of those details is clear, as the eventual form and consequences of the tech will likely be very different from the initial promise, and the delivery date will be both sooner (for some components) and later or never (for others).
I find myself most frustrated with the current hype cycle because, more than any other technological hype cycle I've seen in decades, it seems based on nothing. Not nothing in terms of tech, but a basic assumption of meaninglessness. Nihilism lurks sleepily underneath every breathless account of the long-term promise and every pooh-poohing of dissent.
Let's start with the obvious good. Language models are fantastic at things like coding or generating boilerplate prose. Using GPT-4 to help get new code for largely solved problems is a game changer. But that makes sense, since so much work in that area already involves applying existing patterns and best practices to slightly new contexts. Code is far more boilerplate than people who don't regularly work with software development might realize. It is also exactly the kind of activity where having humans do it was never a particularly sensible arrangement. We don't think in code, so it has always been a slightly strange thing that the ability to think like a machine should be an asset in a field of human work. Similarly, whether it's so-called “bullshit” knowledge work or tasks that involve lots of thankless boilerplate, it's not surprising that GPT-4 is great at that sort of thing. It can be good enough for a huge range of tasks where speeding up an output-driven part matters. Even for writing, I think it's important to distinguish why people do certain kinds of writing. Just as I see coding as a task that is interesting but which I'm happy to speed up toward an end goal, many, many people deal with the written word or images or video or sound as a work product, their engagement entirely telic, entirely transactional. That doesn't mean it can't be or isn't enjoyable to work with, or that there isn't value in learning those things by struggling through problems that a quick answer from an LLM bypasses. But there's no inherent value in that apart from our perception of value.
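To make that concrete, here is roughly what offloading that kind of largely solved problem looks like in practice. This is a minimal sketch, assuming the OpenAI Python client (openai >= 1.0); the model name, prompt, and task are illustrative only, not a recommendation of any particular workflow.

```python
# A minimal sketch of the boilerplate-offloading workflow described above.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
# set in the environment; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You write small, idiomatic Python utilities."},
        {"role": "user", "content": (
            "Write a function that loads a CSV file and returns its rows "
            "as a list of dictionaries, with basic error handling."
        )},
    ],
)

# The output still needs human review; the point is that the pattern-shaped,
# already-solved part of the task gets sped up, not that judgment goes away.
print(response.choices[0].message.content)
```

The generated function is the sort of thing a working developer has written dozens of times; the model is stitching together well-worn patterns, which is exactly why it is good at it.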
The fact that so-called generative “AI” has pros and cons in its application should be no surprise. Every technology follows that pattern, neither good nor bad nor even neutral, but rather an amplifier of goods and bads at every level, from how it is put together to how it is used and, as technologies shift from active to legacy, how they are remembered and reused for different ends. Popular discussions of technological change often highlight the fact that all technological shifts have faced initial resistance and fear that later gave way to acceptance. That's a neat narrative which happens to be wrong. In every case the use case for the technology became clearer and more focused, and what actually happened is that the value and meaning of existing technologies got renegotiated in light of the emerging one. I expect the same thing to happen here: not the neat replacement narrative that Silicon Valley disruptors favor, where they must make the market pitch to investors that their cool tool will profitably supplant some existing inefficiency or inferior thing, but rather the messier negotiation of meaning.
Meaning is something we need to discuss more. It seems like the thing we should be discussing anyway. What matters? What doesn't? Why do we value this or that? Political posturing has taken the place of too much actual discussion of meaning. Surface hot takes, influencer culture: all crap fast food in place of thought. Hype cycles are built on that thin layer of discourse, where surface and speed are all that matter. It's not that there aren't voices making clear the pitfalls along with the promise of the technology. It's that the rest of us don't have a second to think about what any of it means. And things aren't usually framed for us in terms of meaning, in terms where we have an active stake. We're told that AI will revolutionize this or that area, that everything has to change, that we need to rethink. But that's framed as technological reactionism, particularly for educators. Redo your assignments, rework your expectations and guidelines for students, integrate some critical thought about AI into your class on x, y, or z, however tangential to that area.
Hype cycles like to rob us of agency. Change is inevitable, inevitably such change is progress, and progress risks leaving behind those who don't progress. But that's the whole point, right? That's the thing no large language model can do now or anytime soon. Agency is a tougher nut to crack, no matter what simulacra of agentive behavior emerge from LLMs. (Footnote for the curious: I don't think that the Waluigi effect or other phenomena that point at agentive behavior are anything more than imitations of the traces of agentive behavior latent in human textual communication. Far from being indicators of AGI, these are indicators of the distorted myopia of models trained on data that is a poor shadow of both human thought and human action.)
Hype cycles similarly remove our capacity to find meaning. Speed and urgency and suddenness leave us scrambling, wondering what to do about all this. It is not just that some knowledge workers may feel despair at the thought that what they are good at will no longer be valued. It has been coming for a long time, but the idea that lawyers and doctors may have core functions (e.g. document generation, identifying tumors from scans) that can be better done by machines is likely unsettling at first. Yet it makes sense when we think about the range of tasks that humans in these fields are asked to learn through long training. Connoisseur-like tasks around data are prime targets for deep learning and narrow AI. The question shouldn't be about what is disrupted or lost but rather, first, what meaning was there in such tasks to start with? Why was document-making or specialized legal language important as a skill set? Is there anything meaningfully human about that specific task? I would submit that the knowledge skill itself is meaningful only within a context where groups of people decided that it was relatively hard for most people to do, that it required significant investment of societal capital, and consequently that we got used to the idea of economic and class security built on that marker.
One of the best books knowledge workers might read right now will seem out of left field. Mary Carruthers' The Book of Memory: A Study of Memory in Medieval Culture is a helpful dose of perspective. It illustrates something that is easy to forget now, namely that intelligence among medieval smarties was not measured as we might measure it today. Good memory, which in recent history is mostly a side skill that can be supplemented by no end of paper and computer and external aids, was tightly bound to one's scholarly and intellectual self. We are on the other side of that cultural world, but medieval memory is a timely reminder of shifting values around cognition and knowledge. Advances in AI will likely trigger shifts in intellectual value akin to the gap between the meaning we make of knowledge work today and the kinds of knowledge work that were prized in the Middle Ages. It's a huge shift, but also not something that fundamentally breaks everything so much as realigns values around activities.
We get to decide what technology means in the context of what we do. As a (sometimes) developer, I find much of the boilerplate part of coding tedious and without huge gains. An LLM is a huge asset, as it allows me to spend time on solving the unsolved problems, which are a more meaningful use of my time because they are more interesting for what I find important. This bit of writing now is 100% human generated because I value the process of thinking as mediated through the transmutation of thought into written word. I don't need or want or value a machine that might write for me, even if it produces a product with something like my own voice. If the product were the only point, then my perspective might be different. But it is my choice in any case. And I value the silence and noise of the dialogue in my own thought and the rolling out of that thought into the empty space of the page.
A focus on meaning clarifies one thing that has bothered me over the past few months. It's not just the hype cycle itself but the feeling of helplessness that a hype cycle often leaves in its wake. Especially when it comes to criticism and questioning, we're offered two poles: resist or submit. But that's a false choice. We get to choose what meaning we find in the things we do, whether as educators or simply as humans. And we get to choose whether to find something meaningful in any new technology and in our use, non-use, partial use, experimentation, full adoption, or any other interaction with it.
As in so many things, when we focus on the technology we lose our way. We need to remind ourselves constantly that we are the agents, and we need to exercise that agency with awareness and intention. In classrooms, in reading, in writing, in taking a walk, in contemplating the future. The chatbots will still be faking it for the foreseeable future. But we are the meaning-makers, no matter how urgent the unveiling of technologies or the buzz of hype.