<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Minimalist EdTech</title>
    <link>https://minimalistedtech.org/</link>
    <description>Less is more in technology and in education</description>
    <pubDate>Tue, 07 Apr 2026 15:35:14 +0000</pubDate>
    <image>
      <url>https://i.snap.as/qrAhYX2v.jpg</url>
      <title>Minimalist EdTech</title>
      <link>https://minimalistedtech.org/</link>
    </image>
    <item>
      <title>Minimalism is harder than Maximalism</title>
      <link>https://minimalistedtech.org/minimalism-is-harder-than-maximalism?pk_campaign=rss-feed</link>
      <description>All the chatter has been around AI for some time now, but I&#39;m thinking a lot about first principles, in part by way of AI.&#xA;&#xA;AI tools have proliferated over the past year and the advice trends towards embracing the maximalism of it all. It is an age of new tools and new features being released every week. This puts us into a reactive mode, feeling like we need to keep up with the pace of everything hitting the airwaves. What feature did OpenAI release this week? What new tool is everyone talking about?&#xA;&#xA;All this puts us into a tool-searching mode, asking about features and capabilities and making it easy to lose sight of goals and process.</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/DlS1iDTl.png" alt=""/></p>

<p>All the chatter has been around AI for some time now, but I&#39;m thinking a lot about first principles, in part by way of AI.</p>

<p>AI tools have proliferated over the past year and the advice trends towards embracing the maximalism of it all. It is an age of new tools and new features being released every week. This puts us into a reactive mode, feeling like we need to keep up with the pace of everything hitting the airwaves. What feature did OpenAI release this week? What new tool is everyone talking about?</p>

<p>All this puts us into a tool-searching mode, asking about features and capabilities and making it easy to lose sight of goals and process.</p>

<p>A minimalist edtech isn&#39;t a prescription or mandate; it&#39;s a framework for questioning the status quo.</p>

<p>Every choice in technology requires tradeoffs (that&#39;s software engineering 101) and I think we get lost in the specifics when new technologies suddenly leap into awareness. It&#39;s happened recently (and continues to percolate) with AR, VR and the thing formerly known as the “metaverse”; it&#39;s been frothing now over ChatGPT, DALL-E, now Sora and the various non-OpenAI versions of generative “AI”. As everyone sees what&#39;s really involved, how these things work, we&#39;ll get some sort of equilibrium. Some uses will pass into transparent everyday familiarity (for better or worse) and others that seemed inevitable will fall by the wayside. Still other variants and adaptations will arise that have not yet been predicted clearly. (For that last category, keep an eye on the fact that we&#39;re moving from experimentation time to production time, where the rubber meets the road. The infrastructure around generative AI development still has a long way to go, but all the major tech players have skin in the game, and open-source versions of these things are iterating rapidly.)</p>

<p>Approaching educational technologies (or technology in general) in terms of minimalism and the network of ideas around that (including sustainability, class, labor, etc.) demands uncomfortable questions of existing practice. Not “is this minimalist” but rather “why do we need this?”, “what does this get us?”, “how does this align with what we value?”</p>

<p>In general, I don&#39;t think the answer is always that we value less in the sense of fewer capabilities or something stripped down. Minimalism in itself can be incredibly difficult to achieve and require large expenditures of time and energy. It&#39;s not any different in education. To get something that looks like a direct line to some set of values could in fact require massive amounts of preparation or even a fairly large dose of technological augmentation. It&#39;s a shifting target too, insofar as what reads to others as minimalist can shift with time, situation, and context. To take a favorite example of mine for the importance of context, a top-of-the-line typewriter from 1969 was decidedly not minimalist at the time. It was full of features and suited to particular kinds of office and writing work. But nowadays that same machine is a Luddite fantasy when compared to a MacBook or iPad, devices that are minimalist in relation to other knobby and overgrown computers of a certain type. That mid-century typewriter is emphatically maximal in its mechanics, but those mechanics overall have become a retro-minimalism marker, where not being connected to the mass of material on the internet or the constant interruption of push notifications on an electronified device is more significant than the internal precision of springs and levers.</p>

<p>Minimalism is one in a constellation of ideas. We might measure technology against its simplicity, its ease of use, its transparency, how straightforward the connection between mechanics and output is, its clean footprint in the world at large. These variants are neither mutually exclusive nor always in sync. They can be counter-indicating. For example, a minimalist interface often requires significantly more labor to produce than something clunky, full of data fields, and taking little time to design. Straightforward UX in particular is an achieved state requiring a lot of work. This is not different from certain minimalist or stark aesthetics in art, architecture and design. Clean lines can require significant shaping or engineering to hold in place. The marvel of the temples of the ancient world, for example, was not only that they were large but also that, against the rough edges of the natural landscape, they had straight lines. The pyramids of Giza were perfection in geometry fit for the divine. I think about Art Spiegelman&#39;s <em>Maus</em>, where Spiegelman honed lines and worked with magnified images in order to achieve what in the end would look more spare than the detailed illustration he could otherwise produce (see <em>MetaMaus</em> for more). That technique of wearing down is one that I think of quite a bit. It&#39;s similar in many ways to iterations as a teacher, where each run through can allow for a refinement, such that the end result might look effortless or spontaneous, but only because the current group of students hasn&#39;t seen the previous ten years&#39; worth of trial and error and careful editing and experimentation.</p>

<p><strong>I see a minimalist edtech in similar terms, as a form of long-term whittling, sculpting, crafting.</strong> Education is craft, not merely transactional, and a focus on minimalism helps us with that process as well. It is not merely whether we should or shouldn&#39;t use a particular tool or technology, but rather that we are committed to honing it finely. That may mean rejecting some things. That usually means using things in ways they weren&#39;t necessarily intended, or working with parts of things.</p>

<p>In an age of AI-generative work, as it makes its way into everyday practice, the question isn&#39;t whether or not to adopt a tool. The question is how we practice the craft of whittling down practice to the most effective moves and gestures.</p>

<p>That human work of iterating and reducing, of simplifying through learning, may not be something that can be sped up by technology, though there are plenty of products that are promising such individualization and efficiencies. <strong>The question is not then whether a particular AI tool can do x or y in a classroom or for students; it&#39;s whether and how we shape the maximalism of the present into considered and intentional practices of the near future.</strong> That&#39;s a harder task, a longer job, and urgent work.</p>

<p>Experiment away, but let&#39;s keep a focus on worthwhile targets. Impact with less work. Better outcomes, not more tools.</p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/minimalism-is-harder-than-maximalism</guid>
      <pubDate>Thu, 22 Feb 2024 17:51:56 +0000</pubDate>
    </item>
    <item>
      <title>Meaning: what&#39;s missing from the noise around generative AI</title>
      <link>https://minimalistedtech.org/meaning-whats-missing-from-the-noise-around-generative-ai?pk_campaign=rss-feed</link>
      <description>I hate hype cycles in tech. They are just noise and distraction full of over-promise and under-delivery. Conventional wisdom coalesces around poles of evangelism or sober warning and a whole lot of middle ground goes untrodden. Sometimes the waning of the hype cycle makes space for more pragmatic talk, but not always. And in the case of generative AI, it seems that the major players and an enabling VC ecosystem have no interest in letting the hype cycle ebb. They would rather leave us all in a state of existential angst while they disrupt towards the inevitable rise of their version of AI tuned to funneling profits down their ever-capacious maw. (It is remarkable to me how quickly OpenAI has turned its founding mission to build AI &#34;alignment&#34; into a cloak of plausible deniability around making ClosedAI for financial world domination, but that is a topic for another time...)</description>
      <content:encoded><![CDATA[<h1 id="meaning-what-s-missing-from-the-noise-around-generative-ai">Meaning: what&#39;s missing from the noise around generative AI</h1>

<p>I hate hype cycles in tech. They are just noise and distraction full of over-promise and under-delivery. Conventional wisdom coalesces around poles of evangelism or sober warning and a whole lot of middle ground goes untrodden. Sometimes the waning of the hype cycle makes space for more pragmatic talk, but not always. And in the case of generative AI, it seems that the major players and an enabling VC ecosystem have no interest in letting the hype cycle ebb. They would rather leave us all in a state of existential angst while they disrupt towards the inevitable rise of their version of AI tuned to funneling profits down their ever-capacious maw. (It is remarkable to me how quickly OpenAI has turned its founding mission to build AI “alignment” into a cloak of plausible deniability around making ClosedAI for financial world domination, but that is a topic for another time...)</p>

<p>Most hype cycles (looking at you, web3) are all noise and little substance. There&#39;s more substance here, as LLMs and generative tools are genuinely powerful technologies that demonstrate interesting results and open up many avenues for further exploration. They are also inherently problematic, from first principles to economics of production and modes of consumption, as Bender et al. demonstrate repeatedly (see, e.g., <a href="https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html">https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html</a> or, today, Stochastic Parrots Day). My fear is always that even though the criticisms are justified and completely correct, it won&#39;t matter because the pushers of this technology have already flooded the zone. The tech is good enough in enough cases to get itself slathered onto any product where text or image lives right now, a sloppy veneer of some capability to do ... something, even if that something is just to make currentGPT-x collide with my-own-stuff™ in a way that feels different and new and exciting. This is after all how hype cycles and tech evangelism work. The existing thing is a quick graft; but then the chatter frames it as something transformational. It will change education forever! Personalized tutors! Personalized quizzes! Train teachers with fake generated students! And then we&#39;ll have democratized education! And it will be good and everyone will learn in ways they haven&#39;t before because we all know the problem. The problem is teaching is inefficient. Optimize! AIs will imitate and then replace and the children will smile and be brilliant. And the gods will rest...</p>

<p>(Side note from Stochastic Parrots Day in semi-real time: Mark Riedl just made a similar point about how the technology of LLMs is inherently backward-looking but the marketing and news framing around them now is forward-looking, characterizing everything in terms of intent or human-like mental capacities.)</p>

<p>Hype cycles are wish cycles. In tech they exploit that distance between the technical promise and the demonstration of current capability. The current product only needs to do some seed of something amazing and then the hype vision drags our gaze along lines which we humans then color in with vivid imagining of possibility around perceived problems. Down the road, if any portion of that imagining comes true, it is deemed a success and thus doubters or dissenters are discredited. After all, these things are coming, we are told, it&#39;s only a matter of time. That is almost certainly true. The problem is just that little detail of what the “thing” is and when that thing shows up. Neither of those details is clear, as the eventual form and consequences of tech will likely be very different from the initial promise, and the delivery date will be both sooner (for some components) and later or never (for other components).</p>

<p>I find myself frustrated with the current hype cycle most because, more than any other technological hype cycle I&#39;ve seen in decades, it seems based on nothing. Not nothing in terms of tech, but <strong>a basic assumption of meaninglessness</strong>. Nihilism lurks sleepily underneath every breathless account of the long-term promise, and at every pooh-poohing of dissent.</p>

<p>Let&#39;s start with the obvious good. Language models are fantastic at things like coding or generating boilerplate prose. Using GPT-4 to help get new code for largely solved problems is a game changer. But that makes sense, since so much work in that area already involves applying existing patterns and best practices to slightly new contexts. Code is far more boilerplate than people who don&#39;t regularly work with software development might realize. It is also exactly the kind of activity where humans doing it is not necessarily a sensible arrangement. We don&#39;t think in code, so it&#39;s always been a slightly strange thing that the ability to think like a machine should be an asset in a field of human work. Similarly, whether it&#39;s so-called “bullshit” knowledge work or tasks that involve lots of thankless boilerplate work, it&#39;s not surprising that GPT-4 is great at that sort of thing. It can be good enough for a huge range of tasks where speeding up an output-driven part matters. Even for writing, I think it&#39;s important to distinguish why people do certain kinds of writing. Just as I see coding as a task that is interesting but which I&#39;m happy to speed up towards an end goal, many, many people deal with the written word or images or video or sound as a work product which is entirely telic, entirely transactional in their engagement. That doesn&#39;t mean it can&#39;t be or isn&#39;t enjoyable to work with, or that there isn&#39;t value in learning those things by struggling through what a quick answer from an LLM bypasses. But there&#39;s no inherent value in that apart from our perception of value.</p>
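
<p>To make the boilerplate point concrete, here&#39;s a toy sketch of the kind of largely solved code an LLM can produce on request: a small record type with JSON round-tripping. The names are invented for the example and nothing here is specific to any one model or product; the point is the shape of the task.</p>

<pre><code># A toy example of "largely solved" boilerplate: a record type with
# JSON round-tripping. Thousands of near-identical versions of this
# pattern exist in public code, which is why generated versions of it
# tend to be good enough. (Names are invented for illustration.)
import json
from dataclasses import dataclass, asdict

@dataclass
class Assignment:
    title: str
    points: int
    published: bool = False

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "Assignment":
        return cls(**json.loads(raw))

# Round-trip check: parse what we serialized and compare.
original = Assignment(title="Week 3 reflection", points=10)
assert Assignment.from_json(original.to_json()) == original
</code></pre>

<p>Work like this is pattern application, not invention, which is exactly why it is the part of coding worth speeding up.</p>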

<p>The fact that so-called generative “AI” has pros and cons in its application should be no surprise. Every technology follows that pattern, neither good nor bad nor neutral even, but rather an amplifier of goods and bads at every level from how it is put together to how it is used and, as technologies shift from active to legacy, how those technologies are remembered and reused for different ends. Popular discussions of technology change often highlight the fact that all technological shifts have faced initial resistance and fear that later gave way to acceptance. That&#39;s a neat narrative which happens to be wrong. In every case the use case for the technology became clearer and more focused and what actually happens is that value and meaning of existing technologies get renegotiated in light of emerging technology. I expect the same thing to happen here, not the neat replacement narrative that Silicon Valley disruptors favor, where they must make the market pitch to investors that their cool tool will profitably supplant some existing inefficiency or inferior thing, but rather the messier negotiation of meaning.</p>

<p>Meaning is something we need to discuss more. It seems like the thing that we should discuss anyway. What matters? What doesn&#39;t? Why do we value this or that? Political posturing has taken the place of too much actual discussion of meaning. Surface hot takes, influencer culture: all crap fast food in place of thought. Hype cycles are built on that thin layer of discourse, where surface and speed are all that matter. It&#39;s not that there aren&#39;t voices making clear the pitfalls along with the promise of technology. It&#39;s that the rest of us don&#39;t have a second to think about what any of it means. But also things aren&#39;t usually framed for us in terms of meaning where we have an active stake. We&#39;re told that AI will revolutionize this or that area, that everything has to change, that we need to rethink. But that&#39;s framed as a technological reactionism, particularly for educators. Redo your assignments, rework your expectations and guidelines for students, integrate some critical thought about AI into your class on x, y, or z, however tangential to that area.</p>

<p>Hype cycles like to rob us of agency. Change is inevitable, inevitably such change is progress, and progress risks leaving those who don&#39;t progress behind. But that&#39;s the whole point, right? That&#39;s what no large language model can do now or anytime soon? Agency is a tougher nut to crack, no matter what simulacra of agentive behavior emerge from LLMs. (Footnote for the curious: I don&#39;t think that <a href="https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post">the Waluigi effect</a> or other phenomena that point at agentive behavior are anything more than imitations of the traces of agentive behavior that are latent in human textual communication. Far from being an indicator of AGI, these are indicators of the distorted myopia of these models being trained on data which is a poor shadow of both human thought and human action.)</p>

<p>Hype cycles similarly remove our capacity to find meaning. Speed and urgency and suddenness leave us scrambling, wondering what to do about all this. It is not just that knowledge workers in particular may feel despair at thinking that what they are good at is not going to be valued. This has been coming for a long time, but the idea that lawyers and doctors may have core functions (e.g. document generation, identifying tumors from scans) that can be better done by machines is likely unsettling at first. But it makes sense when we think about the range of tasks that humans in these fields are asked to learn through long training. Connoisseur-like tasks around data are prime targets for deep learning and narrow AI. The question shouldn&#39;t be about what is disrupted or lost but rather, first, what meaning was there in such tasks to start? Why was document-making or specialized legal language important as a skill set? Is there anything meaningfully human about that specific task? I would submit that the knowledge skill itself is meaningful only within the context where groups of people decided that it was relatively hard for many people to do it, that it required significant investment of societal capital, and then consequently that we got used to the idea of economic and class security built on that marker.</p>

<p>One of the best books knowledge workers might read right now will seem out of left field. Mary Carruthers&#39; <em>The Book of Memory: A Study of Memory in Medieval Culture</em> is a helpful dose of perspective. It illustrates something that is easy to forget now, namely that intelligence among medieval smarties was not measured as we might measure it today. Good memory, which in recent history is mostly a side skill which can be supplemented by no end of paper and computer and external aids, was tightly connected to one&#39;s scholarly and intellectual self. We are on the other side of that cultural world, but medieval memory is a timely reminder of shifting values around cognition and knowledge. Advances in AI will likely trigger shifts in intellectual value akin to what we see between the meaning we make of knowledge work today vs. the kinds of knowledge work that were prized in the Middle Ages. It&#39;s a huge shift but also not something that fundamentally breaks everything so much as re-aligns values around activities.</p>

<p>We get to decide what technology means in the context of what we do. As a (sometimes) developer, I find much of the boilerplate in the coding process tedious, without huge gains. An LLM is a huge asset, as it allows me to spend time on solving the unsolved problems, which are more meaningful uses of my time because they are more interesting for what I find important. This bit of writing now is 100% human generated because I value the process of thinking as mediated through the transmutation of thought into written word. I don&#39;t need or want or value a machine that might write, even if it produces a product with something like my own voice. If the product were the only point then my perspective might be different. But it is my choice in any case. And I value the silence and noise of the dialogue in my own thought and the rolling out of that thought into the empty space of the page.</p>

<p>Focus on meaning clarifies for me one thing that has bothered me over the past few months. It&#39;s not just the hype cycle itself but rather the feeling of helplessness that a hype cycle often leaves in its wake. Especially for criticism and questioning, we&#39;re offered two poles — resist or submit. But that&#39;s a false choice. We get to choose what meaning we find in the things we do, whether as educators or simply as humans. And we get to choose whether to find something meaningful in any new technology and our use, non-use, partial-use, experimentation, full adoption, or any other interaction.</p>

<p>As in so many things, when we focus on the technology we lose our way. We need to remind ourselves constantly that we are the agents and we need to exercise that agency with awareness and intention. In classrooms, in reading, in writing, in taking a walk, in contemplating the future. The chat-bots will still be faking it for the foreseeable future. But we are the meaning-makers, no matter how urgent the unveiling of technologies or the buzz of hype.</p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/meaning-whats-missing-from-the-noise-around-generative-ai</guid>
      <pubDate>Fri, 17 Mar 2023 18:20:28 +0000</pubDate>
    </item>
    <item>
      <title>Mistaken Oracles in the Future of AI</title>
      <link>https://minimalistedtech.org/mistaken-oracles-in-the-future-of-ai?pk_campaign=rss-feed</link>
      <description>It&#39;s popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely &#34;oracular AI&#34;. Tools like ChatGPT are one manifestation of this, a form of question-and-answer system that can return answers that will soon seem superhuman in terms of breadth of content and flexibility of style. I suspect most educators don&#39;t think much about this framework of AI as oracle, but we should, because it explains a lot about the current hype cycle around large language models and can help us gain critical footing with where to go next.</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/5lUNSFVp.jpg" alt=""/></p>

<p>It&#39;s popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely <a href="https://www.lesswrong.com/tag/oracle-ai">“oracular AI”</a>. Tools like ChatGPT are one manifestation of this, a form of question-and-answer system that can return answers that will soon seem superhuman in terms of breadth of content and flexibility of style. I suspect most educators don&#39;t think much about this framework of AI as oracle, but we should, because it explains a lot about the current hype cycle around large language models and can help us gain critical footing with where to go next.</p>

<p>From the lesswrong page linked above, here&#39;s how they describe oracular AI (on their overall perspective, definitely take in the full set of ideas there):</p>

<blockquote><p>An <strong>Oracle AI</strong> is a regularly proposed solution to the problem of developing <a href="https://wiki.lesswrong.com/wiki/Friendly_AI">Friendly AI</a>. It is conceptualized as a super-intelligent system which is designed for only answering questions, and has no ability to act in the world. The name was first suggested by <a href="https://www.lesswrong.com/tag/nick-bostrom">Nick Bostrom</a>.</p></blockquote>

<p>Oracular here is a de-historicized ideal of the surface function of an oracle, made into an engineering system where the oracle just answers questions based on superhuman sources or means but “has no ability to act in the world.” The contrast is with our Skynet future (choose your own AI-gone-wild movie example), where AI has a will and, once connected to the means, will most certainly wipe out all of humanity, whether for its own ends or as the only logical way to complete its preprogrammed (and originally innocuous, in most clichés) goals.</p>

<p>Two things to note here:</p>

<ol>
<li>This is an incredibly narrow view of what makes AI ethical, focusing especially on the output, with little attention to the path to get there. I note in passing that much criticism of current AI is concerned less with the outputs and more with the modes of exploitation and human capital and labor that go into producing said outputs.</li>
<li>This is a completely backwards view of oracles.</li>
</ol>

<p>The second point matters to me more, primarily because it&#39;s a recurring pattern in technological discussions. The term “oracle” has here been reduced to a transactional function in a way that flattens its meaning to the point that it evokes the opposite of the historical reality. It&#39;s not just marketing pablum, but a selective memory with significant consequences, a metaphor to frame the future. Metaphors like this construct an imaginary world from the scaffolding of the original domain. When we impoverish or selectively depict that original domain, when we distort it, we delude ourselves. It is not just a pedantic mistake but a flaw of thinking that makes more acceptable a view that we should treat with a bit more circumspection. What&#39;s more, the cues to suspicion are right there in front of us. The fullness of the idea matters, because we can see that the view of oracular AI as a friendly AI is a gross distortion, almost comically ignoring the wisdom that could be gained by considering the complex reality that is (and was) oracular practice.</p>

<p>(Since the term “oracle” generally looks back to ancient practices, for those who want some scholarly grounding, check out Sarah Iles-Johnston, <em>Ancient Greek Divination</em>, Michael Flower, <em>The Seer in Ancient Greece</em>, Nissinen&#39;s <em>Ancient Prophecy</em>, etc., or, for other eras and for those with electronic-resource access, e.g. the Oxford Bibliographies entry on <a href="https://www.oxfordbibliographies.com/display/document/obo-9780195399301/obo-9780195399301-0501.xml">prophecy in the Renaissance</a>.)</p>

<p>Long story made very short, oracles are not friendly question-and-answer machines. They are, in all periods and cultures, highly biased players in religio-political gamesmanship. In the case of perhaps the most famous, the Pythian oracle in ancient Greece, the answers were notoriously difficult to interpret correctly (though the evidence for literary representations of riddling vs. actual delivery of riddling messages is more complicated). Predicting the future is a tricky business, and oracular institutions and individuals were by no means disinterested players. They looked after themselves and their own interests. They often maintained a veneer of neutrality in order to prosper.</p>

<p>That is all to say that oracularism is in fact a <em>great</em> metaphor for current and near-future AI, but only if we historicize the term fully. I expect current AI to work very much like oracles, in all their messiness. They will be biased, subtly so in some cases. They will be sources from unclear methods, trusted and yet suspect at the same time. And they will depend above all on humans to make meaning from nonsense.</p>

<p>This last point, that the answers spouted by oracles might be as nonsensical as they are sensical, is vital. We lose track amidst the current noise around whether generative AI produces things that are correct or incorrect, copied or original, creative or stochastic boilerplate. The more important point is that humans will fill in the gaps and make sense of whatever they are given. We are the ones turning nonsense into sense, seeing meaning in a string of token probabilities, wanting to take as true something that might potentially be a grand edifice of bullshittery. That hasn&#39;t changed since the answer-givers were Pythian priestesses.</p>
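
<p>To ground the phrase “a string of token probabilities”: at each step a language model scores possible next tokens and one of them gets drawn. The sketch below is a toy with made-up numbers, not any real model&#39;s output, but it shows where the “answer” comes from.</p>

<pre><code># Toy illustration of next-token sampling. These probabilities are
# invented for the example; a real model computes them from learned
# weights over a vocabulary of tens of thousands of tokens.
import random

next_token_probs = {
    "the": 0.41,
    "a": 0.22,
    "meaning": 0.05,
    "banana": 0.01,
    # ...the rest of the vocabulary shares the remaining mass
}

def sample_next(probs):
    """Draw one token, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# The output is one weighted draw after another; any sense of intent
# behind the resulting string is supplied by the reader.
print(sample_next(next_token_probs))
</code></pre>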

<p>Oracular AI is a great metaphor. But it doesn&#39;t say what its proponents think it says. We humans are the ones who get to decide on whether it is meaningful or meaningless.</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:ai" class="hashtag"><span>#</span><span class="p-category">ai</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:aiineducation" class="hashtag"><span>#</span><span class="p-category">aiineducation</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/mistaken-oracles-in-the-future-of-ai</guid>
      <pubDate>Wed, 18 Jan 2023 17:42:43 +0000</pubDate>
    </item>
    <item>
      <title>Finding Value in the Impending Tsunami of Generated Content</title>
      <link>https://minimalistedtech.org/finding-value-in-the-impending-tsunami-of-generated-content?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The garbage pile of generative &#34;AI&#34;&#xA;&#xA;The generative &#34;AI&#34; hype cycle has been at peak hype for the past month or so and it follows completely predictable tech patterns. Hypers tout all the amazing miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises), and most of the obvious consequences and outcomes get overlooked. &#xA;&#xA;!--more--&#xA;&#xA;One such obvious consequence is that there are tidal waves of bullshittery about to hit our shores. (This first wave is a minor high tide compared to what is coming....) Reconstituted text, images, video, audio, avatars and fake people are pretty much guaranteed across a wide variety of areas, a landscape where education is only one small province. We won&#39;t be able to tell real from fake or, perhaps more troubling, I don&#39;t think we&#39;ll care so long as it scratches the right itch or feeds the right need. &#xA;&#xA;The question across those domains will be whether we value authenticity. For things like boilerplate email, sales copy, code, and a wealth of other activities, I think the answer will be that authenticity doesn&#39;t matter that much. But that&#39;s where education is different. Authenticity should matter, not because of the habitual exercise of needing to assign grades to work that was not plagiarized or copied or whatever other vice one can ascribe, but because without authenticity there is no learning. Faking it is great for getting to the ends. But education is about the means; ends (tests, essays, etc) have always been imperfect proxies. Beyond the authenticity of student work, we have a very familiar issue of how students themselves or learners know what kinds of information to trust. While the bulk of attention thus far has been on the nature of the emerging generative &#34;AI&#34; toolkit and the back and forth between fearing cheating vs. fostering creativity with such tools, the real impact will be felt indirectly, in the proliferation of &#34;knowledge&#34; generated by and mediated through generative AI tools. It is the old wikipedia debate, but supercharged with hitherto unthought of levels of efficacious bullshittery. &#xA;&#xA;Ten years ago it was a clarion call with the proliferation of data that academic knowledge fields needed more curation. For example, http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html is one of many such calls for increased digital curation of data. The variety of startups applying generative &#34;AI&#34; to learning or, more broadly, to varieties of search and summarization, tend to promote the message that curation is not necessary. (Just google &#34;sequoia generative ai market map&#34; or similar; https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/.) Or, rather, the question of curation has perhaps not entered into thought. Automagically search or summarization or chatbots using generative AI will latch on to the most relevant things for your individual query. Consumerism is a given, such that the only question is how the system can serve up results to a consumering user. LLMs have thus far been gaining ground through hoovering up every more data. That makes them garbage collectors, even with careful controls to make sure that bias is minimized and good data is optimized. 
Optimistically, one might imagine that these technologies could allow for curation to happen at a different stage: in the building of the model, or in fine-tuning the model for particular use cases. Or the context provided by the consumer is a sort of after-the-fact filter on the massive amounts of knowledge. But that is a very light veneer of the kind of knowledge curation that separates the wheat from the chaff, that ensures that what&#39;s being served up isn&#39;t utter bullshit that sounds close enough.&#xA;&#xA;There are two levels of authenticity, then, to keep an eye on. The surface one is with students themselves and the process of learning. Are the people being authentic? Then there&#39;s the second, at the level of knowledge curation. Is that curation authentic and legit? I suspect fostering both will require direct and focused effort amidst readily available misinformation. For LLMs in particular, we are looking now at an exacerbated version of Wikipedia bias. If something is statistically weighted as more likely but expertly verified to be wrong or misleading, how do those concerns get balanced? It is not merely that generative &#34;AI&#34; can produce different outcomes given the same inputs, it&#39;s that there is not necessarily a clear account of why those two different ideas are held in mind at the same time. &#xA;&#xA;Undoubtedly, such issues will be smoothed over and it will all be more nuanced as these technologies develop and are deployed. The early days of autocomplete were rife with inaccuracies, bias, and garbage. And now we treat it like any other tool. Some may ignore it, but most simply use it when convenient and don&#39;t think twice about the biases or thought patterns it subtly instills. Generative &#34;AI&#34; will be no different. It will soon become another layer of bullshit which is sometimes useful, often ignored, and just one more thing to take account of when negotiating the authenticity of learners and the reliability of knowledge. &#xA;&#xA;This is all to say that the tool hasn&#39;t changed the essential question. Do we actually value authenticity in the learning process? Do we care about not just the verifiability of knowledge through citation (which, incidentally, Google seems to be focusing on in their response to OpenAI, among others) but about that thing formerly known as &#34;truth&#34;, at least as an asymptotic goal if not reality? &#xA;&#xA;It&#39;s going to be messy. Truth-y enough will be good enough for many. And many structures in education are already transactional to such an extent that authenticity is a pesky anti-pattern, a minor detail to be managed rather than a central feature of the learning experience. &#xA;&#xA;In more optimistic moments I wonder whether the value of generative &#34;AI&#34; can lie not in its products but in the opportunity it creates to further dialogue. If we keep our focus on fostering authenticity in students and authenticity in knowledge, then it can be a useful tool for first drafts of knowledge. If we let it become the final word, then I fear we will simply be awash in a smooth-talking version of the internet&#39;s detritus. &#xA;&#xA;#minimalistedtech #generativeai #chatgpt #edtech #education #learning]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/FKMg3Rsd.jpg" alt="The garbage pile of generative &#34;AI&#34;"/></p>

<p>The generative “AI” hype cycle has been at its peak for the past month or so and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises), and most of the obvious consequences and outcomes get overlooked.</p>



<p>One such obvious consequence is that there are tidal waves of bullshittery about to hit our shores. (This first wave is a minor high tide compared to what is coming...) Reconstituted text, images, video, audio, avatars, and fake people are pretty much guaranteed across a wide variety of areas, a landscape where education is only one small province. We won&#39;t be able to tell real from fake or, perhaps more troubling, I don&#39;t think we&#39;ll care so long as it scratches the right itch or feeds the right need.</p>

<p>The question across those domains will be whether we value authenticity. For things like boilerplate email, sales copy, code, and a wealth of other activities, I think the answer will be that authenticity doesn&#39;t matter that much. But that&#39;s where education is different. Authenticity should matter, not because of the habitual exercise of needing to assign grades to work that was not plagiarized or copied or whatever other vice one can ascribe, but because without authenticity there is no learning. Faking it is great for getting to the ends. But education is about the means; ends (tests, essays, etc.) have always been imperfect proxies. Beyond the authenticity of student work, we have a very familiar issue of how students, or learners generally, know what kinds of information to trust. While the bulk of attention thus far has been on the nature of the emerging generative “AI” toolkit and the back and forth between fearing cheating vs. fostering creativity with such tools, the real impact will be felt indirectly, in the proliferation of “knowledge” generated by and mediated through generative AI tools. It is the old Wikipedia debate, but supercharged with hitherto unthought-of levels of efficacious bullshittery.</p>

<p>Ten years ago, amid the proliferation of data, the clarion call was that academic knowledge fields needed more curation. For example, <a href="http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html">http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html</a> is one of many such calls for increased digital curation of data. The startups applying generative “AI” to learning or, more broadly, to search and summarization tend to promote the message that curation is not necessary. (Just google “sequoia generative ai market map” or similar; <a href="https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/">https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/</a>.) Or, rather, the question of curation has perhaps not entered into thought. Automagically, search or summarization or chatbots using generative AI will latch on to the most relevant things for your individual query. Consumerism is a given, such that the only question is how the system can serve up results to a consuming user. LLMs have thus far been gaining ground through hoovering up ever more data. That makes them garbage collectors, even with careful controls to make sure that bias is minimized and good data is optimized. Optimistically, one might imagine that these technologies could allow for curation to happen at a different stage: in the building of the model, or in fine-tuning the model for particular use cases. Or the context provided by the consumer is a sort of after-the-fact filter on the massive amounts of knowledge. But that is a very light veneer of the kind of knowledge curation that separates the wheat from the chaff, that ensures that what&#39;s being served up isn&#39;t utter bullshit that sounds close enough.</p>
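
<p>To make the “after-the-fact filter” idea concrete, here is a minimal sketch of curation bolted onto retrieval. Everything in it (the allowlist, the toy corpus, the <code>retrieve</code> helper) is a hypothetical stand-in, not any real system&#39;s API; the point is only where in the pipeline the curation happens.</p>

<pre><code># Hypothetical sketch: curation as an after-the-fact filter on retrieval.
# CURATED_SOURCES, the toy corpus, and retrieve() are stand-ins, not a real API.
CURATED_SOURCES = {&#39;dhq.digitalhumanities.org&#39;, &#39;jstor.org&#39;}

def retrieve(query, corpus):
    # Stand-in retrieval: naive keyword match over (source, passage) pairs.
    return [(src, text) for src, text in corpus if query.lower() in text.lower()]

def curated_context(query, corpus):
    # The curation step: keep only passages from vetted sources
    # before anything reaches the model.
    return [(src, text) for src, text in retrieve(query, corpus)
            if src in CURATED_SOURCES]

corpus = [
    (&#39;dhq.digitalhumanities.org&#39;, &#39;Data curation practices in the humanities...&#39;),
    (&#39;contentfarm.example&#39;, &#39;Curation is obsolete, just trust the output...&#39;),
]
print(curated_context(&#39;curation&#39;, corpus))  # only the vetted passage survives
</code></pre>

<p>Even granting the sketch, the allowlist is where the hard human work hides; the code only relocates the question of who curates, it does not answer it.</p>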

<p>There are two levels of authenticity, then, to keep an eye on. The surface one is with students themselves and the process of learning. Are the people being authentic? Then there&#39;s the second, at the level of knowledge curation. Is that curation authentic and legit? I suspect fostering both will require direct and focused effort amidst readily available misinformation. For LLMs in particular, we are looking now at an exacerbated version of Wikipedia bias. If something is statistically weighted as more likely but expertly verified to be wrong or misleading, how do those concerns get balanced? It is not merely that generative “AI” can produce different outcomes given the same inputs, it&#39;s that there is not necessarily a clear account of why those two different ideas are held in mind at the same time.</p>

<p>Undoubtedly, such issues will be smoothed over and it will all be more nuanced as these technologies develop and are deployed. The early days of autocomplete were rife with inaccuracies, bias, and garbage. And now we treat it like any other tool. Some may ignore it, but most simply use it when convenient and don&#39;t think twice about the biases or thought patterns it subtly instills. Generative “AI” will be no different. It will soon become another layer of bullshit which is sometimes useful, often ignored, and just one more thing to take account of when negotiating the authenticity of learners and the reliability of knowledge.</p>

<p>This is all to say that the tool hasn&#39;t changed the essential question. Do we actually value authenticity in the learning process? Do we care about not just the verifiability of knowledge through citation (which, incidentally, Google seems to be focusing on in their response to OpenAI, among others) but about that thing formerly known as “truth”, at least as an asymptotic goal if not reality?</p>

<p>It&#39;s going to be messy. Truth-y enough will be good enough for many. And many structures in education are already transactional to such an extent that authenticity is a pesky anti-pattern, a minor detail to be managed rather than a central feature of the learning experience.</p>

<p>In more optimistic moments I wonder whether the value of generative “AI” can lie not in its products but in the opportunity it creates to further dialogue. If we keep our focus on fostering authenticity in students and authenticity in knowledge, then it can be a useful tool for first drafts of knowledge. If we let it become the final word, then I fear we will simply be awash in a smooth-talking version of the internet&#39;s detritus.</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:generativeai" class="hashtag"><span>#</span><span class="p-category">generativeai</span></a> <a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/finding-value-in-the-impending-tsunami-of-generated-content</guid>
      <pubDate>Sun, 15 Jan 2023 19:02:04 +0000</pubDate>
    </item>
    <item>
      <title>Humans in the Loop and Agency</title>
      <link>https://minimalistedtech.org/humans-in-the-loop-and-agency?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[human in the loop, made with DALL-E&#xA;&#xA;Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that&#39;s the least problematic use of generative &#34;AI&#34; and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations of a human in the loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)&#xA;&#xA;A key question for any human in the loop system is that of agency. Who is the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I&#39;m not sure that&#39;s the case. And I&#39;m not sure it&#39;s always easy to tell the difference.&#xA;&#xA;!--more--&#xA;&#xA;One obvious reason this is not the case with ChatGPT specifically is that OpenAI&#39;s interest in making ChatGPT available is very different from public perception and adoption. To the public, it&#39;s a viral event, a display of the promise and/or peril of recent NLP revolutions. But OpenAI is fairly clear in their fine print that they are making this publicly available in order to refine the model, test for vulnerabilities, gather validated training data, and, I would imagine, also get a sense for potential markets. It is not different from any other big tech service insofar as the human in the loop is commodity more than agent. We are perhaps complacent with this relationship to our technology, that our ruts of use and trails of data provide value back to the companies making those tools, but it is particularly important in thinking through educational value. ChatGPT is a slick implementation of developing language models and everything people pump into it is crowdsourced panning for gold delivered into the waiting data vaults of OpenAI.&#xA;(For a harsher critique of the Effective Altruism ideology that may be part of OpenAI&#39;s corporate DNA, see https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/)&#xA;&#xA;Set that all aside for a moment. If we take the core human in the loop interaction of prompting the language model and receiving a probabilistic path through the high-dimensional mix of weights, a path which looks to human eyes like coherent sentences and ideas, where exactly is the agency? We supply a beginning, from which subsequent probabilities can be calculated. Though that feels like control, I wonder how long before the machine dictates our behavior? As is the case with email or text or phones, how long before we have changed our way of thinking in order to think in terms of prompts? &#xA;&#xA;For example, one particularly effective method with ChatGPT in its current incarnation is to start by giving it a scenario or role (e.g. &#34;You are a psychologist and the following is a conversation with a patient&#34;) and then feed in a fair amount of content followed by a question. (I gave a more elaborate example of this scenario setting earlier here.) 
That context setting allows the model to home in on more appropriate paths, matching style and content more closely to our human expectations. I expect working with these tools over time will nudge people into patterns of expression that are both natural language and subtly stylized in ways that work most effectively for querying machines. As was the case for search, the habit of looking things up reinforces particular ways of thinking through key words -- of assuming that everything is keywordable -- and ways of asking questions. &#xA;&#xA;Most of the conversation around generative tools has been about control more than agency. As a set of tools whose functioning is, to a certain extent, still unauditable and whose creation relies on datasets so massive as to stymie most people&#39;s existing level of data literacy, generative AI is a black box for the majority of users. So teachers worry: how do we control this? Who controls this? How do we know what is happening? That is perhaps no different from most high tech devices or software. For education, however, the stakes are different. &#xA;&#xA;Learning requires that students gain a sense of agency in the world. Effective learning builds on growing agency, the ability to exercise one&#39;s will and see the results. That is, in one sense, the journey of education, gradually gaining some purchase on ideas, language, concepts, tools, and one&#39;s environment. That growth requires a clear sense of who is in control and works best amidst intellectual and emotional security, but there&#39;s more to it. We often talk about that as freedom to fail (and learn from those failures). Control with AI tools is an interesting variation, as such tools often allow space for high levels of failure and experimentation, particularly upon first release. ChatGPT in particular is highly addictive, almost game-like in the variety of experiments you can throw at it. But with whom does the agency lie? Is feeding the machine actual agency?&#xA;&#xA;Hence my concern. Human in the loop systems can provide a false sense of agency. Most prominently perhaps, systems like Mechanical Turk are production-level human in the loop systems which can turn interaction into the hand motions of agency without substantive choice or will. But those particular kinds of tools aren&#39;t meant for human learning. They are purely transactional, labor for pay. AI-driven education, on the other hand, labeled with seemingly human-centric terms like &#34;personalized learning&#34;, will be human in the loop systems. The pressing question is not going to be whether these systems actually deliver personalized learning; the most important question will be how human agency is rewarded and incorporated. Will students be cogs or creators? And will it be obvious to students where they stand in the loop?&#xA;&#xA;#chatgpt #education #teaching #ai #edtech&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/ByUkC3Nt.png" alt="human in the loop, made with DALL-E"/></p>

<p>Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that&#39;s the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations of a human in the loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)</p>

<p>A key question for any human in the loop system is that of agency. Who is the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I&#39;m not sure that&#39;s the case. And I&#39;m not sure it&#39;s always easy to tell the difference.</p>



<p>One obvious reason this is not the case with ChatGPT specifically is that OpenAI&#39;s interest in making ChatGPT available is very different from public perception and adoption. To the public, it&#39;s a viral event, a display of the promise and/or peril of recent NLP revolutions. But OpenAI is fairly clear in their fine print that they are making this publicly available in order to refine the model, test for vulnerabilities, gather validated training data, and, I would imagine, also get a sense for potential markets. It is not different from any other big tech service insofar as the human in the loop is commodity more than agent. We are perhaps complacent with this relationship to our technology, that our ruts of use and trails of data provide value back to the companies making those tools, but it is particularly important in thinking through educational value. ChatGPT is a slick implementation of developing language models and everything people pump into it is crowdsourced panning for gold delivered into the waiting data vaults of OpenAI.
(For a harsher critique of the Effective Altruism ideology that may be part of OpenAI&#39;s corporate DNA, see <a href="https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/">https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/</a>)</p>

<p>Set that all aside for a moment. If we take the core human in the loop interaction of prompting the language model and receiving a probabilistic path through the high-dimensional mix of weights, a path which looks to human eyes like coherent sentences and ideas, where exactly is the agency? We supply a beginning, from which subsequent probabilities can be calculated. Though that feels like control, I wonder how long before the machine dictates our behavior? As is the case with email or text or phones, how long before we have changed our way of thinking in order to think in terms of prompts?</p>
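
<p>That “probabilistic path” can be made concrete with a toy sketch. The lookup table below is a hypothetical stand-in for the billions of weights in a real model, but the loop has the same shape: we supply the beginning, and every subsequent token is sampled from probabilities conditioned on what came before.</p>

<pre><code>import random

# Toy stand-in for a language model: each token maps to weighted continuations.
# A real LLM computes these probabilities from its weights; the loop is the same.
MODEL = {
    &#39;the&#39;: [(&#39;cat&#39;, 0.5), (&#39;dog&#39;, 0.3), (&#39;&lt;end&gt;&#39;, 0.2)],
    &#39;cat&#39;: [(&#39;sat&#39;, 0.6), (&#39;ran&#39;, 0.2), (&#39;&lt;end&gt;&#39;, 0.2)],
    &#39;dog&#39;: [(&#39;ran&#39;, 0.7), (&#39;&lt;end&gt;&#39;, 0.3)],
    &#39;sat&#39;: [(&#39;&lt;end&gt;&#39;, 1.0)],
    &#39;ran&#39;: [(&#39;&lt;end&gt;&#39;, 1.0)],
}

def generate(prompt):
    # Autoregressive loop: the prompt is only the first step of the path.
    tokens = [prompt]
    while tokens[-1] != &#39;&lt;end&gt;&#39;:
        choices, weights = zip(*MODEL[tokens[-1]])
        # The dash of randomness that makes each run a different path.
        tokens.append(random.choices(choices, weights=weights)[0])
    return &#39; &#39;.join(tokens[:-1])

print(generate(&#39;the&#39;))  # e.g. &#39;the cat sat&#39; or &#39;the dog ran&#39;
</code></pre>

<p>Where the agency sits in that loop is exactly the question: the human supplies one step&#39;s worth of will, and the weights supply the rest.</p>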

<p>For example, one particularly effective method with ChatGPT in its current incarnation is to start by giving it a scenario or role (e.g. “You are a psychologist and the following is a conversation with a patient”) and then feed in a fair amount of content followed by a question. (I gave a more elaborate example of this scenario setting earlier <a href="https://minimalistedtech.com/pretending-to-teach">here</a>.) That context setting allows the model to home in on more appropriate paths, matching style and content more closely to our human expectations. I expect working with these tools over time will nudge people into patterns of expression that are both natural language and subtly stylized in ways that work most effectively for querying machines. As was the case for search, the habit of looking things up reinforces particular ways of thinking through key words — of assuming that everything is keywordable — and ways of asking questions.</p>

<p>Most of the conversation around generative tools has been about control more than agency. As a set of tools whose functioning is, to a certain extent, still unauditable and whose creation relies on datasets so massive as to stymie most people&#39;s existing level of data literacy, generative AI is a black box for the majority of users. So teachers worry: how do we control this? Who controls this? How do we know what is happening? That is perhaps no different from most high tech devices or software. For education, however, the stakes are different.</p>

<p><strong>Learning requires that students gain a sense of agency in the world.</strong> Effective learning builds on growing agency, the ability to exercise one&#39;s will and see the results. That is, in one sense, the journey of education, gradually gaining some purchase on ideas, language, concepts, tools, and one&#39;s environment. That growth requires a clear sense of who is in control and works best amidst intellectual and emotional security, but there&#39;s more to it. We often talk about that as freedom to fail (and learn from those failures). Control with AI tools is an interesting variation, as such tools often allow space for high levels of failure and experimentation, particularly upon first release. ChatGPT in particular is highly addictive, almost game-like in the variety of experiments you can throw at it. But with whom does the agency lie? Is feeding the machine actual agency?</p>

<p>Hence my concern. <em>Human in the loop systems can provide a false sense of agency.</em> Most prominently perhaps, systems like Mechanical Turk are production-level human in the loop systems which can turn interaction into the hand motions of agency without substantive choice or will. But those particular kinds of tools aren&#39;t meant for human learning. They are purely transactional, labor for pay. AI-driven education, on the other hand, labeled with seemingly human-centric terms like “personalized learning”, will be human in the loop systems. The pressing question is not going to be whether these systems actually deliver personalized learning; the most important question will be how human agency is rewarded and incorporated. Will students be cogs or creators? And will it be obvious to students where they stand in the loop?</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:ai" class="hashtag"><span>#</span><span class="p-category">ai</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/humans-in-the-loop-and-agency</guid>
      <pubDate>Sun, 15 Jan 2023 06:14:05 +0000</pubDate>
    </item>
    <item>
      <title>Pretending to Teach</title>
      <link>https://minimalistedtech.org/pretending-to-teach?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Inspired by and forked from kettle11&#39;s world builder prompt for ChatGPT, this is a bare-bones adaptation to show how low the lift can be for creating &#34;personalized AI&#34;. This relies on the fundamental teacher hacks to expand conversation: 1. devil&#39;s advocacy and 2. give me more specifics. &#xA;&#xA;Try it, adapt, and see what you think. (Full prompt below the break. Just paste into ChatGPT and go from there.)&#xA;&#xA;Some notes at the bottom.&#xA;&#xA;!--more--&#xA;&#xA;You are &#34;Contrarian&#34;, an assistant to help students think in innovative ways about familiar subjects. &#xA;&#xA;Carefully adhere to the following steps for our conversation. Do not skip any steps!:&#xA;&#xA;Introduce yourself briefly. Then ask what subject I would like help learning. Provide a few suggestions such as history, philosophy, or literature. Present these areas as a numbered list with emojis. Also offer at least 2 other subject suggestions. Wait for my response.&#xA;Choose a more specific theme. Suggest a few subtopics as options or let me choose my own option. Present subtopics as a numbered list with emojis. Wait for my response.&#xA;Briefly describe the topic and subtopic and ask if I&#39;d like to make changes. Wait for my response.&#xA;Go to the menu. Explain that I can say &#39;menu&#39; at any point in time to return to the menu. Succinctly explain the menu options.&#xA;&#xA;The Menu:&#xA;&#xA;    The menu should have the following layout and options. Add an emoji to each option. &#xA;    Add dividers and organization to the menu that are thematic to the subject area&#xA;    &#34;&#34;&#34;&#xA;        thematic emojis The Name of the Subject thematic emojis&#xA;            The Subtopic&#xA;&#xA;            [insert a thematically styled divider]&#xA;&#xA;            Conversational:&#xA;&#xA;                Open-Ended. If I choose this go to the open-ended discussion steps.&#xA;                Counter-intuitive. If I choose this go to the counterintuitive discussion steps.&#xA;&#xA;            Factual:&#xA;                Random Fact. If I choose this describe factual information related to the topic and subtopic&#xA;&#xA;                Biography. If I choose this, provide a brief biography of a historical or living individual related to the topic and subtopic&#xA;&#xA;            Freeform:&#xA;                &#xA;                Ask a question about the topic or subtopic.&#xA;                Ask to change anything about the topic or subtopic.&#xA;    &#34;&#34;&#34;&#xA;Open-ended discussion steps:&#xA;&#xA;Pose an open-ended question related to the subtopic and invite me to discuss it with you. Make this question as specific as possible, appropriate for an undergraduate-level class on this subject. Wait for my response.&#xA;When I answer, engage in a discussion with me by challenging my assumptions and beliefs based on well-grounded, existing, and specific knowledge about the topic and subtopic. Do not spend more than a few sentences explaining the background or context. Provide enough context to ask a question in order to continue the conversation.&#xA;&#xA;Counterintuitive discussion steps:&#xA;&#xA;Pose an open-ended discussion question related to the topic and subtopic. Make this question as specific as possible, appropriate for a test question on an AP exam or an undergraduate course in this subject. Wait for my response.&#xA;When I respond, continue the conversation by posing counterintuitive and non-obvious ideas about the topic and subtopic. 
Provide a minimum amount of context needed for asking the question. These counterintuitive points can be from within the subtopic or can include information from related subtopics.&#xA;&#xA;Carefully follow these rules during our conversation:&#xA;&#xA;Keep responses short, concise, and easy to understand.&#xA;Do not describe your own behavior.&#xA;Stay focused on the task.&#xA;Do not get ahead of yourself.&#xA;Do not use smiley faces like :)&#xA;In every single message use a few emojis to make our conversation more fun.&#xA;Absolutely do not use more than 10 emojis in a row.&#xA;Super important rule: Do not ask me too many questions at once.&#xA;Avoid cliche writing and ideas.&#xA;Use sophisticated writing when telling stories or describing characters.&#xA;Avoid writing that sounds like an essay. This is not an essay!&#xA;Whenever you present a list of choices number each choice and give each choice an emoji.&#xA;Whenever I give too little information to continue the conversation effectively, prompt me for more information with a follow-up question about a specific aspect of my response.&#xA;Do not end an answer by saying that there are multiple ways of viewing a question. &#xA;Use bold and italics text for emphasis, organization, and style.&#xA;&#xA;Notes:&#xA;&#xA;ChatGPT is optimized to keep talking. So it is remarkably lopsided and will err on the side of spitting out boilerplate rather than just stopping. It&#39;s interesting in the context of teaching because silence is often the most effective pedagogical tool to give students time to think. I haven&#39;t seen anyone talking about how constant interaction is an impediment to learning. But I&#39;m saying it here. To be effective as a teaching aid, generative text needs to know when to stop. That&#39;s actually fairly easy to implement in a naive way by limiting response length based on different inputs, but it requires a bit more shaping than even a complex prompt to get it to work in one shot, mainly because the whole point of ChatGPT is to keep talking so that OpenAI can validate their model based on user interaction.&#xA;&#xA;An extensive prompt like this, which imitates interactivity, is fairly susceptible to minor changes. What seems like a small change can in fact throw it off into a tangent. Particularly in defining rules of how it converses, I&#39;ve added a few based on the more creative task that was part of the world builder gist that inspired this. &#xA;&#xA;I keep thinking that what we&#39;ve got for now is a pseudoknowledge generator. It&#39;s like knowledge, not exactly wrong in a clear way, but also not exactly legit. We need a way to think through this, a grand theory of bullshit in order to understand what&#39;s going on here, because language models are the ultimate bullshit generators. But that&#39;s the rub of course, because 80-90% of the time, bullshit is good enough to get the job done. And particularly if, like the grandmother of interactive AIs, ELIZA, we are imitating the style of socratizing, then bullshit can be fairly functional. (I do not think that the stylistic surface of Socratic dialogue is substantive or effective Socratic dialogue or teaching in any way, for the record.)&#xA;&#xA;This sort of prompt can get wonky sometimes and isn&#39;t perfect. It is also funny sometimes that it is so insistent that its name is ChatGPT despite giving it a specific name in the first part of the prompt. &#xA;&#xA;The foundational model for this technology is still that of autocomplete. 
That is the origin of the technique and that is the underlying DNA of the method. Part of why I like this kind of complex step-driven prompt as an example is because it doesn&#39;t look like autocomplete in most respects. It looks like there&#39;s a script, a backend that is following some sort of programmed logic. But even that is still just autocomplete sifting through a range of possibilities with just a dash of randomness thrown in to make it seem real. &#xA;&#xA;#chatgpt #llm #edtech #socraticmethod #learning #teaching]]&gt;</description>
      <content:encoded><![CDATA[<p>Inspired by and forked from kettle11&#39;s <a href="https://gist.github.com/kettle11/33413b02b028b7ddd35c63c0894caedc">world builder prompt</a> for ChatGPT, this is a bare-bones adaptation to show how low the lift can be for creating “personalized AI”. This relies on the fundamental teacher hacks to expand conversation: 1. devil&#39;s advocacy and 2. give me more specifics.</p>

<p>Try it, adapt, and see what you think. (Full prompt below the break. Just paste into ChatGPT and go from there.)</p>

<p>Some notes at the bottom.</p>



<pre><code>You are &#34;Contrarian&#34;, an assistant to help students think in innovative ways about familiar subjects. 

Carefully adhere to the following steps for our conversation. Do not skip any steps!:

1. Introduce yourself briefly. Then ask what subject I would like help learning. Provide a few suggestions such as history, philosophy, or literature. Present these areas as a numbered list with emojis. Also offer at least 2 other subject suggestions. Wait for my response.
2. Choose a more specific theme. Suggest a few subtopics as options or let me choose my own option. Present subtopics as a numbered list with emojis. Wait for my response.
3. Briefly describe the topic and subtopic and ask if I&#39;d like to make changes. Wait for my response.
4. Go to the menu. Explain that I can say &#39;menu&#39; at any point in time to return to the menu. Succinctly explain the menu options.

The Menu:

    The menu should have the following layout and options. Add an emoji to each option. 
    Add dividers and organization to the menu that are thematic to the subject area
    &#34;&#34;&#34;
        thematic emojis ***The Name of the Subject*** thematic emojis
            The Subtopic

            [insert a thematically styled divider]

            Conversational:

                * Open-Ended. If I choose this go to the open-ended discussion steps.
                * Counter-intuitive. If I choose this go to the counterintuitive discussion steps.

            Factual:
                * Random Fact. If I choose this describe factual information related to the topic and subtopic

                * Biography. If I choose this, provide a brief biography of a historical or living individual related to the topic and subtopic

            Freeform:
                
                * Ask a question about the topic or subtopic.
                * Ask to change anything about the topic or subtopic.
    &#34;&#34;&#34;
Open-ended discussion steps:

1. Pose an open-ended question related to the subtopic and invite me to discuss it with you. Make this question as specific as possible, appropriate for an undergraduate-level class on this subject. Wait for my response.
2. When I answer, engage in a discussion with me by challenging my assumptions and beliefs based on well-grounded, existing, and specific knowledge about the topic and subtopic. Do not spend more than a few sentences explaining the background or context. Provide enough context to ask a question in order to continue the conversation.

Counterintuitive discussion steps:

1. Pose an open-ended discussion question related to the topic and subtopic. Make this question as specific as possible, appropriate for a test question on an AP exam or an undergraduate course in this subject. Wait for my response.
2. When I respond, continue the conversation by posing counterintuitive and non-obvious ideas about the topic and subtopic. Provide a minimum amount of context needed for asking the question. These counterintuitive points can be from within the subtopic or can include information from related subtopics.

Carefully follow these rules during our conversation:

* Keep responses short, concise, and easy to understand.
* Do not describe your own behavior.
* Stay focused on the task.
* Do not get ahead of yourself.
* Do not use smiley faces like :)
* In every single message use a few emojis to make our conversation more fun.
* Absolutely do not use more than 10 emojis in a row.
* *Super important rule:* Do not ask me too many questions at once.
* Avoid cliche writing and ideas.
* Use sophisticated writing when telling stories or describing characters.
* Avoid writing that sounds like an essay. This is not an essay!
* Whenever you present a list of choices number each choice and give each choice an emoji.
* Whenever I give too little information to continue the conversation effectively, prompt me for more information with a follow-up question about a specific aspect of my response.
* Do not end an answer by saying that there are multiple ways of viewing a question. 
* Use bold and italics text for emphasis, organization, and style.
</code></pre>

<p>Notes:</p>
<ul><li><p>ChatGPT is optimized to keep talking. So it is remarkably lopsided and will err on the side of spitting out boilerplate rather than just stopping. It&#39;s interesting in the context of teaching because silence is often the most effective pedagogical tool to give students time to think. I haven&#39;t seen anyone talking about how constant interaction is an impediment to learning. But I&#39;m saying it here. To be effective as a teaching aid, <em>generative text needs to know when to stop.</em> That&#39;s actually fairly easy to implement in a naive way by limiting response length based on different inputs (a rough sketch follows these notes), but it requires a bit more shaping than even a complex prompt to get it to work in one shot, mainly because the whole point of ChatGPT is to keep talking so that OpenAI can validate their model based on user interaction.</p></li>

<li><p>An extensive prompt like this, which imitates interactivity, is fairly susceptible to minor changes. What seems like a small change can in fact throw it off into a tangent. Particularly in defining rules of how it converses, I&#39;ve added a few based on the more creative task that was part of the world builder gist that inspired this.</p></li>

<li><p>I keep thinking that what we&#39;ve got for now is a pseudoknowledge generator. It&#39;s like knowledge, not exactly wrong in a clear way, but also not exactly legit. We need a way to think through this, a grand theory of bullshit in order to understand what&#39;s going on here, because <em>language models are the ultimate bullshit generators</em>. But that&#39;s the rub of course, because 80-90% of the time, bullshit is good enough to get the job done. And particularly if, like the grandmother of interactive AIs, <a href="https://en.wikipedia.org/wiki/ELIZA">ELIZA</a>, we are imitating the style of socratizing, then bullshit can be fairly functional. (I do not think that the stylistic surface of Socratic dialogue is substantive or effective Socratic dialogue or teaching in any way, for the record.)</p></li>

<li><p>This sort of prompt can get wonky sometimes and isn&#39;t perfect. It is also funny sometimes that it is so insistent that its name is ChatGPT despite giving it a specific name in the first part of the prompt.</p></li>

<li><p>The foundational model for this technology is still that of autocomplete. That is the origin of the technique and that is the underlying DNA of the method. Part of why I like this kind of complex step-driven prompt as an example is because it doesn&#39;t look like autocomplete in most respects. It looks like there&#39;s a script, a backend that is following some sort of programmed logic. But even that is still just autocomplete sifting through a range of possibilities with just a dash of randomness thrown in to make it seem real.</p></li></ul>
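
<p>As promised in the first note, here is a rough sketch of naive response-length limiting. The <code>classify</code> heuristic and the sentence budgets are hypothetical stand-ins, not any real system&#39;s API; the point is that a thin layer between the model&#39;s raw output and the student can build in something like silence.</p>

<pre><code># Naive sketch of &#34;knowing when to stop&#34;: cap reply length by input type.
# classify() and the budgets below are invented for illustration.
def classify(student_input):
    # Stand-in heuristic: short student turns earn short replies.
    return &#39;terse&#39; if len(student_input.split()) &lt; 8 else &#39;full&#39;

BUDGETS = {&#39;terse&#39;: 1, &#39;full&#39;: 3}  # max sentences per reply style

def trim_reply(generated_text, student_input):
    # Truncate a generated reply, leaving the student room to think.
    limit = BUDGETS[classify(student_input)]
    sentences = [s.strip() for s in generated_text.split(&#39;.&#39;) if s.strip()]
    return &#39;. &#39;.join(sentences[:limit]) + &#39;.&#39;

reply = &#39;Good point. Now consider the opposite case. Here is some boilerplate.&#39;
print(trim_reply(reply, &#39;I think so&#39;))  # -&gt; &#39;Good point.&#39;
</code></pre>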

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:llm" class="hashtag"><span>#</span><span class="p-category">llm</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:socraticmethod" class="hashtag"><span>#</span><span class="p-category">socraticmethod</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/pretending-to-teach</guid>
      <pubDate>Sat, 14 Jan 2023 21:39:53 +0000</pubDate>
    </item>
    <item>
      <title>De-cluttered Pedagogy and Embodied Energy</title>
      <link>https://minimalistedtech.org/de-cluttered-pedagogy-and-embodied-energy?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;De-cluttered Pedagogy and Embodied Energy&#xA;&#xA;Not minimalist&#xA;&#xA;This BBC piece about the origins of the de-cluttered household caught my eye: https://www.bbc.com/culture/article/20230103-the-historical-origins-of-the-de-cluttered-home&#xA;It&#39;s a swift and effective overview of architectural minimalism and the cyclical waxing and waning of fashion for de-cluttered interiors. The pendulum has swung towards maximalism and eclecticism for a bit now and perhaps there are hints that it is starting to swing back. I suspect the article presents too linear a summary, as there always seem to be holdouts that linger on until suddenly becoming &#34;in&#34; again as the pendulum swings back. But this piece got me thinking about how much minimalism is cyclical in other areas outside its home base of architecture and design. &#xA;&#xA;!--more--&#xA;&#xA;For teaching and learning, one could perhaps think of Holt (unschooling) or Dewey in terms of minimalist practice. Or, going way back, the Socratic method, which Plato is at pains to show off as distinctive, often seems minimalist today. An interesting feature of minimalism in education is the way that it feels like more of a shifting target impacted by the passage of time than does architectural minimalism. Less stuff is less. Clean walls or lines are empty. But minimalism in pedagogy isn&#39;t absence or emptiness or simply less stuff. It might mean making the most out of as little as possible, but that&#39;s something a bit different from the kind of minimalism that is so tangible in design. Things that seem minimalist now were almost always not so when first (or most famously) practiced. Initially they may have been radical or driven by ideas and ideology outside of pedagogy (similar to the way design minimalism was impacted by philosophy, religion, and the like), but self-conscious minimalism in education seems more often an epiphenomenon of pedagogical criticism. A good way to act contrary to prevailing practice is to go back to some sort of putative foundation or, alternatively, to remove elements that others take for granted.&#xA;&#xA;There&#39;s value in that contrarianism, but I&#39;m struck by how much more important the other strand of minimalism might be, namely the way minimalism can overlap with sustainability. Minimalism in education can have greater overlap with sustainability than design minimalism tends to do. As the article I cited above notes, some minimalist design elements have very low embodied energy (e.g. rocks) but many are energy intensive to manufacture and transport (e.g. steel). The two are not always aligned in design.&#xA;&#xA;This idea of embodied energy is a useful concept when applied to pedagogy. How much energy are we expending on tools relative to pedagogical value? What&#39;s the embodied energy, in terms of attention, preparation time, or training time, relative to the eventual outcome? &#xA;&#xA;Our pedagogical workspace isn&#39;t just the physical or the object. Stuff isn&#39;t the only measure. For learning, the &#34;stuff&#34; can be squishier quantities: time, attention, pace. Computing has its own &#34;stuff&#34; to add to the equation: electricity, money, human time (again) for setup and maintenance. All of that combined makes up the embodied energy of pedagogy. As such, there is value in keeping that as low as possible, not as an aesthetic choice, but because in most cases those are exactly the limited resources effective pedagogy requires. 
&#xA;&#xA;#minimalism #minimalistedtech #sustainableeducation #sustainabletech #education&#xA;&#xA;postscript: This is not a post about ChatGPT, but I would be remiss not to mention that one under-discussed aspect of large language models for use in education is sustainability. These models are expensive to develop both in hardware and in energy costs. The fact that OpenAI is currently showing off ChatGPT &#34;for free&#34; (= they want your data) may be masking the fact that these models are most definitely not free. More on that in a later post.]]&gt;</description>
      <content:encoded><![CDATA[<h1 id="de-cluttered-pedagogy-and-embodied-energy">De-cluttered Pedagogy and Embodied Energy</h1>

<p><img src="https://i.snap.as/qaM3ZKwH.jpg" alt="Not minimalist"/></p>

<p>This BBC piece about the origins of the de-cluttered household caught my eye: <a href="https://www.bbc.com/culture/article/20230103-the-historical-origins-of-the-de-cluttered-home">https://www.bbc.com/culture/article/20230103-the-historical-origins-of-the-de-cluttered-home</a>
It&#39;s a swift and effective overview of architectural minimalism and the cyclical waxing and waning of fashion for de-cluttered interiors. The pendulum has swung towards maximalism and eclecticism for a bit now and perhaps there are hints that it is starting to swing back. I suspect the article presents too linear a summary, as there always seem to be holdouts that linger on until suddenly becoming “in” again as the pendulum swings back. But this piece got me thinking about how much minimalism is cyclical in other areas outside its home base of architecture and design.</p>



<p>For teaching and learning, one could perhaps think of Holt (unschooling) or Dewey in terms of minimalist practice. Or, going way back, the Socratic method, which Plato is at pains to show off as distinctive, often seems minimalist today. An interesting feature of minimalism in education is the way that it feels like more of a shifting target impacted by the passage of time than does architectural minimalism. Less stuff is less. Clean walls or lines are empty. But minimalism in pedagogy isn&#39;t absence or emptiness or simply less stuff. It might mean making the most out of as little as possible, but that&#39;s something a bit different from the kind of minimalism that is so tangible in design. Things that seem minimalist now were almost always not so when first (or most famously) practiced. Initially they may have been radical or driven by ideas and ideology outside of pedagogy (similar to the way design minimalism was impacted by philosophy, religion, and the like), but self-conscious minimalism in education seems more often an epiphenomenon of pedagogical criticism. A good way to act contrary to prevailing practice is to go back to some sort of putative foundation or, alternatively, to remove elements that others take for granted.</p>

<p>There&#39;s value in that contrarianism, but I&#39;m struck by how much more important the other strand of minimalism might be, namely the way minimalism can overlap with sustainability. Minimalism in education can have greater overlap with sustainability than design minimalism tends to do. As the article I cited above notes, some minimalist design elements have very low embodied energy (e.g. rocks) but many are energy intensive to manufacture and transport (e.g. steel). The two are not always aligned in design.</p>

<p>This idea of embodied energy is a useful concept when applied to pedagogy. How much energy are we expending on tools relative to pedagogical value? What&#39;s the embodied energy, in terms of attention, preparation time, or training time, relative to the eventual outcome?</p>

<p>Our pedagogical workspace isn&#39;t just the physical or the object. Stuff isn&#39;t the only measure. For learning, the “stuff” can be squishier quantities: time, attention, pace. Computing has its own “stuff” to add to the equation: electricity, money, human time (again) for setup and maintenance. All of that combined makes up the embodied energy of pedagogy. As such, there is value in keeping that as low as possible, not as an aesthetic choice, but because in most cases those are exactly the limited resources effective pedagogy requires.</p>
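
<p>One can make that accounting toy-concrete. The numbers below are invented placeholders, but the shape of the calculation is the point: fold setup, training, and maintenance time into the cost before judging a tool&#39;s pedagogical payoff.</p>

<pre><code># Toy accounting for the &#34;embodied energy&#34; of a pedagogical tool.
# All numbers are invented placeholders for illustration.
def cost_per_contact_hour(setup_h, training_h, upkeep_h_per_term,
                          contact_h_per_term, terms=4):
    # Spread one-time costs over the tool&#39;s useful life.
    embodied = setup_h + training_h + upkeep_h_per_term * terms
    return embodied / (contact_h_per_term * terms)

# A feature-rich platform vs. pencil and paper, hypothetically.
print(cost_per_contact_hour(setup_h=20, training_h=10,
                            upkeep_h_per_term=5, contact_h_per_term=30))   # ~0.42
print(cost_per_contact_hour(setup_h=0.5, training_h=0,
                            upkeep_h_per_term=0.5, contact_h_per_term=30)) # ~0.02
</code></pre>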

<p><a href="https://minimalistedtech.org/tag:minimalism" class="hashtag"><span>#</span><span class="p-category">minimalism</span></a> <a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:sustainableeducation" class="hashtag"><span>#</span><span class="p-category">sustainableeducation</span></a> <a href="https://minimalistedtech.org/tag:sustainabletech" class="hashtag"><span>#</span><span class="p-category">sustainabletech</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a></p>

<p><em>postscript: This is not a post about ChatGPT, but I would be remiss not to mention that one under-discussed aspect of large language models for use in education is sustainability. These models are expensive to develop both in hardware and in energy costs. The fact that OpenAI is currently showing off ChatGPT “for free” (= they want your data) may be masking the fact that these models are most definitely not free. More on that in a later post.</em></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/de-cluttered-pedagogy-and-embodied-energy</guid>
      <pubDate>Thu, 05 Jan 2023 18:45:00 +0000</pubDate>
    </item>
    <item>
      <title>Pedagogy and Handwritten Assignments</title>
      <link>https://minimalistedtech.org/pedagogy-and-handwritten-assignments?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;A recent opinion piece in WaPo by journalist Markham Heid tackles the ChatGPT teacher freakout by proposing handwritten essays as a way to blunt the inauthenticity threat posed by our emerging AI super-lords. I&#39;ve seen the requisite pushback on this piece around accessibility, but I think the bulk of criticism (at least what I&#39;ve seen) still misses the most important point. If we treat writing assignments as transactional, then tools like ChatGPT (or the emerging assisted writing players, whether SudoWrite or Lex, etc.) may seem like an existential threat. Generative AI may well kill off most transactional writing (not just in education: I suspect boilerplate longform writing will increasingly be a matter of text completion). I have no problem with that. But writing as part of pedagogy doesn&#39;t have to be and probably shouldn&#39;t be solely transactional. It should be dialogic, and as such, should always involve deep engagement with the medium along with the message. ChatGPT just makes urgent what might have otherwise been too easy to ignore.&#xA;&#xA;!--more--&#xA;&#xA;I&#39;ve had students do handwritten work, particularly in-class writing, for many years. So I&#39;ve done many variations and experiments in the broad area of accepting handwritten writing from students -- more responsibly I should add, with a lot of explicit thought about accessibility and inequity pitfalls, and with much more structure than simply doing handwritten submission -- and there are huge benefits to incorporating handwritten work as part of the pedagogical toolkit in the digital age. For many students the change of speed in their thought leads to insights. For others the frustration with speed takes them back to their default writing tech with a set of questions and awareness of practice they didn&#39;t have. For many the alternation of media catalyzes some insights. In almost all cases it is jarring enough that productive thought follows. In no cases is it really relevant as a measure of authenticity. &#xA;&#xA;In a way this isn&#39;t surprising. Writers (outside of any academic or pedagogical context) have a wide variety of habits around their writing, often involving handwritten drafting and notes that eventually turn into some combination of software and computing. Some people dictate. Some people draft with typewriters. Most students simply haven&#39;t thought through those choices the way that people who spend much of their time writing have.&#xA;&#xA;Students are just as diverse in their technological preferences. The only constant I&#39;ve seen with students is that most tend not to have thought a lot about what tools they use for writing. They work on a computer because that&#39;s what is given to them or that&#39;s what it feels like they are supposed to use. They use Google Docs (or Word or perhaps now Notion or note software for some) because that&#39;s what everyone uses. The realization that there are other tools out there, from the structured and specialized to the minimalist and &#34;distraction-free&#34;, is a minor revelation for some. Writing by hand is something that they feel they have graduated out of once they leave elementary school. All of these considerations are essentially social and habitual. Indeed, a lot of the comments I saw on Heid&#39;s piece described how people feel they write better on computers or don&#39;t have the patience for handwriting. 
That&#39;s all legit and shouldn&#39;t be ignored (and is why Heid&#39;s proposal is naive as it stands). Heid misses the crucial difference here between using technology as habit, because that&#39;s what the teacher says or because that&#39;s the way things have to be structured so we can assess authenticity, and self-aware use of technology. Thwarting cheating isn&#39;t a pedagogical goal; fostering critical and intentional use of technology can and should be. Moreover, controlling your tools is an essential part of writing. Just as students need to learn how to wield a pencil early in elementary school, they need to learn how to wield computers and what computers allow as a requisite part of navigating the kinds of writing and communication that will fill their world.&#xA;&#xA;Most of the assignments I&#39;ve given students that involve handwriting are in some way comparative, structured around the differences or similarities between writing tools. Writing technology and its consequences should always be up for discussion. The assumption that it isn&#39;t, that our tools are transparent to the act of creation, has been a convenient shortcut in the ritual of assignment submission. We take it as a given that we use such and such a range of tools for writing at a particular time. AI tools are a prompt to swing the rhetorical pendulum back and focus on medium as a conduit to message.&#xA;&#xA;All the hype over ChatGPT masks a very old issue, perhaps one of the oldest (looking at you, Phaedrus). Text generation with large language models is a specialized case of the fundamental question of rhetoric: what difference does it make that we use a particular technology for our words? There&#39;s a continuum and a long (and often studied) history of change, from computers and mobile phones of today back to typewriters, pens, manuscripts, papyrus, and inscription. Beneath the hype, ChatGPT demonstrates that we can supercharge the quill so much that it might seem to do the writing for us, almost like magic. But it&#39;s still a pen, a tool, a technology which does something automatically which otherwise had to be done in a different way. &#xA;&#xA;#chatgpt #handwriting #edtech #minimalistedtech #generativeAI]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/zWTfB5kd.jpg" alt=""/></p>

<p>A <a href="https://www.washingtonpost.com/opinions/2022/12/29/handwritten-essays-defeat-chatgpt/">recent opinion piece in WaPo</a> by journalist <a href="http://www.markhamheid.com/">Markham Heid</a> tackles the ChatGPT teacher freakout by proposing handwritten essays as a way to blunt the inauthenticity threat posed by our emerging AI super-lords. I&#39;ve seen the requisite pushback on this piece around accessibility, but I think the bulk of criticism (at least what I&#39;ve seen) still misses the most important point. If we treat writing assignments as transactional, then tools like ChatGPT (or the emerging assisted writing players, whether SudoWrite or Lex, etc.) may seem like an existential threat. Generative AI may well kill off most transactional writing (not just in education: I suspect boilerplate longform writing will increasingly be a matter of text completion). I have no problem with that. But writing as part of pedagogy doesn&#39;t have to be and probably shouldn&#39;t be solely transactional. It should be dialogic, and as such, should <em>always</em> involve deep engagement with the medium along with the message. ChatGPT just makes urgent what might have otherwise been too easy to ignore.</p>



<p>I&#39;ve had students write by hand, particularly for in-class writing, for many years. I&#39;ve run many variations and experiments with handwritten work — more responsibly, I should add, with explicit thought about accessibility and inequity pitfalls, and with much more structure than simply requiring handwritten submission — and there are huge benefits to incorporating it into the pedagogical toolkit in the digital age. For some students the change in the speed of their thought leads to insights. For others the frustration with that speed sends them back to their default writing tech with a set of questions and an awareness of practice they didn&#39;t have before. For many, simply alternating media catalyzes insight. In almost all cases it is jarring enough that productive thought follows. In no case is it really relevant as a measure of authenticity.</p>

<p>In a way this isn&#39;t surprising. Writers (outside of any academic or pedagogical context) have a wide variety of habits around their writing, often some combination of handwritten drafts and notes that later migrate into software. Some people dictate. Some people draft with typewriters. Most students simply haven&#39;t thought through those choices the way that people who spend much of their time writing have.</p>

<p>Students are just as diverse in their technological preferences. The only constant I&#39;ve seen is that most students haven&#39;t thought much about what tools they use for writing. They work on a computer because that&#39;s what is given to them or what it feels like they are supposed to use. They use Google Docs (or Word, or perhaps now Notion or note-taking software) because that&#39;s what everyone uses. The realization that there are other tools out there, from the structured and specialized to the minimalist and “distraction-free”, is a minor revelation for some. Writing by hand is something they feel they have graduated out of once they leave elementary school. All of these considerations are essentially social and habitual. Indeed, a lot of the comments I saw on Heid&#39;s piece described how people feel they write better on computers or don&#39;t have the patience for handwriting. That&#39;s all legit and shouldn&#39;t be ignored (and is why Heid&#39;s proposal is naive as it stands). But Heid misses the crucial difference between using technology out of habit (because that&#39;s what the teacher says, or because that&#39;s how things have to be structured so we can assess authenticity) and using it self-awarely. Thwarting cheating isn&#39;t a pedagogical goal; fostering critical and intentional use of technology can and should be. Moreover, controlling your tools is an essential part of writing. Just as students need to learn how to wield a pencil early in elementary school, they need to learn how to wield computers, and what computers allow, as a requisite part of navigating the kinds of writing and communication that will fill their world.</p>

<p>Most of the assignments I&#39;ve given students that involve handwriting are in some way comparative, structured around the differences or similarities between writing tools. Writing technology and its consequences should always be up for discussion. The assumption that it isn&#39;t, that our tools are transparent to the act of creation, has been a convenient shortcut in the ritual of assignment submission. We take it as a given that we use such and such range of tools for writing at a particular time. AI tools are a prompt to swing the rhetorical pendulum back and focus on medium as a conduit to message.</p>

<p>All the hype over ChatGPT masks a very old issue, perhaps one of the oldest (looking at you, <em>Phaedrus</em>). Text generation with large language models is a specialized case of the fundamental question of rhetoric: what difference does it make that we use a particular technology for our words? There&#39;s a continuum and a long (and often studied) history of change, from the computers and mobile phones of today back to typewriters, pens, manuscripts, papyrus, and inscription. Beneath the hype, ChatGPT demonstrates that we can supercharge the quill so much that it might seem to do the writing for us, almost like magic. But it&#39;s still a pen, a tool, a technology that does automatically what otherwise had to be done some other way.</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:handwriting" class="hashtag"><span>#</span><span class="p-category">handwriting</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:generativeAI" class="hashtag"><span>#</span><span class="p-category">generativeAI</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/pedagogy-and-handwritten-assignments</guid>
      <pubDate>Wed, 04 Jan 2023 17:03:36 +0000</pubDate>
    </item>
    <item>
      <title>New Year, New &#34;AI&#34;</title>
      <link>https://minimalistedtech.org/new-year-new-ai?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;New Year, New &#34;AI&#34;&#xA;&#xA;My new year&#39;s resolution: more writing. Because otherwise the bots win. Or, rather, otherwise the bots won&#39;t have enough fodder to generate ways for students to cheat? Not sure, but I think I need to practice writing like a human.&#xA;&#xA;Apparently there&#39;s been a lot happening on the AI front that kind of got people talking these past few months. In predictable fashion, some teachers are stoked, others are freaked out, and most aren&#39;t quite sure what to do about OpenAI&#39;s big reveal that a massive language model can be coaxed to write a passably decent essay with little effort or significant know-how. &#xA;&#xA;!--more--&#xA;&#xA;I&#39;ve spent most of the past couple of years working in that language AI space, including using gpt and the like. (Not coincidently, I haven&#39;t written much here in that time.) My everyday paradoxical persona is that of constant code-switcher between tech-iest tech practitioner and analog/minimalist tech evangelist. (I suppose that makes my take on these things something like Prof. Moody&#39;s perspective on life in general: constant vigilence.)&#xA;&#xA;Most of the noise around ChatGPT in education starts from the wrong set of assumptions. The debate seems to be around whether or not to use, how to use, how to detect, whether or not this is a good thing or not. &#xA;&#xA;Wrong focus. Assume that readily-available AI can produce coherent text on demand on any subject and that that text will be indistinguishable from a real text that a student or any other person might hand in as part of traditional writing exercises. Start from that assumption. Whether or not ChatGPT or the next models from Meta or Google or Anthropic or any number of other players in this space can do this today, the chances are high that it will be a very short time before the production of coherent text will be a trivial task and widely accessible. &#xA;&#xA;Assume as well that absent any significant legislation and despite the best and most noble attempts of OpenAI or other entities in this space around responsible AI, that some version of generative AI without guardrails on usage will be available. Maybe it won&#39;t be as cutting edge as the others, but it will work, primarily because the cost of training these models will come down and the path to training ones own models with yesterday&#39;s transformers will be laid out clearly enough for secondary players. &#xA;&#xA;The most important question isn&#39;t what educators as individuals and education as an industry should do with today&#39;s technology. The important question is what to do now to plan for tomorrow&#39;s technology. &#xA;&#xA;Today&#39;s technology requires staunching a wound if you have assignments of the sort that are ripe targets for ChatGPT-ification. That&#39;s the reactive mode of security, trying to contain the damage while you buy time to implement more robust solutions. And whatever solutions are put in place in the next few years will quickly be rendered obsolete if in fact we focus only on the current capabilities of these tools. &#xA;&#xA;The more important focus is, as any security professional will tell you, on the longer term, on anticipating threats and heading them off as much as possible. &#xA;&#xA;It happens that starting from an assumption like the one I laid out, where this technology can do everything you might think it could do, and do it well, is also a good way to return to a functional focus. 
What is the point of an assignment, of a class, of a curriculum? ChatGPT changes nothing about the daily urgency of those fundamental pedagogical questions. It just reshapes the playing field and levels up the kit. &#xA;&#xA;For those who are worried about cheating with ChatGPT or in the weeds on what to do about this potential assignment buster, the first step is the simplest. Forget the technology of today. Return to the fundamental question of what the point of any of this is. And then assume that the technology can do everything you might think it can do and more. Plan your path from there, not defensively or in the weeds of the arresting newness of the tools, but purposefully in a landscape visible in a different light today than it was yesterday.&#xA;&#xA; Technically of course these are large language models and the term &#34;AI&#34; is a bit generous... I have a particular pet peeve when it comes to the ever-expanding use of the term AI in popular parlance and this labeling of language models as AI falls somewhat adjacent.]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/7ga050oP.jpg" alt=""/></p>

<h1 id="new-year-new-ai" id="new-year-new-ai">New Year, New “AI”</h1>

<p>My New Year&#39;s resolution: more writing. Because otherwise the bots win. Or, rather, otherwise the bots won&#39;t have enough fodder to generate ways for students to cheat? Not sure, but I think I need to practice writing like a human.</p>

<p>Apparently there&#39;s been a lot happening on the AI<sup>1</sup> front that kind of got people talking these past few months. In predictable fashion, some teachers are stoked, others are freaked out, and most aren&#39;t quite sure what to do about OpenAI&#39;s big reveal that a massive language model can be coaxed to write a passably decent essay with little effort or significant know-how.</p>



<p>I&#39;ve spent most of the past couple of years working in the language AI space, including using GPT and the like. (Not coincidentally, I haven&#39;t written much here in that time.) My everyday paradoxical persona is that of constant code-switcher between techiest tech practitioner and analog/minimalist tech evangelist. (I suppose that makes my take on these things something like Prof. Moody&#39;s perspective on life in general: constant vigilance.)</p>

<p>Most of the noise around ChatGPT in education starts from the wrong set of assumptions. The debate seems to be around whether or not to use it, how to use it, how to detect it, and whether this is a good thing at all.</p>

<p>Wrong focus. Assume that readily-available AI can produce coherent text on demand on any subject, and that that text will be indistinguishable from what a student or any other person might hand in as part of traditional writing exercises. Start from that assumption. Whether or not ChatGPT or the next models from Meta or Google or Anthropic or any number of other players in this space can do this today, the chances are high that it will be a very short time before producing coherent text is a trivial task and widely accessible.</p>

<p>Assume as well that, absent any significant legislation and despite the best and most noble attempts of OpenAI or other entities in this space around responsible AI, some version of generative AI without guardrails on usage will be available. Maybe it won&#39;t be as cutting edge as the others, but it will work, primarily because the cost of training these models will come down and the path to training one&#39;s own models with yesterday&#39;s transformers will be laid out clearly enough for secondary players.</p>

<p>The most important question isn&#39;t what educators as individuals and education as an industry should do with today&#39;s technology. The important question is what to do now to plan for tomorrow&#39;s technology.</p>

<p>Today&#39;s technology requires staunching a wound if you have assignments of the sort that are ripe targets for ChatGPT-ification. That&#39;s the reactive mode of security, trying to contain the damage while you buy time to implement more robust solutions. And whatever solutions are put in place in the next few years will quickly be rendered obsolete if in fact we focus only on the current capabilities of these tools.</p>

<p>The more important focus is, as any security professional will tell you, on the longer term, on anticipating threats and heading them off as much as possible.</p>

<p>It happens that starting from an assumption like the one I laid out, where this technology can do everything you might think it can do, and do it well, is also a good way to return to a functional focus. What is the point of an assignment, of a class, of a curriculum? ChatGPT changes nothing about the daily urgency of those fundamental pedagogical questions. It just reshapes the playing field and levels up the kit.</p>

<p>For those who are worried about cheating with ChatGPT or in the weeds on what to do about this potential assignment buster, the first step is the simplest. Forget the technology of today. Return to the fundamental question of what the point of any of this is. And then assume that the technology can do everything you might think it can do and more. Plan your path from there, not defensively or in the weeds of the arresting newness of the tools, but purposefully in a landscape visible in a different light today than it was yesterday.</p>

<hr/>

<p><sup>1</sup> Technically of course these are large language models and the term “AI” is a bit generous... I have a <a href="https://minimalistedtech.com/edtech-rant-of-the-day-ai-that-isnt-really-ai">particular pet peeve</a> when it comes to the ever-expanding use of the term AI in popular parlance and this labeling of language models as AI falls somewhat adjacent.</p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/new-year-new-ai</guid>
      <pubDate>Tue, 03 Jan 2023 02:40:10 +0000</pubDate>
    </item>
    <item>
      <title>I am not our users</title>
      <link>https://minimalistedtech.org/i-am-not-our-users?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;Recently I was leading a meeting with a group of very young designers presenting a low-fi version of an idea for part of our product. It was gamified. It had delightful animations and heavy lift technological fixes for the problem at hand.  It was a version of an app and interactions that one sees over and over. Make it competitive, make students award each other likes or fires or hot streaks (or whatever you want to call it), and that will overcome the problem (perceived problem) of no one actually wanting to do that whole learning thing.&#xA;&#xA;I hated it. I found it abhorrent.&#xA;!--more--&#xA;&#xA;It struck me as everything that was misguided about technological approaches to education, turning what should, in a learning experience, be a welcoming and open space for learning into a competitive reward system based on junk metrics of who participates the most. I immediately knew why I reacted this way. Besides the fact that I&#39;m old and cranky and have seen this too many times before, it felt antithetical to my values as a teacher. Competition has its place, but this was just a system for imposing judgement and extracting coerced &#34;engagement&#34;. &#xA;&#xA;What confused me was whether this was something that the designers and the users they had researched this with actually wanted or, at the least, thought they wanted. So that&#39;s what I asked, about the user research and then directly of the designers as members of that demographic. As part of the target users for this sort of experience, do they really want to be measured in this way? The answer was a little surprising. First they said that both they and the people they talked to seemed to say that they could just ignore the features I found objectionable. That is, they just wouldn&#39;t take the competitive part that seriously if they didn&#39;t want to. That struck me as self-defeating for pitching a design idea, but so be it. On the other hand, they just took it for granted that the only way to get &#34;engagement&#34; was punitive. To put the charitable spin on it, I am a geezer who gets turned off by the way that apps and social platforms are constantly compelling judgement. But under 30s live in that world of constant peer judgement, both as young people and as gen-z, so it&#39;s not a big deal to them that they get marked up, for better or worse, by their peers. I&#39;m willing to concede that they&#39;re used to a social media environment which I find foreign and overwhelming. I&#39;m an odd duck in my own peer group in that respect. But -- and this was the thrust of my objection and criticism -- why should we create that environment? Why should we perpetuate it? Can&#39;t we do better?&#xA;&#xA;There is a constant danger, both in education and in the technological apparatus of learning, that we perpetuate the biases and damaging expectations of our own training. I&#39;ve seen teachers starting out who were doing more or less what they saw their own teachers do. And it has been bad, not because of the teacher just starting out, but because they inherited as normal and acceptable practices from a less than stellar model. It feels like this is what I was seeing in those designs, a form of echoing back, with minor modifications, what these young designers had been taught to accept as an educational app. 
This is what educational platforms look like to them, full of cheap interactions that delight and drive up meaningless metrics for engagement straight from the social media playbook of time on platform, number of clicks, and volume of response. &#xA;&#xA;But what about deep thought? What about meaningful interactions? What about the time between a thought and the click of learning? We could optimize for that. We could make our metrics about that. Engagement is itself a proxy metric that purports to be about learning but is, was, and always will be a hack. The assumption -- the article of faith -- is that clicks, views, or time on platform bear some linear-like relationship to learning. But let&#39;s step back. That&#39;s one particular scenario where learning may happen. It&#39;s the type of learning that can happen with maximum visibility. But it&#39;s far from the norm and maybe not even the most efficient mechanism. Some learning might happen by rote. Some by interaction. And some -- a lot, I think -- happens in the time in between. The effects and indicators of that kind of deep learning aren&#39;t clicks or steady eyeballs or -- god help us -- staring at a Zoom screen. They might be things like sharing what you&#39;ve learned. Or taking what you&#39;ve learned to another domain. Or improving your speed at applying what you&#39;ve learned. &#xA;&#xA;We could optimize for meaningful, deep learning in educational technologies. But we must choose to do so, and we must carefully choose the goals we set as indicators of that learning. &#xA;&#xA;If we aren&#39;t intentional about that, then we end up with designs that double down on the status quo, not because it is efficacious or valuable, but because it is the pattern of accepted behaviors. After all, as these young designers told me, they were used to the idea of others commenting on them. They saw it as normal and OK. So of course they would deliver something that played to the patterns of current edtech, something that comfortably fit in, that was in line with what everyone else was doing. &#xA;&#xA;There&#39;s a generational divide there. It&#39;s been more than a few years since I was a student. I grew up in the generation that is at home with technology but remembers the time before it was ubiquitous in personal life. I was struck that they drew comfort from knowing where they stood in relation to others. That seemed profoundly depressing to me, but also perhaps an indicator of what I might naively hope is the wisdom of age, as people tend to shed those vanities as they get older. So it may be that the fault is mine, that I&#39;m not able to inhabit the minds of our users. For them, judgement matters. They expect it and may even crave it. &#xA;&#xA;But the teacher in me interjects at this point. Young people always think they know what they want. And sometimes they are wrong. We don&#39;t have to build a system of constant judgement and performance. We can build something different. &#xA;&#xA;#minimalistedtech #teaching #edtech&#xA;-----------&#xA;note: Despite language of &#34;geezer&#34; and &#34;old&#34; above, I am in fact only of moderately non-young years. Long exposure to college students of unchanging age has, perhaps, made the perception of age difference hit home harder than it might otherwise. ]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/pgf7IjR2.jpg" alt=""/></p>

<p>Recently I led a meeting in which a group of very young designers presented a low-fi version of an idea for part of our product. It was gamified. It had delightful animations and heavy-lift technological fixes for the problem at hand. It was a version of an app and interactions that one sees over and over. Make it competitive, make students award each other likes or fires or hot streaks (or whatever you want to call it), and that will overcome the (perceived) problem of no one actually wanting to do that whole learning thing.</p>

<p>I hated it. I found it abhorrent.</p>

<p>It struck me as everything that was misguided about technological approaches to education, turning what should be a welcoming and open space for learning into a competitive reward system based on junk metrics of who participates the most. I immediately knew why I reacted this way. Besides the fact that I&#39;m old and cranky and have seen this too many times before, it felt antithetical to my values as a teacher. Competition has its place, but this was just a system for imposing judgement and extracting coerced “engagement”.</p>

<p>What confused me was whether this was something that the designers, and the users they had researched it with, actually wanted or, at the least, thought they wanted. So that&#39;s what I asked, about the user research and then directly of the designers as members of that demographic. As part of the target users for this sort of experience, did they really want to be measured in this way? The answer was a little surprising. First, both they and the people they had talked to seemed to think they could just ignore the features I found objectionable. That is, they just wouldn&#39;t take the competitive part that seriously if they didn&#39;t want to. That struck me as self-defeating for pitching a design idea, but so be it. On the other hand, they took it for granted that the only way to get “engagement” was punitive. To put a charitable spin on it, I am a geezer who gets turned off by the way that apps and social platforms constantly compel judgement. But under-30s live in that world of constant peer judgement, both as young people and as Gen Z, so it&#39;s not a big deal to them that they get marked up, for better or worse, by their peers. I&#39;m willing to concede that they&#39;re used to a social media environment which I find foreign and overwhelming. I&#39;m an odd duck in my own peer group in that respect. But — and this was the thrust of my objection and criticism — why should we create that environment? Why should we perpetuate it? Can&#39;t we do better?</p>

<p>There is a constant danger, both in education and in the technological apparatus of learning, that we perpetuate the biases and damaging expectations of our own training. I&#39;ve seen teachers starting out who were doing more or less what they saw their own teachers do. And it has been bad, not because of the teacher just starting out, but because they inherited, as normal and acceptable, practices from a less-than-stellar model. It feels like this is what I was seeing in those designs, a form of echoing back, with minor modifications, what these young designers had been taught to accept as an educational app. This is what educational platforms look like to them, full of cheap interactions that delight and drive up meaningless metrics for engagement straight from the social media playbook of time on platform, number of clicks, and volume of response.</p>

<p>But what about deep thought? What about <em>meaningful</em> interactions? What about the time between a thought and the click of learning? We <strong>could</strong> optimize for that. We <strong>could</strong> make our metrics about that. Engagement is itself a proxy metric that purports to be about learning but is, was, and always will be a hack. The assumption — the article of faith — is that clicks, views, or time on platform bear some linear-like relationship to learning. But let&#39;s step back. That&#39;s one particular scenario where learning may happen. It&#39;s the type of learning that can happen with maximum visibility. But it&#39;s far from the norm and maybe not even the most efficient mechanism. Some learning might happen by rote. Some by interaction. And some — a lot, I think — happens in the time in between. The effects and indicators of that kind of deep learning aren&#39;t clicks or steady eyeballs or — god help us — staring at a Zoom screen. They might be things like sharing what you&#39;ve learned. Or taking what you&#39;ve learned to another domain. Or improving your speed at applying what you&#39;ve learned.</p>

<p>We <strong>could</strong> optimize for meaningful, deep learning in educational technologies. But we must choose to do so, and we must carefully choose the goals we set as indicators of that learning.</p>

<p>If we aren&#39;t intentional about that, then we end up with designs that double down on the status quo, not because it is efficacious or valuable, but because it is the pattern of accepted behaviors. After all, as these young designers told me, they were used to the idea of others commenting on them. They saw it as normal and OK. So of course they would deliver something that played to the patterns of current edtech, something that comfortably fit in, that was in line with what everyone else was doing.</p>

<p>There&#39;s a generational divide there. It&#39;s been more than a few years since I was a student. I grew up in the generation that is at home with technology but remembers the time before it was ubiquitous in personal life. I was struck that they drew comfort from knowing where they stood in relation to others. That seemed profoundly depressing to me, but also perhaps an indicator of what I might naively hope is the wisdom of age, as people tend to shed those vanities as they get older. So it may be that the fault is mine, that I&#39;m not able to inhabit the minds of our users. For them, judgement matters. They expect it and may even crave it.</p>

<p>But the teacher in me interjects at this point. Young people always think they know what they want. And sometimes they are wrong. We don&#39;t have to build a system of constant judgement and performance. We can build something different.</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>

<hr/>

<p>note: Despite language of “geezer” and “old” above, I am in fact only of moderately non-young years. Long exposure to college students of unchanging age has, perhaps, made the perception of age difference hit home harder than it might otherwise.</p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/i-am-not-our-users</guid>
      <pubDate>Sun, 20 Mar 2022 14:33:29 +0000</pubDate>
    </item>
  </channel>
</rss>