<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>education — Minimalist EdTech</title>
    <link>https://minimalistedtech.org/tag:education</link>
    <description>Less is more in technology and in education</description>
    <pubDate>Thu, 16 Apr 2026 11:11:55 +0000</pubDate>
    <image>
      <url>https://i.snap.as/qrAhYX2v.jpg</url>
      <title>education — Minimalist EdTech</title>
      <link>https://minimalistedtech.org/tag:education</link>
    </image>
    <item>
      <title>Mistaken Oracles in the Future of AI</title>
      <link>https://minimalistedtech.org/mistaken-oracles-in-the-future-of-ai?pk_campaign=rss-feed</link>
      <description>It&#39;s popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely &#34;oracular AI&#34;. Tools like ChatGPT are one manifestation of this, a form of question-and-answer system that can return answers that will soon seem superhuman in terms of breadth of content and flexibility of style. I suspect most educators don&#39;t think much about this framework of AI as oracle, but we should, because it both explains a lot about the current hype cycle around large language models and can help us gain critical footing on where to go next.</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/5lUNSFVp.jpg" alt=""/></p>

<p>It&#39;s popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely <a href="https://www.lesswrong.com/tag/oracle-ai">“oracular AI”</a>. Tools like ChatGPT are one manifestation of this, a form of question-and-answer system that can return answers that will soon seem superhuman in terms of breadth of content and flexibility of style. I suspect most educators don&#39;t think much about this framework of AI as oracle, but we should, because it both explains a lot about the current hype cycle around large language models and can help us gain critical footing on where to go next.</p>



<p>From the LessWrong page linked above, here&#39;s how they describe oracular AI (for their overall perspective, definitely take in the full set of ideas there):</p>

<blockquote><p>An <strong>Oracle AI</strong> is a regularly proposed solution to the problem of developing <a href="https://wiki.lesswrong.com/wiki/Friendly_AI">Friendly AI</a>. It is conceptualized as a super-intelligent system which is designed for only answering questions, and has no ability to act in the world. The name was first suggested by <a href="https://www.lesswrong.com/tag/nick-bostrom">Nick Bostrom</a>.</p></blockquote>

<p>Oracular here is a de-historicized ideal of the surface function of an oracle, made into an engineering system where the oracle just answers questions based on superhuman sources or means but “has no ability to act in the world.” The contrast is with our Skynet future (choose your own AI-gone-wild movie example), where AI has a will and, once connected to the means, will most certainly wipe out all of humanity, whether for its own ends or as the only logical way to complete its preprogrammed (and originally innocuous, in most clichés) goals.</p>

<p>Two things to note here:</p>

<ol>
<li>This is an incredibly narrow view of what makes AI ethical, focusing especially on the output, with little attention to the path to get there. I note in passing that much criticism of current AI is less about the outputs and more about the modes of exploitation of human capital and labor that go into producing said outputs.</li>
<li>This is a completely backwards view of oracles.</li>
</ol>

<p>The second point matters to me more, primarily because it&#39;s a recurring pattern in technological discussions. The term “oracle” has here been reduced to a transactional function in a way that flattens its meaning to the point that it evokes the opposite of the historical reality. It&#39;s not just marketing pablum, but a selective memory with significant consequences, a metaphor to frame the future. Metaphors like this construct an imaginary world from the scaffolding of the original domain. When we impoverish or selectively depict that original domain, when we distort it, we delude ourselves. It is not just a pedantic mistake but a flaw of thinking that makes more acceptable a view that we should treat with a bit more circumspection. What&#39;s more, the cues to suspicion are right there in front of us. The fullness of the idea matters, because we can see that the view of oracular AI as a friendly AI is a gross distortion, one that almost comically ignores the wisdom that could be gained by considering the complex reality that is (and was) oracular practice.</p>

<p>(Since the term “oracle” generally looks back to ancient practices, for those who want some scholarly grounding, check out Sarah Iles Johnston, <em>Ancient Greek Divination</em>; Michael Flower, <em>The Seer in Ancient Greece</em>; Nissinen&#39;s <em>Ancient Prophecy</em>; and so on. For other eras, and with electronic resources access, see e.g. Oxford Bibliographies on <a href="https://www.oxfordbibliographies.com/display/document/obo-9780195399301/obo-9780195399301-0501.xml">prophecy in the Renaissance</a>.)</p>

<p>Long story made very short, oracles are not friendly question-and-answer machines. They are, in all periods and cultures, highly biased players in religio-political gamesmanship. In the case of perhaps the most famous, the Pythian oracle in ancient Greece, the answers were notoriously difficult to interpret correctly (though the evidence for literary representations of riddling vs. actual delivery of riddling messages is more complicated). Predicting the future is a tricky business, and oracular institutions and individuals were by no means disinterested players. They looked after themselves and their own interests. They often maintained a veneer of neutrality in order to prosper.</p>

<p>That is all to say that oracularism is in fact a <em>great</em> metaphor for current and near-future AI, but only if we historicize the term fully. I expect current AI to work very much like oracles, in all their messiness. They will be biased, subtly so in some cases. They will be sources with unclear methods, trusted and yet suspect at the same time. And they will depend above all on humans to make meaning from nonsense.</p>

<p>This last point, that the answers spouted by oracles might be as nonsensical as they are sensical, is vital. We lose track of it amidst the current noise around whether generative AI produces things that are correct or incorrect, copied or original, creative or stochastic boilerplate. The more important point is that humans will fill in the gaps and make sense of whatever they are given. We are the ones turning nonsense into sense, seeing meaning in a string of token probabilities, wanting to take as true something that might be a grand edifice of bullshittery. That hasn&#39;t changed since the answer-givers were Pythian priestesses.</p>

<p>Oracular AI is a great metaphor. But it doesn&#39;t say what its proponents think it says. We humans are the ones who get to decide whether it is meaningful or meaningless.</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:ai" class="hashtag"><span>#</span><span class="p-category">ai</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:aiineducation" class="hashtag"><span>#</span><span class="p-category">aiineducation</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/mistaken-oracles-in-the-future-of-ai</guid>
      <pubDate>Wed, 18 Jan 2023 17:42:43 +0000</pubDate>
    </item>
    <item>
      <title>Finding Value in the Impending Tsunami of Generated Content</title>
      <link>https://minimalistedtech.org/finding-value-in-the-impending-tsunami-of-generated-content?pk_campaign=rss-feed</link>
      <description>The generative &#34;AI&#34; hype cycle has been at its peak for the past month or so, and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises); and most of the obvious consequences and outcomes get overlooked.</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/FKMg3Rsd.jpg" alt="The garbage pile of generative &#34;AI&#34;"/></p>

<p>The generative “AI” hype cycle has been at its peak for the past month or so, and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises); and most of the obvious consequences and outcomes get overlooked.</p>



<p>One such obvious consequence is that there are tidal waves of bullshittery about to hit our shores. (This first wave is a minor high tide compared to what is coming....) Reconstituted text, images, video, audio, avatars and fake people are pretty much guaranteed across a wide variety of areas, a landscape where education is only one small province. We won&#39;t be able to tell real from fake or, perhaps more troubling, I don&#39;t think we&#39;ll care so long as it scratches the right itch or feeds the right need.</p>

<p>The question across those domains will be whether we value authenticity. For things like boilerplate email, sales copy, code, and a wealth of other activities, I think the answer will be that authenticity doesn&#39;t matter that much. But that&#39;s where education is different. Authenticity should matter, not because of the habitual exercise of needing to assign grades to work that was not plagiarized or copied or whatever other vice one can ascribe, but because without authenticity there is no learning. Faking it is great for getting to the ends. But education is about the means; ends (tests, essays, etc.) have always been imperfect proxies. Beyond the authenticity of student work, we have a very familiar issue of how students, or learners generally, know what kinds of information to trust. While the bulk of attention thus far has been on the nature of the emerging generative “AI” toolkit and the back and forth between fearing cheating vs. fostering creativity with such tools, the real impact will be felt indirectly, in the proliferation of “knowledge” generated by and mediated through generative AI tools. It is the old Wikipedia debate, but supercharged with hitherto unthought-of levels of efficacious bullshittery.</p>

<p>Ten years ago, amid the proliferation of data, there was a clarion call that academic knowledge fields needed more curation. For example, <a href="http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html">http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html</a> is one of many such calls for increased digital curation of data. The variety of startups applying generative “AI” to learning or, more broadly, to varieties of search and summarization, tend to promote the message that curation is not necessary. (Just google “sequoia generative ai market map” or similar; <a href="https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/">https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/</a>.) Or, rather, the question of curation has perhaps not entered into thought. Automagically, search or summarization or chatbots using generative AI will latch on to the most relevant things for your individual query. Consumerism is a given, such that the only question is how the system can serve up results to a consuming user. LLMs have thus far been gaining ground through hoovering up ever more data. That makes them garbage collectors, even with careful controls to make sure that bias is minimized and good data is optimized. Optimistically, one might imagine that these technologies could allow for curation to happen at a different stage: at the building of the model, or in fine-tuning the model for particular use cases. Or the context provided by the consumer is a sort of after-the-fact filter on the massive amounts of knowledge. But that is a very light veneer of the kind of knowledge curation that separates the wheat from the chaff, that ensures that what&#39;s being served up isn&#39;t utter bullshit that sounds close enough.</p>
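
<p>To make the “different stage” idea concrete, here is a minimal sketch of curation at the building of the model: a filter pass over raw documents before any training or fine-tuning. Everything here (the <code>Document</code> fields, the <code>is_curated</code> rule) is a hypothetical illustration of the wheat-from-chaff step, not any vendor&#39;s actual pipeline.</p>

<pre><code># Python: toy curation pass over raw documents before fine-tuning.
# All names and rules here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str            # where the text came from
    expert_reviewed: bool  # has a domain expert vetted it?

def is_curated(doc):
    """Keep only documents with a known source and expert review."""
    return doc.expert_reviewed and doc.source != "unknown"

def curate(corpus):
    """The wheat-from-chaff step, applied before the model ever trains."""
    return [doc for doc in corpus if is_curated(doc)]

corpus = [
    Document("Peer-reviewed survey of digital curation.", "dhq", True),
    Document("Scraped forum post, unverified claims.", "unknown", False),
]
print(len(curate(corpus)))  # 1 -- only the vetted document survives
</code></pre>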

<p>There are two levels of authenticity, then, to keep an eye on. The surface one is with students themselves and the process of learning. Are the people being authentic? Then there&#39;s the second, at the level of knowledge curation. Is that curation authentic and legit? I suspect on both scores it will require direct and focused effort to foster both amidst the readily available misinformation. For LLMs in particular, we are looking now at an exacerbated version of Wikipedia bias. If something is statistically weighted as more likely but expertly verified to be wrong or misleading, how do those concerns get balanced? It is not merely that generative “AI” can produce different outcomes given the same inputs; it&#39;s that there is not necessarily a clear line as to why those two different ideas are held in mind at the same time.</p>

<p>Undoubtedly, such issues will be smoothed over and it will all be more nuanced as these technologies develop and are deployed. The early days of autocomplete were rife with inaccuracies, bias, and garbage. And now we treat it like any other tool. Some may ignore it, but most simply use it when convenient and don&#39;t think twice about the biases or thought patterns it subtly instills. Generative “AI” will be no different. It will soon become another layer of bullshit which is sometimes useful, often ignored, and just one more thing to take account of when negotiating the authenticity of learners and the reliability of knowledge.</p>

<p>This is all to say that the tool hasn&#39;t changed the essential question. Do we actually value authenticity in the learning process? Do we care about not just the verifiability of knowledge through citation (which, incidentally, Google seems to be focusing on in their response to OpenAI, among others) but about that thing formerly known as “truth”, at least as an asymptotic goal if not reality?</p>

<p>It&#39;s going to be messy. Truth-y enough will be good enough for many. And many structures in education are already transactional to such an extent that authenticity is a pesky anti-pattern, a minor detail to be managed rather than a central feature of the learning experience.</p>

<p>In more optimistic moments I wonder whether the value of generative “AI” can lie not in its products but in the opportunity it creates to further dialogue. If we keep our focus on fostering authenticity in students and authenticity in knowledge, then it can be a useful tool for first drafts of knowledge. If we let it become the final word, then I fear we will simply be awash in a smooth-talking version of the internet&#39;s detritus.</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:generativeai" class="hashtag"><span>#</span><span class="p-category">generativeai</span></a> <a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/finding-value-in-the-impending-tsunami-of-generated-content</guid>
      <pubDate>Sun, 15 Jan 2023 19:02:04 +0000</pubDate>
    </item>
    <item>
      <title>Humans in the Loop and Agency</title>
      <link>https://minimalistedtech.org/humans-in-the-loop-and-agency?pk_campaign=rss-feed</link>
      <description>Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that&#39;s the least problematic use of generative &#34;AI&#34; and large language models in the short term. Most consumer-facing applications showing off large language models right now are variations of a human in the loop system, and a key question for any human in the loop system is that of agency. Who&#39;s the architect and who is the cog?</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/ByUkC3Nt.png" alt="human in the loop, made with DALL-E"/></p>

<p>Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that&#39;s the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations of a human in the loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)</p>

<p>A key question for any human in the loop system is that of agency. Who&#39;s the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I&#39;m not sure that&#39;s the case. And I&#39;m not sure it&#39;s always easy to tell the difference.</p>



<p>One obvious reason this is not the case with ChatGPT specifically is that OpenAI&#39;s interest in making ChatGPT available is very different from public perception and adoption. To the public, it&#39;s a viral event, a display of the promise and/or peril of recent NLP revolutions. But OpenAI is fairly clear in their fine print that they are making this publicly available in order to refine the model, test for vulnerabilities, gather validated training data, and, I would imagine, also get a sense for potential markets. It is no different from any other big tech service insofar as the human in the loop is a commodity more than an agent. We are perhaps complacent about this relationship to our technology, the fact that our ruts of use and trails of data provide value back to the companies making those tools, but it is particularly important in thinking through educational value. ChatGPT is a slick implementation of developing language models, and everything people pump into it is crowdsourced panning for gold delivered into the waiting data vaults of OpenAI.
(For a harsher critique of the Effective Altruism ideology that may be part of OpenAI&#39;s corporate DNA, see <a href="https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/">https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/</a>)</p>

<p>Set that all aside for a moment. If we take the core human in the loop interaction of prompting the language model and receiving a probabilistic path through the high-dimensional mix of weights, a path which looks to human eyes like coherent sentences and ideas, where exactly is the agency? We supply a beginning, from which subsequent probabilities can be calculated. Though that feels like control, I wonder how long before the machine dictates our behavior. As is the case with email or text or phones, how long before we have changed our way of thinking in order to think in terms of prompts?</p>
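
<p>A minimal sketch of that core interaction, with a hand-written bigram table standing in for a real model&#39;s billions of weights (every word and probability below is invented for illustration): the human supplies the opening token, and everything after is repeated sampling from conditional probabilities.</p>

<pre><code># Python: toy next-token sampler. The "model" is a tiny bigram table;
# a real LLM computes these conditional probabilities from its weights.
import random

BIGRAMS = {
    "the": {"oracle": 0.6, "student": 0.4},
    "oracle": {"speaks": 0.7, "answers": 0.3},
    "speaks": {"truth": 0.5, "nonsense": 0.5},
    "student": {"listens": 1.0},
}

def next_token(token):
    """Sample the next word from the conditional distribution, if any."""
    dist = BIGRAMS.get(token)
    if not dist:
        return None  # no known continuation; generation stops
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_tokens=5):
    """We supply the beginning; the rest is just probability."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = next_token(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the oracle speaks nonsense"
</code></pre>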

<p>For example, one particularly effective method with ChatGPT in its current incarnation is to start by giving it a scenario or role (e.g. “You are a psychologist and the following is a conversation with a patient”) and then feed in a fair amount of content followed by a question. (I gave a more elaborate example of this scenario setting earlier <a href="https://minimalistedtech.com/pretending-to-teach">here</a>.) That context setting allows the model to home in on more appropriate paths, matching style and content more closely to our human expectations. I expect working with these tools over time will nudge people into patterns of expression that are subtly both natural language and stylized in ways that work most effectively for querying machines. As was the case for search, the habit of looking things up reinforces particular ways of thinking through key words — of assuming that everything is keywordable — and ways of asking questions.</p>
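
<p>As a sketch of that role-then-content-then-question pattern (the prompt text is illustrative, and <code>query_model</code> is a hypothetical stand-in for whatever API or interface actually sends the prompt):</p>

<pre><code># Python: assembling a role + context + question prompt.
# query_model is hypothetical; substitute your actual model call.

def build_prompt(role, context, question):
    """Scenario first, then the content, then the actual question."""
    return f"{role}\n\n{context}\n\nQuestion: {question}"

prompt = build_prompt(
    role="You are a psychologist and the following is a conversation with a patient.",
    context="Patient: I keep putting off my coursework until the night before...",
    question="What would you say next?",
)
# response = query_model(prompt)  # hypothetical call to the model
print(prompt)
</code></pre>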

<p>Most of the conversation around generative tools has been about control more than agency. As a set of tools whose functioning is, to a certain extent, still unauditable and whose creation relies on datasets so massive as to stymie most people&#39;s existing level of data literacy, generative AI is a black box for the majority of users. So teachers worry: how do we control this? Who controls this? How do we know what is happening? That is perhaps no different than with most high-tech devices or software. For education, however, the stakes are different.</p>

<p><strong>Learning requires that students gain a sense of agency in the world.</strong> Effective learning builds off of growing agency, the ability to exercise one&#39;s will and see the results. That is, in one sense, the journey of education: gradually gaining some purchase on ideas, language, concepts, tools, and one&#39;s environment. That growth requires a clear sense of who is in control and works best amidst intellectual and emotional security, but there&#39;s more to it. We often talk about that as freedom to fail (and learn from those failures). Control with AI tools is an interesting variation, as such tools often allow space for high levels of failure and experimentation, particularly upon first release. ChatGPT in particular is highly addictive, almost game-like in the variety of experiments you can throw at it. But with whom does the agency lie? Is feeding the machine actual agency?</p>

<p>Hence my concern. <em>Human in the loop systems can provide a false sense of agency.</em> Most prominently perhaps, systems like Mechanical Turk are production-level human in the loop systems which can turn interaction into the hand motions of agency without substantive choice or will. But those particular kinds of tools aren&#39;t meant for human learning. They are purely transactional, labor for pay. AI-driven education, on the other hand, labeled with seemingly human-centric terms like “personalized learning”, will be human in the loop systems. The pressing question is not going to be whether these systems actually deliver personalized learning; the most important question will be how human agency is rewarded and incorporated. Will students be cogs or creators? And will it be obvious to students where they stand in the loop?</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:ai" class="hashtag"><span>#</span><span class="p-category">ai</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/humans-in-the-loop-and-agency</guid>
      <pubDate>Sun, 15 Jan 2023 06:14:05 +0000</pubDate>
    </item>
    <item>
      <title>De-cluttered Pedagogy and Embodied Energy</title>
      <link>https://minimalistedtech.org/de-cluttered-pedagogy-and-embodied-energy?pk_campaign=rss-feed</link>
      <description>A BBC piece about the historical origins of the de-cluttered home got me thinking about how much minimalism is cyclical in areas outside its home base of architecture and design, and about what the concept of embodied energy can tell us about pedagogy: how much energy are we expending on tools relative to pedagogical value?</description>
      <content:encoded><![CDATA[<h1 id="de-cluttered-pedagogy-and-embodied-energy" id="de-cluttered-pedagogy-and-embodied-energy">De-cluttered Pedagogy and Embodied Energy</h1>

<p><img src="https://i.snap.as/qaM3ZKwH.jpg" alt="Not minimalist"/></p>

<p>This BBC piece about the origins of the de-cluttered household caught my eye: <a href="https://www.bbc.com/culture/article/20230103-the-historical-origins-of-the-de-cluttered-home">https://www.bbc.com/culture/article/20230103-the-historical-origins-of-the-de-cluttered-home</a>
It&#39;s a swift and effective overview of architectural minimalism and the cyclical waxing and waning of fashion for de-cluttered interiors. The pendulum has swung towards maximalism and eclecticism for a bit now, and perhaps there are hints that it is starting to swing back. I suspect the article presents too linear a summary, as there always seem to be holdouts that linger on until suddenly becoming “in” again as the pendulum swings back. But this piece got me thinking about how much minimalism is cyclical in other areas outside its home base of architecture and design.</p>



<p>For teaching and learning, one could perhaps think of Holt (unschooling) or Dewey in terms of minimalist practice. Or, going way back, the Socratic method, which Plato is at pains to show off as distinctive, often seems minimalist today. An interesting feature of minimalism in education is the way that it feels like more of a shifting target, impacted by the passage of time, than does architectural minimalism. Less stuff is less. Clean walls or lines are empty. But minimalism in pedagogy isn&#39;t absence or emptiness or simply less stuff. It might mean making the most out of as little as possible, but that&#39;s something a bit different from the kind of minimalism that is so tangible in design. Things that seem minimalist now were almost always not so when first (or most famously) practiced. Initially they may have been radical or driven by ideas and ideology outside of pedagogy (similar to the way design minimalism was impacted by philosophy, religion, and the like), but self-conscious minimalism in education seems more often an epiphenomenon of pedagogical criticism. A good way to act contrary to prevailing practice is to go back to some sort of putative foundation or, alternatively, to remove elements that others take for granted.</p>

<p>There&#39;s value in that contrarianism, but I&#39;m struck by how much more important the other strand of minimalism might be, namely the way minimalism can overlap with sustainability. Minimalism in education can have greater overlap with sustainability than design minimalism tends to do. As the article I cited above notes, some minimalist design elements have very low embodied energy (e.g. rocks) but many are energy intensive to manufacture and transport (e.g. steel). The two are not always aligned in design.</p>

<p>This idea of embodied energy is a useful concept when applied to pedagogy. How much energy are we expending on tools relative to pedagogical value? What&#39;s the embodied energy, in terms of attention, preparation time, or training time, relative to the eventual outcome?</p>

<p>Our pedagogical workspace isn&#39;t just the physical or the object. Stuff isn&#39;t the only measure. For learning, the “stuff” can be squishier quantities: time, attention, pace. Computing has its own “stuff” to add to the equation: electricity, money, human time (again) for setup and maintenance. All of that combined makes up the embodied energy of pedagogy. As such, there is value in keeping that as low as possible, not as an aesthetic choice, but because in most cases those are exactly the limited resources effective pedagogy requires.</p>
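
<p>As a toy back-of-envelope illustration of that tally (the categories come from above; every number here is an invented placeholder, not a measurement):</p>

<pre><code># Python: toy tally of the embodied energy of one pedagogical tool,
# measured in hours. All figures are invented placeholders.

costs = {
    "teacher_prep": 6.0,        # setup and lesson redesign
    "training": 2.0,            # learning the tool itself
    "student_onboarding": 1.5,  # class time lost to friction
    "maintenance_per_term": 3.0,
}

embodied = sum(costs.values())
learning_value = 8.0  # hypothetical hours of real learning enabled

print(f"embodied energy: {embodied:.1f}h of scarce human resources")
print(f"value-to-energy ratio: {learning_value / embodied:.2f}")  # want > 1
</code></pre>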

<p><a href="https://minimalistedtech.org/tag:minimalism" class="hashtag"><span>#</span><span class="p-category">minimalism</span></a> <a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:sustainableeducation" class="hashtag"><span>#</span><span class="p-category">sustainableeducation</span></a> <a href="https://minimalistedtech.org/tag:sustainabletech" class="hashtag"><span>#</span><span class="p-category">sustainabletech</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a></p>

<p><em>postscript: This is not a post about ChatGPT, but I would be remiss not to mention that one under-discussed aspect of large language models for use in education is sustainability. These models are expensive to develop both in hardware and in energy costs. The fact that OpenAI is currently showing off ChatGPT “for free” (= they want your data) may be masking the fact that these models are most definitely not free. More on that in a later post.</em></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/de-cluttered-pedagogy-and-embodied-energy</guid>
      <pubDate>Thu, 05 Jan 2023 18:45:00 +0000</pubDate>
    </item>
  </channel>
</rss>