<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>generativeAI — Minimalist EdTech</title>
    <link>https://minimalistedtech.org/tag:generativeAI</link>
    <description>Less is more in technology and in education</description>
    <pubDate>Fri, 17 Apr 2026 06:42:27 +0000</pubDate>
    <image>
      <url>https://i.snap.as/qrAhYX2v.jpg</url>
      <title>generativeAI — Minimalist EdTech</title>
      <link>https://minimalistedtech.org/tag:generativeAI</link>
    </image>
    <item>
      <title>Finding Value in the Impending Tsunami of Generated Content</title>
      <link>https://minimalistedtech.org/finding-value-in-the-impending-tsunami-of-generated-content?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The garbage pile of generative &#34;AI&#34;&#xA;&#xA;The generative &#34;AI&#34; hype cycle has been at its peak for the past month or so, and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises), and most of the obvious consequences and outcomes get overlooked. &#xA;&#xA;!--more--&#xA;&#xA;One such obvious consequence is that there are tidal waves of bullshittery about to hit our shores. (This first wave is a minor high tide compared to what is coming....) Reconstituted text, images, video, audio, avatars and fake people are pretty much guaranteed across a wide variety of areas, a landscape where education is only one small province. We won&#39;t be able to tell real from fake or, perhaps more troubling, I don&#39;t think we&#39;ll care so long as it scratches the right itch or feeds the right need. &#xA;&#xA;The question across those domains will be whether we value authenticity. For things like boilerplate email, sales copy, code, and a wealth of other activities, I think the answer will be that authenticity doesn&#39;t matter that much. But that&#39;s where education is different. Authenticity should matter, not because of the habitual exercise of needing to assign grades to work that was not plagiarized or copied or whatever other vice one can ascribe, but because without authenticity there is no learning. Faking it is great for getting to the ends. But education is about the means; ends (tests, essays, etc.) have always been imperfect proxies. Beyond the authenticity of student work, we have a very familiar issue of how students, or learners more broadly, know what kinds of information to trust. While the bulk of attention thus far has been on the nature of the emerging generative &#34;AI&#34; toolkit and the back and forth between fearing cheating vs. fostering creativity with such tools, the real impact will be felt indirectly, in the proliferation of &#34;knowledge&#34; generated by and mediated through generative AI tools. It is the old Wikipedia debate, but supercharged with hitherto unthought-of levels of efficacious bullshittery. &#xA;&#xA;Ten years ago, amid the proliferation of data, there was a clarion call that academic knowledge fields needed more curation. For example, http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html is one of many such calls for increased digital curation of data. The variety of startups applying generative &#34;AI&#34; to learning or, more broadly, to varieties of search and summarization, tend to promote the message that curation is not necessary. (Just google &#34;sequoia generative ai market map&#34; or similar; https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/.) Or, rather, the question of curation has perhaps not entered their thinking. Automagically, search, summarization, or chatbots using generative AI will latch on to the most relevant things for your individual query. Consumerism is a given, such that the only question is how the system can serve up results to a consuming user. LLMs have thus far been gaining ground through hoovering up ever more data. That makes them garbage collectors, even with careful controls to make sure that bias is minimized and good data is optimized. 
Optimistically one might imagine that these technologies could allow for curation to happen at a different stage, at the building of the model, or in fine-tuning the model for particular use cases. Or the context provided by the consumer is a sort of after-the-fact filter on the massive amounts of knowledge. But that is a very light veneer of the kind of knowledge curation that separates the wheat from the chaff, that ensures that what&#39;s being served up isn&#39;t utter bullshit that sounds close enough.&#xA;&#xA;There are two levels of authenticity then to keep an eye on. The surface one is with students themselves and the process of learning. Are the people being authentic? Then there&#39;s the second, at the level of knowledge curation. Is that curation authentic and legit? I suspect both will require direct and focused effort to foster amidst readily available misinformation. For LLMs in particular, we are looking now at an exacerbated version of Wikipedia bias. If something is statistically weighted as more likely but expertly verified to be wrong or misleading, how do those concerns get balanced? It is not merely that generative &#34;AI&#34; can produce different outcomes given the same inputs, it&#39;s that there is not necessarily a clear account of why those two different ideas are held in mind at the same time. &#xA;&#xA;Undoubtedly, such issues will be smoothed over and it will all be more nuanced as these technologies develop and are deployed. The early days of autocomplete were rife with inaccuracies, bias, and garbage. And now we treat it like any other tool. Some may ignore it but most simply use it when convenient and don&#39;t think twice about the biases or thought patterns it subtly instills. Generative &#34;AI&#34; will be no different. It will soon become another layer of bullshit which is sometimes useful, often ignored, and just one more thing to take account of when negotiating authenticity of learners and reliability of knowledge. &#xA;&#xA;This is all to say that the tool hasn&#39;t changed the essential question. Do we actually value authenticity in the learning process? Do we care about not just the verifiability of knowledge through citation (which, incidentally, Google seems to be focusing on in its response to OpenAI, among others) but about that thing formerly known as &#34;truth&#34;, at least as an asymptotic goal if not reality? &#xA;&#xA;It&#39;s going to be messy. Truth-y enough will be good enough for many. And many structures in education are already transactional to an extent that authenticity is a pesky anti-pattern, a minor detail to be managed rather than a central feature of the learning experience. &#xA;&#xA;In more optimistic moments I wonder whether the value of generative &#34;AI&#34; can lie not in its products but in the opportunity it creates to further dialogue. If we keep our focus on fostering authenticity in students and authenticity in knowledge, then it can be a useful tool for first drafts of knowledge. If we let it become the final word, then I fear we will simply be awash in a smooth-talking version of the internet&#39;s detritus. &#xA;&#xA;#minimalistedtech #generativeai #chatgpt #edtech #education #learning]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/FKMg3Rsd.jpg" alt="The garbage pile of generative &#34;AI&#34;"/></p>

<p>The generative “AI” hype cycle has been at its peak for the past month or so, and it follows completely predictable tech patterns. Hypers tout all the amazing, miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises), and most of the obvious consequences and outcomes get overlooked.</p>



<p>One such obvious consequence is that there are tidal waves of bullshittery about to hit our shores. (This first wave is a minor high tide compared to what is coming....) Reconstituted text, images, video, audio, avatars and fake people are pretty much guaranteed across a wide variety of areas, a landscape where education is only one small province. We won&#39;t be able to tell real from fake or, perhaps more troubling, I don&#39;t think we&#39;ll care so long as it scratches the right itch or feeds the right need.</p>

<p>The question across those domains will be whether we value authenticity. For things like boilerplate email, sales copy, code, and a wealth of other activities, I think the answer will be that authenticity doesn&#39;t matter that much. But that&#39;s where education is different. Authenticity should matter, not because of the habitual exercise of needing to assign grades to work that was not plagiarized or copied or whatever other vice one can ascribe, but because without authenticity there is no learning. Faking it is great for getting to the ends. But education is about the means; ends (tests, essays, etc.) have always been imperfect proxies. Beyond the authenticity of student work, we have a very familiar issue of how students, or learners more broadly, know what kinds of information to trust. While the bulk of attention thus far has been on the nature of the emerging generative “AI” toolkit and the back and forth between fearing cheating vs. fostering creativity with such tools, the real impact will be felt indirectly, in the proliferation of “knowledge” generated by and mediated through generative AI tools. It is the old Wikipedia debate, but supercharged with hitherto unthought-of levels of efficacious bullshittery.</p>

<p>Ten years ago, amid the proliferation of data, there was a clarion call that academic knowledge fields needed more curation. For example, <a href="http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html">http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html</a> is one of many such calls for increased digital curation of data. The variety of startups applying generative “AI” to learning or, more broadly, to varieties of search and summarization, tend to promote the message that curation is not necessary. (Just google “sequoia generative ai market map” or similar; <a href="https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/">https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/</a>.) Or, rather, the question of curation has perhaps not entered their thinking. Automagically, search, summarization, or chatbots using generative AI will latch on to the most relevant things for your individual query. Consumerism is a given, such that the only question is how the system can serve up results to a consuming user. LLMs have thus far been gaining ground through hoovering up ever more data. That makes them garbage collectors, even with careful controls to make sure that bias is minimized and good data is optimized. Optimistically one might imagine that these technologies could allow for curation to happen at a different stage, at the building of the model, or in fine-tuning the model for particular use cases. Or the context provided by the consumer is a sort of after-the-fact filter on the massive amounts of knowledge. But that is a very light veneer of the kind of knowledge curation that separates the wheat from the chaff, that ensures that what&#39;s being served up isn&#39;t utter bullshit that sounds close enough.</p>

<p>There are two levels of authenticity then to keep an eye on. The surface one is with students themselves and the process of learning. Are the people being authentic? Then there&#39;s the second, at the level of knowledge curation. Is that curation authentic and legit? I suspect both will require direct and focused effort to foster amidst readily available misinformation. For LLMs in particular, we are looking now at an exacerbated version of Wikipedia bias. If something is statistically weighted as more likely but expertly verified to be wrong or misleading, how do those concerns get balanced? It is not merely that generative “AI” can produce different outcomes given the same inputs, it&#39;s that there is not necessarily a clear account of why those two different ideas are held in mind at the same time.</p>

<p>Undoubtedly, such issues will be smoothed over and it will all be more nuanced as these technologies develop and are deployed. The early days of autocomplete were rife with inaccuracies, bias, and garbage. And now we treat it like any other tool. Some may ignore it but most simply use it when convenient and don&#39;t think twice about the biases or thought patterns it subtly instills. Generative “AI” will be no different. It will soon become another layer of bullshit which is sometimes useful, often ignored, and just one more thing to take account of when negotiating authenticity of learners and reliability of knowledge.</p>

<p>This is all to say that the tool hasn&#39;t changed the essential question. Do we actually value authenticity in the learning process? Do we care about not just the verifiability of knowledge through citation (which, incidentally, Google seems to be focusing on in its response to OpenAI, among others) but about that thing formerly known as “truth”, at least as an asymptotic goal if not reality?</p>

<p>It&#39;s going to be messy. Truth-y enough will be good enough for many. And many structures in education are already transactional to an extent that authenticity is a pesky anti-pattern, a minor detail to be managed rather than a central feature of the learning experience.</p>

<p>In more optimistic moments I wonder whether the value of generative “AI” can lie not in its products but in the opportunity it creates to further dialogue. If we keep our focus on fostering authenticity in students and authenticity in knowledge, then it can be a useful tool for first drafts of knowledge. If we let it become the final word, then I fear we will simply be awash in a smooth-talking version of the internet&#39;s detritus.</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:generativeai" class="hashtag"><span>#</span><span class="p-category">generativeai</span></a> <a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/finding-value-in-the-impending-tsunami-of-generated-content</guid>
      <pubDate>Sun, 15 Jan 2023 19:02:04 +0000</pubDate>
    </item>
    <item>
      <title>Pedagogy and Handwritten Assignments</title>
      <link>https://minimalistedtech.org/pedagogy-and-handwritten-assignments?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;A recent opinion piece in WaPo by journalist Markham Heid tackles the ChatGPT teacher freakout by proposing handwritten essays as a way to blunt the inauthenticity threat posed by our emerging AI super-lords. I&#39;ve seen the requisite pushback on this piece around accessibility, but I think the bulk of criticism (at least what I&#39;ve seen) still misses the most important point. If we treat writing assignments as transactional, then tools like ChatGPT (or the emerging assisted writing players, whether SudoWrite or Lex, etc.) may seem like an existential threat. Generative AI may well kill off most transactional writing (not just in education; I suspect boilerplate longform writing will increasingly be a matter of text completion). I have no problem with that. But writing as part of pedagogy doesn&#39;t have to be and probably shouldn&#39;t be solely transactional. It should be dialogic, and as such, should always involve deep engagement with the medium along with the message. ChatGPT just makes urgent what might have otherwise been too easy to ignore.&#xA;&#xA;!--more--&#xA;&#xA;I&#39;ve had students do handwritten work, particularly in-class writing, for many years. So I&#39;ve done many variations and experiments in the broad area of accepting handwritten writing from students -- more responsibly, I should add, with a lot of explicit thought about accessibility and inequity pitfalls, and with much more structure than simply doing handwritten submission -- and there are huge benefits to incorporating handwritten work as part of the pedagogical toolkit in the digital age. For many students the change of speed in their thought leads to insights. For others the frustration with speed takes them back to their default writing tech with a set of questions and awareness of practice they didn&#39;t have. For many the alternation of media catalyzes some insights. In almost all cases it is jarring enough that productive thought follows. In no cases is it really relevant as a measure of authenticity. &#xA;&#xA;In a way this isn&#39;t surprising. Writers (outside of any academic or pedagogical context) have a wide variety of habits around their writing, often involving some combination of handwritten drafting and notes turning into some combination of software and computing. Some people dictate. Some people draft with typewriters. Most students simply haven&#39;t thought through those choices the way that people who spend much of their time writing have.&#xA;&#xA;Students are just as diverse in their technological preferences. The only constant I&#39;ve seen with students is that most tend not to have thought a lot about what tools they use for writing. They work on a computer because that&#39;s what is given to them or that&#39;s what it feels like they are supposed to use. They use Google Docs (or Word or perhaps now Notion or note software for some) because that&#39;s what everyone uses. The realization that there are other tools out there, from the structured and specialized to the minimalist and &#34;distraction-free&#34;, is a minor revelation for some. Writing by hand is something that they feel they have graduated out of once they leave elementary school. All of these considerations are essentially social and habitual. Indeed, a lot of the comments I saw on Heid&#39;s piece described how people feel they write better on computers or don&#39;t have the patience for handwriting. 
That&#39;s all legit and shouldn&#39;t be ignored (and is why Heid&#39;s proposal is naive as it stands). Heid misses the crucial difference here between using technology out of habit (because that&#39;s what the teacher says, or because that&#39;s the way things have to be structured so we can assess authenticity) and self-aware use of technology. Thwarting cheating isn&#39;t a pedagogical goal; fostering critical and intentional use of technology can and should be. Moreover, controlling your tools is an essential part of writing. Just as students need to learn how to wield a pencil early in elementary school, they need to learn how to wield computers and what computers allow as a requisite part of navigating the kinds of writing and communication that will fill their world.&#xA;&#xA;Most of the assignments I&#39;ve given students that involve handwriting are in some way comparative, structured around the differences or similarities between writing tools. Writing technology and its consequences should always be up for discussion. The assumption that it isn&#39;t, that our tools are transparent to the act of creation, has been a convenient shortcut in the ritual of assignment submission. We take it as a given that we use such and such range of tools for writing at a particular time. AI tools are a prompt to swing the rhetorical pendulum back and focus on medium as a conduit to message.&#xA;&#xA;All the hype over ChatGPT masks a very old issue, perhaps one of the oldest (looking at you, Phaedrus). Text generation with large language models is a specialized case of the fundamental question of rhetoric: what difference does it make that we use a particular technology for our words? There&#39;s a continuum and a long (and often studied) history of change, from computers and mobile phones of today back to typewriters, pens, manuscripts, papyrus, and inscription. Beneath the hype, ChatGPT demonstrates that we can supercharge the quill so much that it might seem to do the writing for us, almost like magic. But it&#39;s still a pen, a tool, a technology which does something automatically which otherwise had to be done in a different way. &#xA;&#xA;#chatgpt #handwriting #edtech #minimalistedtech #generativeAI]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/zWTfB5kd.jpg" alt=""/></p>

<p>A <a href="https://www.washingtonpost.com/opinions/2022/12/29/handwritten-essays-defeat-chatgpt/">recent opinion piece in WaPo</a> by journalist <a href="http://www.markhamheid.com/">Markham Heid</a> tackles the ChatGPT teacher freakout by proposing handwritten essays as a way to blunt the inauthenticity threat posed by our emerging AI super-lords. I&#39;ve seen the requisite pushback on this piece around accessibility, but I think the bulk of criticism (at least what I&#39;ve seen) still misses the most important point. If we treat writing assignments as transactional, then tools like ChatGPT (or the emerging assisted writing players, whether SudoWrite or Lex, etc.) may seem like an existential threat. Generative AI may well kill off most transactional writing (not just in education. I suspect boilerplate longform writing will increasingly be a matter of text completion). I have no problem with that. But writing as part of pedagogy doesn&#39;t have to be and probably shouldn&#39;t be solely transactional. It should be dialogic, and as such, should <em>always</em> involve deep engagement with the medium along with the message. ChatGPT just makes urgent what might have otherwise been too easy to ignore.</p>



<p>I&#39;ve had students do handwritten work, particularly in-class writing, for many years. So I&#39;ve done many variations and experiments in the broad area of accepting handwritten writing from students — more responsibly, I should add, with a lot of explicit thought about accessibility and inequity pitfalls, and with much more structure than simply doing handwritten submission — and there are huge benefits to incorporating handwritten work as part of the pedagogical toolkit in the digital age. For many students the change of speed in their thought leads to insights. For others the frustration with speed takes them back to their default writing tech with a set of questions and awareness of practice they didn&#39;t have. For many the alternation of media catalyzes some insights. In almost all cases it is jarring enough that productive thought follows. In no cases is it really relevant as a measure of authenticity.</p>

<p>In a way this isn&#39;t surprising. Writers (outside of any academic or pedagogical context) have a wide variety of habits around their writing, often involving some combination of handwritten drafting and notes turning into some combination of software and computing. Some people dictate. Some people draft with typewriters. Most students simply haven&#39;t thought through those choices the way that people who spend much of their time writing have.</p>

<p>Students are just as diverse in their technological preferences. The only constant I&#39;ve seen with students is that most tend not to have thought a lot about what tools they use for writing. They work on a computer because that&#39;s what is given to them or that&#39;s what it feels like they are supposed to use. They use Google Docs (or Word or perhaps now Notion or note software for some) because that&#39;s what everyone uses. The realization that there are other tools out there, from the structured and specialized to the minimalist and “distraction-free”, is a minor revelation for some. Writing by hand is something that they feel they have graduated out of once they leave elementary school. All of these considerations are essentially social and habitual. Indeed, a lot of the comments I saw on Heid&#39;s piece described how people feel they write better on computers or don&#39;t have the patience for handwriting. That&#39;s all legit and shouldn&#39;t be ignored (and is why Heid&#39;s proposal is naive as it stands). Heid misses the crucial difference here between using technology out of habit (because that&#39;s what the teacher says, or because that&#39;s the way things have to be structured so we can assess authenticity) and self-aware use of technology. Thwarting cheating isn&#39;t a pedagogical goal; fostering critical and intentional use of technology can and should be. Moreover, controlling your tools is an essential part of writing. Just as students need to learn how to wield a pencil early in elementary school, they need to learn how to wield computers and what computers allow as a requisite part of navigating the kinds of writing and communication that will fill their world.</p>

<p>Most of the assignments I&#39;ve given students that involve handwriting are in some way comparative, structured around the differences or similarities between writing tools. Writing technology and its consequences should always be up for discussion. The assumption that it isn&#39;t, that our tools are transparent to the act of creation, has been a convenient shortcut in the ritual of assignment submission. We take it as a given that we use such and such range of tools for writing at a particular time. AI tools are a prompt to swing the rhetorical pendulum back and focus on medium as a conduit to message.</p>

<p>All the hype over ChatGPT masks a very old issue, perhaps one of the oldest (looking at you, <em>Phaedrus</em>). Text generation with large language models is a specialized case of the fundamental question of rhetoric: what difference does it make that we use a particular technology for our words? There&#39;s a continuum and a long (and often studied) history of change, from computers and mobile phones of today back to typewriters, pens, manuscripts, papyrus, and inscription. Beneath the hype, ChatGPT demonstrates that we can supercharge the quill so much that it might seem to do the writing for us, almost like magic. But it&#39;s still a pen, a tool, a technology which does something automatically which otherwise had to be done in a different way.</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:handwriting" class="hashtag"><span>#</span><span class="p-category">handwriting</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:generativeAI" class="hashtag"><span>#</span><span class="p-category">generativeAI</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/pedagogy-and-handwritten-assignments</guid>
      <pubDate>Wed, 04 Jan 2023 17:03:36 +0000</pubDate>
    </item>
  </channel>
</rss>