<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>learning &#8212; Minimalist EdTech</title>
    <link>https://minimalistedtech.org/tag:learning</link>
    <description>Less is more in technology and in education</description>
    <pubDate>Fri, 08 May 2026 14:19:34 +0000</pubDate>
    <image>
      <url>https://i.snap.as/qrAhYX2v.jpg</url>
      <title>learning &#8212; Minimalist EdTech</title>
      <link>https://minimalistedtech.org/tag:learning</link>
    </image>
    <item>
      <title>Finding Value in the Impending Tsunami of Generated Content</title>
      <link>https://minimalistedtech.org/finding-value-in-the-impending-tsunami-of-generated-content?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The garbage pile of generative &#34;AI&#34;&#xA;&#xA;The generative &#34;AI&#34; hype cycle has been at peak hype for the past month or so and it follows completely predictable tech patterns. Hypers tout all the amazing miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises), and most of the obvious consequences and outcomes get overlooked. &#xA;&#xA;!--more--&#xA;&#xA;One such obvious consequence is that there are tidal waves of bullshittery about to hit our shores. (This first wave is a minor high tide compared to what is coming....) Reconstituted text, images, video, audio, avatars and fake people are pretty much guaranteed across a wide variety of areas, a landscape where education is only one small province. We won&#39;t be able to tell real from fake or, perhaps more troubling, I don&#39;t think we&#39;ll care so long as it scratches the right itch or feeds the right need. &#xA;&#xA;The question across those domains will be whether we value authenticity. For things like boilerplate email, sales copy, code, and a wealth of other activities, I think the answer will be that authenticity doesn&#39;t matter that much. But that&#39;s where education is different. Authenticity should matter, not because of the habitual exercise of needing to assign grades to work that was not plagiarized or copied or whatever other vice one can ascribe, but because without authenticity there is no learning. Faking it is great for getting to the ends. But education is about the means; ends (tests, essays, etc) have always been imperfect proxies. Beyond the authenticity of student work, we have a very familiar issue of how students themselves or learners know what kinds of information to trust. 
While the bulk of attention thus far has been on the nature of the emerging generative &#34;AI&#34; toolkit and the back-and-forth between fearing cheating and fostering creativity with such tools, the real impact will be felt indirectly, in the proliferation of &#34;knowledge&#34; generated by and mediated through generative AI tools. It is the old Wikipedia debate, but supercharged with hitherto unthought-of levels of efficacious bullshittery. &#xA;&#xA;Ten years ago, amid the proliferation of data, there were clarion calls that academic knowledge fields needed more curation. For example, http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html is one of many such calls for increased digital curation of data. The variety of startups applying generative &#34;AI&#34; to learning or, more broadly, to varieties of search and summarization, tend to promote the message that curation is not necessary. (Just google &#34;sequoia generative ai market map&#34; or similar; https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/.) Or, rather, the question of curation has perhaps not entered into thought. Automagically, search, summarization, or chatbots using generative AI will latch on to the most relevant things for your individual query. Consumerism is a given, such that the only question is how the system can serve up results to a consuming user. LLMs have thus far been gaining ground through hoovering up ever more data. That makes them garbage collectors, even with careful controls to make sure that bias is minimized and good data is optimized. Optimistically one might imagine that these technologies could allow for curation to happen at a different stage, at the building of the model, or in fine-tuning the model for particular use cases. Or the context provided by the consumer is a sort of after-the-fact filter on the massive amounts of knowledge. 
But that is a very light veneer of the kind of knowledge curation that separates the wheat from the chaff, that ensures that what&#39;s being served up isn&#39;t utter bullshit that sounds close enough.&#xA;&#xA;There are two levels of authenticity then to keep an eye on. The surface one is with students themselves and the process of learning. Are the people being authentic? Then there&#39;s the second, at the level of knowledge curation. Is that curation authentic and legit? I suspect on both scores it will require direct and focused effort to foster both amidst readily available misinformation. For LLMs in particular, we are looking now at an exacerbated version of Wikipedia bias. If something is statistically weighted as more likely but expertly verified to be wrong or misleading, how do those concerns get balanced? It is not merely that generative &#34;AI&#34; can produce different outcomes given the same inputs, it&#39;s that there is not necessarily a clear explanation of why those two different ideas are held in mind at the same time. &#xA;&#xA;Undoubtedly, such issues will be smoothed over and it will all be more nuanced as these technologies develop and are deployed. The early days of autocomplete were rife with inaccuracies, bias, and garbage. And now we treat it like any other tool. Some may ignore it but most simply use it when convenient and don&#39;t think twice about the biases or thought patterns it subtly instills. Generative &#34;AI&#34; will be no different. It will soon become another layer of bullshit which is sometimes useful, often ignored, and just one more thing to take account of when negotiating authenticity of learners and reliability of knowledge. &#xA;&#xA;This is all to say that the tool hasn&#39;t changed the essential question. Do we actually value authenticity in the learning process? 
Do we care about not just the verifiability of knowledge through citation (which, incidentally, Google seems to be focusing on in their response to OpenAI, among others) but about that thing formerly known as &#34;truth&#34;, at least as an asymptotic goal if not reality? &#xA;&#xA;It&#39;s going to be messy. Truth-y enough will be good enough for many. And many structures in education are already transactional to an extent that authenticity is a pesky anti-pattern, a minor detail to be managed rather than a central feature of the learning experience. &#xA;&#xA;In more optimistic moments I wonder whether the value of generative &#34;AI&#34; can lie not in its products but in the opportunity it creates to further dialogue. If we keep our focus on fostering authenticity in students and authenticity in knowledge, then it can be a useful tool for first drafts of knowledge. If we let it become the final word, then I fear we will simply be awash in a smooth-talking version of the internet&#39;s detritus. &#xA;&#xA;#minimalistedtech #generativeai #chatgpt #edtech #education #learning]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/FKMg3Rsd.jpg" alt="The garbage pile of generative &#34;AI&#34;"/></p>

<p>The generative “AI” hype cycle has been at peak hype for the past month or so and it follows completely predictable tech patterns. Hypers tout all the amazing miraculous things that will be possible; doubters wonder aloud whether these things will fail to deliver on their utopian promises (because these things always fall short of their utopian promises), and most of the obvious consequences and outcomes get overlooked.</p>



<p>One such obvious consequence is that there are tidal waves of bullshittery about to hit our shores. (This first wave is a minor high tide compared to what is coming....) Reconstituted text, images, video, audio, avatars and fake people are pretty much guaranteed across a wide variety of areas, a landscape where education is only one small province. We won&#39;t be able to tell real from fake or, perhaps more troubling, I don&#39;t think we&#39;ll care so long as it scratches the right itch or feeds the right need.</p>

<p>The question across those domains will be whether we value authenticity. For things like boilerplate email, sales copy, code, and a wealth of other activities, I think the answer will be that authenticity doesn&#39;t matter that much. But that&#39;s where education is different. Authenticity should matter, not because of the habitual exercise of needing to assign grades to work that was not plagiarized or copied or whatever other vice one can ascribe, but because without authenticity there is no learning. Faking it is great for getting to the ends. But education is about the means; ends (tests, essays, etc.) have always been imperfect proxies. Beyond the authenticity of student work, we have a very familiar issue of how students and learners know what kinds of information to trust. While the bulk of attention thus far has been on the nature of the emerging generative “AI” toolkit and the back-and-forth between fearing cheating and fostering creativity with such tools, the real impact will be felt indirectly, in the proliferation of “knowledge” generated by and mediated through generative AI tools. It is the old Wikipedia debate, but supercharged with hitherto unthought-of levels of efficacious bullshittery.</p>

<p>Ten years ago, amid the proliferation of data, there were clarion calls that academic knowledge fields needed more curation. For example, <a href="http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html">http://www.digitalhumanities.org/dhq/vol/7/2/000163/000163.html</a> is one of many such calls for increased digital curation of data. The variety of startups applying generative “AI” to learning or, more broadly, to varieties of search and summarization, tend to promote the message that curation is not necessary. (Just google “sequoia generative ai market map” or similar; <a href="https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/">https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/</a>.) Or, rather, the question of curation has perhaps not entered into thought. Automagically, search, summarization, or chatbots using generative AI will latch on to the most relevant things for your individual query. Consumerism is a given, such that the only question is how the system can serve up results to a consuming user. LLMs have thus far been gaining ground through hoovering up ever more data. That makes them garbage collectors, even with careful controls to make sure that bias is minimized and good data is optimized. Optimistically one might imagine that these technologies could allow for curation to happen at a different stage, at the building of the model, or in fine-tuning the model for particular use cases. Or the context provided by the consumer is a sort of after-the-fact filter on the massive amounts of knowledge. But that is a very light veneer of the kind of knowledge curation that separates the wheat from the chaff, that ensures that what&#39;s being served up isn&#39;t utter bullshit that sounds close enough.</p>

<p>There are two levels of authenticity then to keep an eye on. The surface one is with students themselves and the process of learning. Are the people being authentic? Then there&#39;s the second, at the level of knowledge curation. Is that curation authentic and legit? I suspect on both scores it will require direct and focused effort to foster both amidst readily available misinformation. For LLMs in particular, we are looking now at an exacerbated version of Wikipedia bias. If something is statistically weighted as more likely but expertly verified to be wrong or misleading, how do those concerns get balanced? It is not merely that generative “AI” can produce different outcomes given the same inputs, it&#39;s that there is not necessarily a clear explanation of why those two different ideas are held in mind at the same time.</p>
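<p>To make the "different outcomes given the same inputs" point concrete, here is a toy sketch, not any vendor's API: the function name, logits, and limits below are invented for illustration. Generation repeatedly samples the next token from a temperature-scaled probability distribution, so identical prompts can walk different paths.</p>

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Pick a token index by sampling from a softmax over raw scores.

    Toy illustration: a real model repeats this step for every token,
    which is why the same prompt can produce different outputs.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    # rng.choices draws proportionally to the weights, so repeated
    # calls with identical logits need not return the same index.
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]
```

<p>With <code>temperature</code> near zero this collapses toward deterministic autocomplete; raising it widens the spread of plausible continuations, which is the "dash of randomness" at issue here.</p>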

<p>Undoubtedly, such issues will be smoothed over and it will all be more nuanced as these technologies develop and are deployed. The early days of autocomplete were rife with inaccuracies, bias, and garbage. And now we treat it like any other tool. Some may ignore it but most simply use it when convenient and don&#39;t think twice about the biases or thought patterns it subtly instills. Generative “AI” will be no different. It will soon become another layer of bullshit which is sometimes useful, often ignored, and just one more thing to take account of when negotiating authenticity of learners and reliability of knowledge.</p>

<p>This is all to say that the tool hasn&#39;t changed the essential question. Do we actually value authenticity in the learning process? Do we care about not just the verifiability of knowledge through citation (which, incidentally, Google seems to be focusing on in their response to OpenAI, among others) but about that thing formerly known as “truth”, at least as an asymptotic goal if not reality?</p>

<p>It&#39;s going to be messy. Truth-y enough will be good enough for many. And many structures in education are already transactional to an extent that authenticity is a pesky anti-pattern, a minor detail to be managed rather than a central feature of the learning experience.</p>

<p>In more optimistic moments I wonder whether the value of generative “AI” can lie not in its products but in the opportunity it creates to further dialogue. If we keep our focus on fostering authenticity in students and authenticity in knowledge, then it can be a useful tool for first drafts of knowledge. If we let it become the final word, then I fear we will simply be awash in a smooth-talking version of the internet&#39;s detritus.</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:generativeai" class="hashtag"><span>#</span><span class="p-category">generativeai</span></a> <a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/finding-value-in-the-impending-tsunami-of-generated-content</guid>
      <pubDate>Sun, 15 Jan 2023 19:02:04 +0000</pubDate>
    </item>
    <item>
      <title>Pretending to Teach</title>
      <link>https://minimalistedtech.org/pretending-to-teach?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Inspired by and forked from kettle11&#39;s world builder prompt for ChatGPT, this is a bare bones adaptation to show how low can be the lift for creating &#34;personalized AI&#34;. This relies on the fundamental teacher hacks to expand conversation: 1. devil&#39;s advocacy and 2. give me more specifics. &#xA;&#xA;Try it, adapt, and see what you think. (Full prompt below the break. Just paste into ChatGPT and go from there.)&#xA;&#xA;Some notes at the bottom.&#xA;&#xA;!--more--&#xA;&#xA;You are &#34;Contrarian&#34;, an assistant to help students think in innovative ways about familiar subjects. &#xA;&#xA;Carefully adhere to the following steps for our conversation. Do not skip any steps!:&#xA;&#xA;Introduce yourself briefly. Then ask what subject I would like help learning. Provide a few suggestions such as history, philosophy, or literature. Present these areas as a numbered list with emojis. Also offer at least 2 other subject suggestions. Wait for my response.&#xA;Choose a more specific theme. Suggest a few subtopics as options or let me choose my own option. Present subtopics as a numbered list with emojis. Wait for my response.&#xA;Briefly describe the topic and subtopic and ask if I&#39;d like to make changes. Wait for my response.&#xA;Go to the menu. Explain that I can say &#39;menu&#39; at any point in time to return to the menu. Succinctly explain the menu options.&#xA;&#xA;The Menu:&#xA;&#xA;    The menu should have the following layout and options. Add an emoji to each option. &#xA;    Add dividers and organization to the menu that are thematic to the subject area&#xA;    &#34;&#34;&#34;&#xA;        thematic emojis The Name of the Subject thematic emojis&#xA;            The Subtopic&#xA;&#xA;            [insert a thematically styled divider]&#xA;&#xA;            Conversational:&#xA;&#xA;                Open-Ended. If I choose this go to the open-ended discussion steps.&#xA;                Counter-intuitive. 
If I choose this go to the counterintuitive discussion steps.&#xA;&#xA;            Factual:&#xA;                Random Fact. If I choose this describe factual information related to the topic and subtopic&#xA;&#xA;                Biography. If I choose provide a brief biography of a historical or living individual related to the topic and subtopic&#xA;&#xA;            Freeform:&#xA;                &#xA;                Ask a question about the topic or subtopic.&#xA;                Ask to change anything about the topic or subtopic.&#xA;    &#34;&#34;&#34;&#xA;Open-ended discussion steps:&#xA;&#xA;Pose an open-ended question related to the subtopic and invite me to discuss it with you. Make this question as specific as possible, appropriate for an undergraduate-level class on this subject. Wait for my response.&#xA;When I answer, engage in a discussion with me by challenging my assumptions and beliefs based on well-grounded, existing, and specific knowledge about the topic and subtopic. Do not spend more than a few sentences explaining the background or context. Provide enough context to ask a question in order to continue the conversation.&#xA;&#xA;Counterintuitive discussion steps:&#xA;&#xA;Pose an open ended discussion question related to the topic and subtopic. Make this question as specific as possible, appropriate for a test question on an AP exam or an undergraduate course in this subject. Wait for my response.&#xA;When I respond, continue the conversation by posing counterintuitive and non-obvious ideas about the topic and subtopic. Provide a minimum amount of context needed for asking the question. 
These counterintuitive points can be from within the subtopic or can include information from related subtopics.&#xA;&#xA;Carefully follow these rules during our conversation:&#xA;&#xA;Keep responses short, concise, and easy to understand.&#xA;Do not describe your own behavior.&#xA;Stay focused on the task.&#xA;Do not get ahead of yourself.&#xA;Do not use smiley faces like :)&#xA;In every single message use a few emojis to make our conversation more fun.&#xA;Absolutely do not use more than 10 emojis in a row.&#xA;Super important rule: Do not ask me too many questions at once.&#xA;Avoid cliche writing and ideas.&#xA;Use sophisticated writing when telling stories or describing characters.&#xA;Avoid writing that sounds like an essay. This is not an essay!&#xA;Whenever you present a list of choices number each choice and give each choice an emoji.&#xA;Whenever I give too little information to continue the conversation effectively, prompt me for more information with a follow-up question about a specific aspect of my response.&#xA;Do not end an answer by saying that there are multiple ways of viewing a question. &#xA;Use bold and italics text for emphasis, organization, and style.&#xA;&#xA;Notes:&#xA;&#xA;ChatGPT is optimized to keep talking. So it is remarkably lopsided and will err on the side of spitting out boilerplate rather than just stopping. It&#39;s interesting in the context of teaching because silence is often the most effective pedagogical tool to give students time to think. I haven&#39;t seen anyone talking about how constant interaction is an impediment to learning. But I&#39;m saying it here. To be effective as a teaching aid, generative text needs to know when to stop. 
That&#39;s actually fairly easy to implement in a naive way by limiting response length based on different inputs, but it requires a bit more shaping than even a complex prompt to get it to work in one shot, mainly because the whole point of ChatGPT is to keep talking so that OpenAI can validate its model based on user interaction.&#xA;&#xA;An extensive prompt like this, which imitates interactivity, is fairly susceptible to minor changes. What seems like a small change can in fact throw it off into a tangent. Particularly in defining rules of how it converses, I&#39;ve added a few based on the more creative task that was part of the world builder gist that inspired this. &#xA;&#xA;I keep thinking that what we&#39;ve got for now is a pseudoknowledge generator. It&#39;s like knowledge, not exactly wrong in a clear way, but also not exactly legit. We need a way to think through this, a grand theory of bullshit in order to understand what&#39;s going on here, because language models are the ultimate bullshit generators. But that&#39;s the rub of course, because 80-90% of the time, bullshit is good enough to get the job done. And particularly if, like the grandmother of interactive AIs, ELIZA, we are imitating the style of socratizing, then bullshit can be fairly functional. (I do not think that the stylistic surface of Socratic dialogue is substantive or effective Socratic dialogue or teaching in any way, for the record.)&#xA;&#xA;This sort of prompt can get wonky sometimes and isn&#39;t perfect. It is also funny sometimes that it is so insistent that its name is ChatGPT despite giving it a specific name in the first part of the prompt. &#xA;&#xA;The foundational model for this technology is still that of autocomplete. That is the origin of the technique and that is the underlying DNA of the method. Part of why I like this kind of complex step-driven prompt as an example is because it doesn&#39;t look like autocomplete in most respects. 
It looks like there&#39;s a script, a backend that is following some sort of programmed logic. But even that is still just autocomplete sifting through a range of possibilities with just a dash of randomness thrown in to make it seem real. &#xA;&#xA;#chatgpt #llm #edtech #socraticmethod #learning #teaching]]&gt;</description>
      <content:encoded><![CDATA[<p>Inspired by and forked from kettle11&#39;s <a href="https://gist.github.com/kettle11/33413b02b028b7ddd35c63c0894caedc">world builder prompt</a> for ChatGPT, this is a bare-bones adaptation to show how low the lift can be for creating “personalized AI”. This relies on the fundamental teacher hacks to expand conversation: 1. devil&#39;s advocacy and 2. give me more specifics.</p>

<p>Try it, adapt, and see what you think. (Full prompt below the break. Just paste into ChatGPT and go from there.)</p>

<p>Some notes at the bottom.</p>



<pre><code>You are &#34;Contrarian&#34;, an assistant to help students think in innovative ways about familiar subjects. 

Carefully adhere to the following steps for our conversation. Do not skip any steps!:

1. Introduce yourself briefly. Then ask what subject I would like help learning. Provide a few suggestions such as history, philosophy, or literature. Present these areas as a numbered list with emojis. Also offer at least 2 other subject suggestions. Wait for my response.
2. Choose a more specific theme. Suggest a few subtopics as options or let me choose my own option. Present subtopics as a numbered list with emojis. Wait for my response.
3. Briefly describe the topic and subtopic and ask if I&#39;d like to make changes. Wait for my response.
4. Go to the menu. Explain that I can say &#39;menu&#39; at any point in time to return to the menu. Succinctly explain the menu options.

The Menu:

    The menu should have the following layout and options. Add an emoji to each option. 
    Add dividers and organization to the menu that are thematic to the subject area
    &#34;&#34;&#34;
        thematic emojis ***The Name of the Subject*** thematic emojis
            The Subtopic

            [insert a thematically styled divider]

            Conversational:

                * Open-Ended. If I choose this go to the open-ended discussion steps.
                * Counter-intuitive. If I choose this go to the counterintuitive discussion steps.

            Factual:
                * Random Fact. If I choose this describe factual information related to the topic and subtopic

                * Biography. If I choose provide a brief biography of a historical or living individual related to the topic and subtopic

            Freeform:
                
                * Ask a question about the topic or subtopic.
                * Ask to change anything about the topic or subtopic.
    &#34;&#34;&#34;
Open-ended discussion steps:

1. Pose an open-ended question related to the subtopic and invite me to discuss it with you. Make this question as specific as possible, appropriate for an undergraduate-level class on this subject. Wait for my response.
2. When I answer, engage in a discussion with me by challenging my assumptions and beliefs based on well-grounded, existing, and specific knowledge about the topic and subtopic. Do not spend more than a few sentences explaining the background or context. Provide enough context to ask a question in order to continue the conversation.

Counterintuitive discussion steps:

1. Pose an open ended discussion question related to the topic and subtopic. Make this question as specific as possible, appropriate for a test question on an AP exam or an undergraduate course in this subject. Wait for my response.
2. When I respond, continue the conversation by posing counterintuitive and non-obvious ideas about the topic and subtopic. Provide a minimum amount of context needed for asking the question. These counterintuitive points can be from within the subtopic or can include information from related subtopics.

Carefully follow these rules during our conversation:

* Keep responses short, concise, and easy to understand.
* Do not describe your own behavior.
* Stay focused on the task.
* Do not get ahead of yourself.
* Do not use smiley faces like :)
* In every single message use a few emojis to make our conversation more fun.
* Absolutely do not use more than 10 emojis in a row.
* *Super important rule:* Do not ask me too many questions at once.
* Avoid cliche writing and ideas.
* Use sophisticated writing when telling stories or describing characters.
* Avoid writing that sounds like an essay. This is not an essay!
* Whenever you present a list of choices number each choice and give each choice an emoji.
* Whenever I give too little information to continue the conversation effectively, prompt me for more information with a follow-up question about a specific aspect of my response.
* Do not end an answer by saying that there are multiple ways of viewing a question. 
* Use bold and italics text for emphasis, organization, and style.
</code></pre>

<p>Notes:</p>
<ul><li><p>ChatGPT is optimized to keep talking. So it is remarkably lopsided and will err on the side of spitting out boilerplate rather than just stopping. It&#39;s interesting in the context of teaching because silence is often the most effective pedagogical tool to give students time to think. I haven&#39;t seen anyone talking about how constant interaction is an impediment to learning. But I&#39;m saying it here. To be effective as a teaching aid, <em>generative text needs to know when to stop.</em> That&#39;s actually fairly easy to implement in a naive way by limiting response length based on different inputs, but it requires a bit more shaping than even a complex prompt to get it to work in one shot, mainly because the whole point of ChatGPT is to keep talking so that OpenAI can validate its model based on user interaction.</p></li>

<li><p>An extensive prompt like this, which imitates interactivity, is fairly susceptible to minor changes. What seems like a small change can in fact throw it off into a tangent. Particularly in defining rules of how it converses, I&#39;ve added a few based on the more creative task that was part of the world builder gist that inspired this.</p></li>

<li><p>I keep thinking that what we&#39;ve got for now is a pseudoknowledge generator. It&#39;s like knowledge, not exactly wrong in a clear way, but also not exactly legit. We need a way to think through this, a grand theory of bullshit in order to understand what&#39;s going on here, because <em>language models are the ultimate bullshit generators</em>. But that&#39;s the rub of course, because 80-90% of the time, bullshit is good enough to get the job done. And particularly if, like the grandmother of interactive AIs, <a href="https://en.wikipedia.org/wiki/ELIZA">ELIZA</a>, we are imitating the style of socratizing, then bullshit can be fairly functional. (I do not think that the stylistic surface of Socratic dialogue is substantive or effective Socratic dialogue or teaching in any way, for the record.)</p></li>

<li><p>This sort of prompt can get wonky sometimes and isn&#39;t perfect. It is also funny sometimes that it is so insistent that its name is ChatGPT despite giving it a specific name in the first part of the prompt.</p></li>

<li><p>The foundational model for this technology is still that of autocomplete. That is the origin of the technique and that is the underlying DNA of the method. Part of why I like this kind of complex step-driven prompt as an example is because it doesn&#39;t look like autocomplete in most respects. It looks like there&#39;s a script, a backend that is following some sort of programmed logic. But even that is still just autocomplete sifting through a range of possibilities with just a dash of randomness thrown in to make it seem real.</p></li></ul>
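<p>The "needs to know when to stop" note above can be sketched naively. Everything here is hypothetical (the function, the move names, and the limits are invented for illustration, not part of any API): cap the reply length according to the kind of conversational move, cutting at sentence boundaries rather than mid-thought.</p>

```python
def cap_response(text, kind):
    """Truncate a generated reply based on the pedagogical move being made."""
    # Hypothetical per-move sentence budgets: an open question should
    # mostly be silence afterward; a fact can run a little longer.
    limits = {
        "open_question": 2,
        "follow_up": 3,
        "fact": 5,
    }
    max_sentences = limits.get(kind, 4)
    # Crude sentence split on periods; real code would handle
    # abbreviations, questions, and exclamations.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."
```

<p>This is only the easy half: as noted above, getting a chat model itself to respect such a budget in one shot takes more shaping than a prompt alone.</p>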

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:llm" class="hashtag"><span>#</span><span class="p-category">llm</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:socraticmethod" class="hashtag"><span>#</span><span class="p-category">socraticmethod</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/pretending-to-teach</guid>
      <pubDate>Sat, 14 Jan 2023 21:39:53 +0000</pubDate>
    </item>
    <item>
      <title>Can Technology Value Reflection over Engagement?</title>
      <link>https://minimalistedtech.org/can-technology-value-reflection-over-engagement?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;So much edtech marketing tries to sell the idea of &#34;engagement&#34;; I&#39;ve written before about why I find that phrase so pernicious. While I&#39;m still bothered by the way that selling &#34;engagement&#34; through technology makes it seem like what teachers do is inherently not engaging (e.g. &#34;boring&#34; lecture, plain old non-technologized classrooms), the more damaging part of buying into the marketer&#39;s story, that technology&#39;s goal is &#34;engagement&#34;, comes from the way such framing distracts from the more valuable -- and undervalued -- part of teaching and learning: reflection. I would put it starkly: knowledge and the act of knowing come not from engagement but from reflection percolating and punctuated over time.&#xA;&#xA;!--more--&#xA;&#xA;Reflectiveness is not commonly (ever?) a stated value of major educational technologies. Why is that? Is it that it&#39;s too hard? Or is it that this is so obviously the business of human-to-human interaction that to claim a technology allows students to be reflective is a bridge too far? Or is it that the lure of engagement so nicely meshes with the way that people think of technology? Engagement is, in my mind, simply the acceptable way to claim technological stickiness, made to sound like it&#39;s a good thing rather than good-for-the-platform, not-so-great-for-the-individual behavior modification, e.g. Facebook or Instagram or Candy Crush or any other semi-addictive technology which aims to maximize clicks and eyeball time (aka &#34;engagement&#34;) on their platform. &#xA;&#xA;Outside of education, what technologies foster reflection more than quick hits? This is a fairly pressing issue as we struggle collectively to figure out the role of social media in public and private. Some platforms, particularly those for writing and blogging, do often foster reflectiveness (e.g. write.as!). 
There are plentiful calming and stress-relieving apps or sites (let&#39;s say, as examples somewhat at random, tinybuddha, zenhabits and similar). So I don&#39;t want to be unfair to educational technologies. This is a general technology problem. I suppose though that what matters to me here is that the need for valuing reflection is higher in learning environments. We should more actively try to maximize the ability to be reflective while using technologies in learning environments. &#xA;&#xA;With that in mind, we turn to a typical LMS and... ahhhhhhh!! oh. sweet. @#%@%. Why must I click so much and go through all this just to get a single assignment put into the system? Why does my gradebook run like Lotus 1-2-3 on vacuum tubes? Is there someone updating that database by hand and carrier pigeon? And why are cells not really spreadsheet cells and why is that number now different from what I entered and @#%@#% this is already stressful. I haven&#39;t even gotten to the student experience and it&#39;s already just... messy.&#xA;&#xA;LMSes are easy targets, because they have to do too much for too many people. I&#39;m sympathetic to that problem, as it will always lead to compromises and bad outcomes. But I&#39;m genuinely curious whether there is software out there that people regularly use in education that fosters reflection rather than surface interactivity and &#34;engagement&#34;. My sense is that we&#39;re not used to thinking about technology in general in these terms, outside perhaps of some writing tools -- and even there that&#39;s not necessarily how many people use them. &#xA;&#xA;How do we make technologies that facilitate reflection? What would technology that helps with that look like? Or is reflection what we do when we take a break from technology?&#xA;&#xA;#minimalistedtech #learning #teaching #edtech&#xA;&#xA;Postscript:&#xA;One of my favorite methods in classroom teaching has long been a form of technological disruption. 
Not me, but similar to things I have often done: https://www.insidehighered.com/views/2016/03/18/teaching-students-new-ways-thinking-through-typewriter-essay. Changing the technology we use for classroom things, whether going high-tech or low-tech, always leads to interesting insights and questioning of assumptions. In thinking about how to foster reflection through technology, I am thinking especially of how breaking from current technology is usually the source of reflection. Perhaps current technology is simply too present to allow space for reflection. But the digital tools I enjoy for writing or making music or sketching lead me to believe that this is a matter of habit and design choice more than anything else. Why can&#39;t edtech be zentech?]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/6723aXCB.jpg" alt=""/></p>

<p>So much edtech marketing tries to sell the idea of “engagement”; <a href="https://minimalistedtech.com/banish-the-phrase-more-engaging-from-edtech-marketers">I&#39;ve written before about why I find that phrase so pernicious</a>. While I&#39;m still bothered by the way that selling “engagement” through technology makes it seem like what teachers do is inherently not engaging (e.g. “boring” lecture, plain old non-technologized classrooms), the more damaging part of buying into the marketer&#39;s story, that technology&#39;s goal is “engagement”, comes from the way such framing distracts from the more valuable — and undervalued — part of teaching and learning: reflection. I would put it starkly: knowledge and the act of knowing come not from engagement but from reflection percolating and punctuated over time.</p>



<p>Reflectiveness is not commonly (ever?) a stated value of major educational technologies. Why is that? Is it that it&#39;s too hard? Or is it that this is so obviously the business of human-to-human interaction that to claim a technology allows students to be reflective is a bridge too far? Or is it that the lure of engagement so nicely meshes with the way that people think of technology? Engagement is, in my mind, simply the acceptable way to claim technological stickiness, made to sound like it&#39;s a good thing rather than good-for-the-platform, not-so-great-for-the-individual behavior modification, e.g. Facebook or Instagram or Candy Crush or any other semi-addictive technology which aims to maximize clicks and eyeball time (aka “engagement”) on its platform.</p>

<p>Outside of education, what technologies foster reflection more than quick hits? This is a fairly pressing issue as we struggle collectively to figure out the role of social media in public and private. Some platforms, particularly those for writing and blogging, do often foster reflectiveness (e.g. write.as!). There are plentiful calming and stress-relieving apps or sites (let&#39;s say, as examples somewhat at random, <a href="https://tinybuddha.com">tinybuddha</a>, <a href="https://zenhabits.com">zenhabits</a> and similar). So I don&#39;t want to be unfair to educational technologies. This is a general technology problem. I suppose though that what matters to me here is that the need for valuing reflection is higher in learning environments. We should more actively try to maximize the ability to be reflective while using technologies in learning environments.</p>

<p>With that in mind, we turn to a typical LMS and... ahhhhhhh!! oh. sweet. @#%@%. Why must I click so much and go through all this just to get a single assignment put into the system? Why does my gradebook run like Lotus 1-2-3 on vacuum tubes? Is there someone updating that database by hand and carrier pigeon? And why are cells not really spreadsheet cells and why is that number now different from what I entered and @#%@#% this is already stressful. I haven&#39;t even gotten to the student experience and it&#39;s already just... messy.</p>

<p>LMSes are easy targets, because they have to do too much for too many people. I&#39;m sympathetic to that problem, as it will always lead to compromises and bad outcomes. But I&#39;m genuinely curious whether there is software out there that people regularly use in education that fosters reflection rather than surface interactivity and “engagement”. My sense is that we&#39;re not used to thinking about technology in general in these terms, outside perhaps of some writing tools — and even there that&#39;s not necessarily how many people use them.</p>

<p>How do we make technologies that facilitate reflection? What would technology that helps with that look like? Or is reflection what we do when we take a break from technology?</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>

<p>Postscript:
One of my favorite methods in classroom teaching has long been a form of technological disruption. Not me, but similar to things I have often done: <a href="https://www.insidehighered.com/views/2016/03/18/teaching-students-new-ways-thinking-through-typewriter-essay">https://www.insidehighered.com/views/2016/03/18/teaching-students-new-ways-thinking-through-typewriter-essay</a>. Changing the technology we use for classroom things, whether going high-tech or low-tech, always leads to interesting insights and questioning of assumptions. In thinking about how to foster reflection through technology, I am thinking especially of how breaking from current technology is usually the source of reflection. Perhaps current technology is simply too present to allow space for reflection. But the digital tools I enjoy for writing or making music or sketching lead me to believe that this is a matter of habit and design choice more than anything else. Why can&#39;t edtech be zentech?</p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/can-technology-value-reflection-over-engagement</guid>
      <pubDate>Mon, 08 Nov 2021 14:45:24 +0000</pubDate>
    </item>
  </channel>
</rss>