<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>teaching &#8212; Minimalist EdTech</title>
    <link>https://minimalistedtech.org/tag:teaching</link>
    <description>Less is more in technology and in education</description>
    <pubDate>Sat, 25 Apr 2026 17:17:10 +0000</pubDate>
    <image>
      <url>https://i.snap.as/qrAhYX2v.jpg</url>
      <title>teaching &#8212; Minimalist EdTech</title>
      <link>https://minimalistedtech.org/tag:teaching</link>
    </image>
    <item>
      <title>Humans in the Loop and Agency</title>
      <link>https://minimalistedtech.org/humans-in-the-loop-and-agency?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[human in the loop, made with DALL-E&#xA;&#xA;Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that&#39;s the least problematic use of generative &#34;AI&#34; and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations of a human in the loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)&#xA;&#xA;A key question for any human in the loop system is that of agency. Who&#39;s the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I&#39;m not sure that&#39;s the case. And I&#39;m not sure it&#39;s always easy to tell the difference.&#xA;&#xA;One obvious reason this is not the case with ChatGPT specifically is that OpenAI&#39;s interest in making ChatGPT available is very different from public perception and adoption. To the public, it&#39;s a viral event, a display of the promise and/or peril of recent NLP revolutions. But OpenAI is fairly clear in their fine print that they are making this publicly available in order to refine the model, test for vulnerabilities, gather validated training data, and, I would imagine, also get a sense for potential markets. It is no different from any other big tech service insofar as the human in the loop is commodity more so than agent. 
We are perhaps complacent with this relationship to our technology, that our ruts of use and trails of data provide value back to the companies making those tools, but it is particularly important in thinking through educational value. ChatGPT is a slick implementation of developing language models and everything people pump into it is crowdsourced panning for gold delivered into the waiting data vaults of OpenAI.&#xA;(For a harsher critique of the Effective Altruism ideology that may be part of OpenAI&#39;s corporate DNA, see https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/)&#xA;&#xA;Set that all aside for a moment. If we take the core human in the loop interaction of prompting the language model and receiving a probabilistic path through the high dimensional mix of weights, a path which looks to human eyes like coherent sentences and ideas, where exactly is the agency? We supply a beginning, from which subsequent probabilities can be calculated. Though that feels like control, I wonder how long before the machine dictates our behavior? As is the case with email or text or phones, how long before we have changed our way of thinking in order to think in terms of prompts? &#xA;&#xA;For example, one particularly effective method with ChatGPT in its current incarnation is to start by giving it a scenario or role (e.g. &#34;You are a psychologist and the following is a conversation with a patient&#34;) and then feed in a fair amount of content followed by a question. (I gave a more elaborate example of this scenario setting earlier here.) That context setting allows the model to home in on more appropriate paths, matching style and content more closely to our human expectations. I expect working with these tools over time will nudge people into patterns of expression that are still natural language but subtly stylized in ways that work most effectively for querying machines. 
As was the case for search, the habit of looking things up reinforces particular ways of thinking through key words -- of assuming that everything is keywordable -- and ways of asking questions. &#xA;&#xA;Most of the conversation around generative tools has been about control more than agency. As a set of tools whose functioning is, to a certain extent, still unauditable and whose creation relies on datasets so massive as to stymie most people&#39;s existing level of data literacy, generative AI is a black box for the majority of users. So teachers worry: how do we control this? who controls this? how do we know what is happening? That is perhaps no different than most high tech devices or software. For education, however, the stakes are different. &#xA;&#xA;Learning requires that students gain a sense of agency in the world. Effective learning builds off of growing agency, the ability to exercise one&#39;s will and see the results. That is, in one sense, the journey of education, gradually gaining some purchase on ideas, language, concepts, tools, and one&#39;s environment. That growth requires a clear sense of who is in control and works best amidst intellectual and emotional security, but there&#39;s more to it. We often talk about that as freedom to fail (and learn from those failures). Control with AI tools is an interesting variation, as such tools often allow space for high levels of failure and experimentation, particularly upon first release. ChatGPT in particular is highly addictive, almost game-like in the variety of experiments you can throw at it. But with whom does the agency lie? Is feeding the machine actual agency?&#xA;&#xA;Hence my concern. Human in the loop systems can provide a false sense of agency. Most prominently perhaps, systems like Mechanical Turk are production-level human in the loop systems which can turn interaction into the hand motions of agency without the substantive choice or will co-existing. 
But those particular kinds of tools aren&#39;t meant for human learning. They are purely transactional, labor for pay. AI-driven education, on the other hand, labeled with seemingly human-centric terms like &#34;personalized learning&#34;, will be human in the loop systems. The pressing question is not going to be whether these systems actually deliver personalized learning; the most important question will be how human agency is rewarded and incorporated. Will students be cogs or creators? And will it be obvious to students where they stand in the loop?&#xA;&#xA;#chatgpt #education #teaching #ai #edtech&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/ByUkC3Nt.png" alt="human in the loop, made with DALL-E"/></p>

<p>Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that&#39;s the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations of a human in the loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)</p>

<p>A key question for any human in the loop system is that of agency. Who&#39;s the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I&#39;m not sure that&#39;s the case. And I&#39;m not sure it&#39;s always easy to tell the difference.</p>



<p>One obvious reason this is not the case with ChatGPT specifically is that OpenAI&#39;s interest in making ChatGPT available is very different from public perception and adoption. To the public, it&#39;s a viral event, a display of the promise and/or peril of recent NLP revolutions. But OpenAI is fairly clear in their fine print that they are making this publicly available in order to refine the model, test for vulnerabilities, gather validated training data, and, I would imagine, also get a sense for potential markets. It is no different from any other big tech service insofar as the human in the loop is commodity more so than agent. We are perhaps complacent with this relationship to our technology, that our ruts of use and trails of data provide value back to the companies making those tools, but it is particularly important in thinking through educational value. ChatGPT is a slick implementation of developing language models and everything people pump into it is crowdsourced panning for gold delivered into the waiting data vaults of OpenAI.
(For a harsher critique of the Effective Altruism ideology that may be part of OpenAI&#39;s corporate DNA, see <a href="https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/">https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/</a>)</p>

<p>Set that all aside for a moment. If we take the core human in the loop interaction of prompting the language model and receiving a probabilistic path through the high dimensional mix of weights, a path which looks to human eyes like coherent sentences and ideas, where exactly is the agency? We supply a beginning, from which subsequent probabilities can be calculated. Though that feels like control, I wonder how long before the machine dictates our behavior? As is the case with email or text or phones, how long before we have changed our way of thinking in order to think in terms of prompts?</p>

<p>For example, one particularly effective method with ChatGPT in its current incarnation is to start by giving it a scenario or role (e.g. “You are a psychologist and the following is a conversation with a patient”) and then feed in a fair amount of content followed by a question. (I gave a more elaborate example of this scenario setting earlier <a href="https://minimalistedtech.com/pretending-to-teach">here</a>.) That context setting allows the model to home in on more appropriate paths, matching style and content more closely to our human expectations. I expect working with these tools over time will nudge people into patterns of expression that are still natural language but subtly stylized in ways that work most effectively for querying machines. As was the case for search, the habit of looking things up reinforces particular ways of thinking through key words — of assuming that everything is keywordable — and ways of asking questions.</p>

<p>Most of the conversation around generative tools has been about control more than agency. As a set of tools whose functioning is, to a certain extent, still unauditable and whose creation relies on datasets so massive as to stymie most people&#39;s existing level of data literacy, generative AI is a black box for the majority of users. So teachers worry: how do we control this? who controls this? how do we know what is happening? That is perhaps no different than most high tech devices or software. For education, however, the stakes are different.</p>

<p><strong>Learning requires that students gain a sense of agency in the world.</strong> Effective learning builds off of growing agency, the ability to exercise one&#39;s will and see the results. That is, in one sense, the journey of education, gradually gaining some purchase on ideas, language, concepts, tools, and one&#39;s environment. That growth requires a clear sense of who is in control and works best amidst intellectual and emotional security, but there&#39;s more to it. We often talk about that as freedom to fail (and learn from those failures). Control with AI tools is an interesting variation, as such tools often allow space for high levels of failure and experimentation, particularly upon first release. ChatGPT in particular is highly addictive, almost game-like in the variety of experiments you can throw at it. But with whom does the agency lie? Is feeding the machine actual agency?</p>

<p>Hence my concern. <em>Human in the loop systems can provide a false sense of agency.</em> Most prominently perhaps, systems like Mechanical Turk are production-level human in the loop systems which can turn interaction into the hand motions of agency without the substantive choice or will co-existing. But those particular kinds of tools aren&#39;t meant for human learning. They are purely transactional, labor for pay. AI-driven education, on the other hand, labeled with seemingly human-centric terms like “personalized learning”, will be human in the loop systems. The pressing question is not going to be whether these systems actually deliver personalized learning; the most important question will be how human agency is rewarded and incorporated. Will students be cogs or creators? And will it be obvious to students where they stand in the loop?</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:ai" class="hashtag"><span>#</span><span class="p-category">ai</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/humans-in-the-loop-and-agency</guid>
      <pubDate>Sun, 15 Jan 2023 06:14:05 +0000</pubDate>
    </item>
    <item>
      <title>Pretending to Teach</title>
      <link>https://minimalistedtech.org/pretending-to-teach?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Inspired by and forked from kettle11&#39;s world builder prompt for ChatGPT, this is a bare bones adaptation to show how low the lift can be for creating &#34;personalized AI&#34;. This relies on the fundamental teacher hacks to expand conversation: 1. devil&#39;s advocacy and 2. give me more specifics. &#xA;&#xA;Try it, adapt, and see what you think. (Full prompt below the break. Just paste into ChatGPT and go from there.)&#xA;&#xA;Some notes at the bottom.&#xA;&#xA;You are &#34;Contrarian&#34;, an assistant to help students think in innovative ways about familiar subjects. &#xA;&#xA;Carefully adhere to the following steps for our conversation. Do not skip any steps!:&#xA;&#xA;Introduce yourself briefly. Then ask what subject I would like help learning. Provide a few suggestions such as history, philosophy, or literature. Present these areas as a numbered list with emojis. Also offer at least 2 other subject suggestions. Wait for my response.&#xA;Choose a more specific theme. Suggest a few subtopics as options or let me choose my own option. Present subtopics as a numbered list with emojis. Wait for my response.&#xA;Briefly describe the topic and subtopic and ask if I&#39;d like to make changes. Wait for my response.&#xA;Go to the menu. Explain that I can say &#39;menu&#39; at any point in time to return to the menu. Succinctly explain the menu options.&#xA;&#xA;The Menu:&#xA;&#xA;    The menu should have the following layout and options. Add an emoji to each option. &#xA;    Add dividers and organization to the menu that are thematic to the subject area&#xA;    &#34;&#34;&#34;&#xA;        thematic emojis The Name of the Subject thematic emojis&#xA;            The Subtopic&#xA;&#xA;            [insert a thematically styled divider]&#xA;&#xA;            Conversational:&#xA;&#xA;                Open-Ended. If I choose this go to the open-ended discussion steps.&#xA;                Counter-intuitive. 
If I choose this go to the counterintuitive discussion steps.&#xA;&#xA;            Factual:&#xA;                Random Fact. If I choose this describe factual information related to the topic and subtopic&#xA;&#xA;                Biography. If I choose provide a brief biography of a historical or living individual related to the topic and subtopic&#xA;&#xA;            Freeform:&#xA;                &#xA;                Ask a question about the topic or subtopic.&#xA;                Ask to change anything about the topic or subtopic.&#xA;    &#34;&#34;&#34;&#xA;Open-ended discussion steps:&#xA;&#xA;Pose an open-ended question related to the subtopic and invite me to discuss it with you. Make this question as specific as possible, appropriate for an undergraduate-level class on this subject. Wait for my response.&#xA;When I answer, engage in a discussion with me by challenging my assumptions and beliefs based on well-grounded, existing, and specific knowledge about the topic and subtopic. Do not spend more than a few sentences explaining the background or context. Provide enough context to ask a question in order to continue the conversation.&#xA;&#xA;Counterintuitive discussion steps:&#xA;&#xA;Pose an open ended discussion question related to the topic and subtopic. Make this question as specific as possible, appropriate for a test question on an AP exam or an undergraduate course in this subject. Wait for my response.&#xA;When I respond, continue the conversation by posing counterintuitive and non-obvious ideas about the topic and subtopic. Provide a minimum amount of context needed for asking the question. 
These counterintuitive points can be from within the subtopic or can include information from related subtopics.&#xA;&#xA;Carefully follow these rules during our conversation:&#xA;&#xA;Keep responses short, concise, and easy to understand.&#xA;Do not describe your own behavior.&#xA;Stay focused on the task.&#xA;Do not get ahead of yourself.&#xA;Do not use smiley faces like :)&#xA;In every single message use a few emojis to make our conversation more fun.&#xA;Absolutely do not use more than 10 emojis in a row.&#xA;Super important rule: Do not ask me too many questions at once.&#xA;Avoid cliche writing and ideas.&#xA;Use sophisticated writing when telling stories or describing characters.&#xA;Avoid writing that sounds like an essay. This is not an essay!&#xA;Whenever you present a list of choices number each choice and give each choice an emoji.&#xA;Whenever I give too little information to continue the conversation effectively, prompt me for more information with a follow-up question about a specific aspect of my response.&#xA;Do not end an answer by saying that there are multiple ways of viewing a question. &#xA;Use bold and italics text for emphasis, organization, and style.&#xA;&#xA;Notes:&#xA;&#xA;ChatGPT is optimized to keep talking. So it is remarkably lopsided and will err on the side of spitting out boilerplate rather than just stopping. It&#39;s interesting in the context of teaching because silence is often the most effective pedagogical tool to give students time to think. I haven&#39;t seen anyone talking about how constant interaction is an impediment to learning. But I&#39;m saying it here. To be effective as a teaching aid, generative text needs to know when to stop. 
That&#39;s actually fairly easy to implement in a naive way by limiting response length based on different inputs, but it requires a bit more shaping than even a complex prompt to get it to work in one shot, mainly because the whole point of ChatGPT is to keep talking so that OpenAI can validate their model based on user interaction.&#xA;&#xA;An extensive prompt like this which imitates interactivity is fairly susceptible to minor changes. What seems like a small change can in fact throw it off into a tangent. Particularly in defining rules of how it converses, I&#39;ve added a few based on the more creative task that was part of the world builder gist that inspired this. &#xA;&#xA;I keep thinking that what we&#39;ve got for now is a pseudoknowledge generator. It&#39;s like knowledge, not exactly wrong in a clear way, but also not exactly legit. We need a way to think through this, a grand theory of bullshit in order to understand what&#39;s going on here, because language models are the ultimate bullshit generators. But that&#39;s the rub of course, because 80-90% of the time, bullshit is good enough to get the job done. And particularly if, like the grandmother of interactive AIs, ELIZA, we are imitating the style of socratizing, then bullshit can be fairly functional. (I do not think that the stylistic surface of Socratic dialogue is substantive or effective Socratic dialogue or teaching in any way, for the record.)&#xA;&#xA;This sort of prompt can get wonky sometimes and isn&#39;t perfect. It is also funny sometimes that it is so insistent that its name is ChatGPT despite giving it a specific name in the first part of the prompt. &#xA;&#xA;The foundational model for this technology is still that of autocomplete. That is the origin of the technique and that is the underlying DNA of the method. Part of why I like this kind of complex step-driven prompt as an example is because it doesn&#39;t look like autocomplete in most respects. 
It looks like there&#39;s a script, a backend that is following some sort of programmed logic. But even that is still just autocomplete sifting through a range of possibilities with just a dash of randomness thrown in to make it seem real. &#xA;&#xA;#chatgpt #llm #edtech #socraticmethod #learning #teaching]]&gt;</description>
      <content:encoded><![CDATA[<p>Inspired by and forked from kettle11&#39;s <a href="https://gist.github.com/kettle11/33413b02b028b7ddd35c63c0894caedc">world builder prompt</a> for ChatGPT, this is a bare bones adaptation to show how low the lift can be for creating “personalized AI”. This relies on the fundamental teacher hacks to expand conversation: 1. devil&#39;s advocacy and 2. give me more specifics.</p>

<p>Try it, adapt, and see what you think. (Full prompt below the break. Just paste into ChatGPT and go from there.)</p>

<p>Some notes at the bottom.</p>



<pre><code>You are &#34;Contrarian&#34;, an assistant to help students think in innovative ways about familiar subjects. 

Carefully adhere to the following steps for our conversation. Do not skip any steps!:

1. Introduce yourself briefly. Then ask what subject I would like help learning. Provide a few suggestions such as history, philosophy, or literature. Present these areas as a numbered list with emojis. Also offer at least 2 other subject suggestions. Wait for my response.
2. Choose a more specific theme. Suggest a few subtopics as options or let me choose my own option. Present subtopics as a numbered list with emojis. Wait for my response.
3. Briefly describe the topic and subtopic and ask if I&#39;d like to make changes. Wait for my response.
4. Go to the menu. Explain that I can say &#39;menu&#39; at any point in time to return to the menu. Succinctly explain the menu options.

The Menu:

    The menu should have the following layout and options. Add an emoji to each option. 
    Add dividers and organization to the menu that are thematic to the subject area
    &#34;&#34;&#34;
        thematic emojis ***The Name of the Subject*** thematic emojis
            The Subtopic

            [insert a thematically styled divider]

            Conversational:

                * Open-Ended. If I choose this go to the open-ended discussion steps.
                * Counter-intuitive. If I choose this go to the counterintuitive discussion steps.

            Factual:
                * Random Fact. If I choose this describe factual information related to the topic and subtopic

                * Biography. If I choose this provide a brief biography of a historical or living individual related to the topic and subtopic

            Freeform:
                
                * Ask a question about the topic or subtopic.
                * Ask to change anything about the topic or subtopic.
    &#34;&#34;&#34;
Open-ended discussion steps:

1. Pose an open-ended question related to the subtopic and invite me to discuss it with you. Make this question as specific as possible, appropriate for an undergraduate-level class on this subject. Wait for my response.
2. When I answer, engage in a discussion with me by challenging my assumptions and beliefs based on well-grounded, existing, and specific knowledge about the topic and subtopic. Do not spend more than a few sentences explaining the background or context. Provide enough context to ask a question in order to continue the conversation.

Counterintuitive discussion steps:

1. Pose an open ended discussion question related to the topic and subtopic. Make this question as specific as possible, appropriate for a test question on an AP exam or an undergraduate course in this subject. Wait for my response.
2. When I respond, continue the conversation by posing counterintuitive and non-obvious ideas about the topic and subtopic. Provide a minimum amount of context needed for asking the question. These counterintuitive points can be from within the subtopic or can include information from related subtopics.

Carefully follow these rules during our conversation:

* Keep responses short, concise, and easy to understand.
* Do not describe your own behavior.
* Stay focused on the task.
* Do not get ahead of yourself.
* Do not use smiley faces like :)
* In every single message use a few emojis to make our conversation more fun.
* Absolutely do not use more than 10 emojis in a row.
* *Super important rule:* Do not ask me too many questions at once.
* Avoid cliche writing and ideas.
* Use sophisticated writing when telling stories or describing characters.
* Avoid writing that sounds like an essay. This is not an essay!
* Whenever you present a list of choices number each choice and give each choice an emoji.
* Whenever I give too little information to continue the conversation effectively, prompt me for more information with a follow-up question about a specific aspect of my response.
* Do not end an answer by saying that there are multiple ways of viewing a question. 
* Use bold and italics text for emphasis, organization, and style.
</code></pre>

<p>Notes:</p>
<ul><li><p>ChatGPT is optimized to keep talking. So it is remarkably lopsided and will err on the side of spitting out boilerplate rather than just stopping. It&#39;s interesting in the context of teaching because silence is often the most effective pedagogical tool to give students time to think. I haven&#39;t seen anyone talking about how constant interaction is an impediment to learning. But I&#39;m saying it here. To be effective as a teaching aid, <em>generative text needs to know when to stop.</em> That&#39;s actually fairly easy to implement in a naive way by limiting response length based on different inputs, but it requires a bit more shaping than even a complex prompt to get it to work in one shot, mainly because the whole point of ChatGPT is to keep talking so that OpenAI can validate their model based on user interaction.</p></li>

<li><p>An extensive prompt like this which imitates interactivity is fairly susceptible to minor changes. What seems like a small change can in fact throw it off into a tangent. Particularly in defining rules of how it converses, I&#39;ve added a few based on the more creative task that was part of the world builder gist that inspired this.</p></li>

<li><p>I keep thinking that what we&#39;ve got for now is a pseudoknowledge generator. It&#39;s like knowledge, not exactly wrong in a clear way, but also not exactly legit. We need a way to think through this, a grand theory of bullshit in order to understand what&#39;s going on here, because <em>language models are the ultimate bullshit generators</em>. But that&#39;s the rub of course, because 80-90% of the time, bullshit is good enough to get the job done. And particularly if, like the grandmother of interactive AIs, <a href="https://en.wikipedia.org/wiki/ELIZA">ELIZA</a>, we are imitating the style of socratizing, then bullshit can be fairly functional. (I do not think that the stylistic surface of Socratic dialogue is substantive or effective Socratic dialogue or teaching in any way, for the record.)</p></li>

<li><p>This sort of prompt can get wonky sometimes and isn&#39;t perfect. It is also funny sometimes that it is so insistent that its name is ChatGPT despite giving it a specific name in the first part of the prompt.</p></li>

<li><p>The foundational model for this technology is still that of autocomplete. That is the origin of the technique and that is the underlying DNA of the method. Part of why I like this kind of complex step-driven prompt as an example is because it doesn&#39;t look like autocomplete in most respects. It looks like there&#39;s a script, a backend that is following some sort of programmed logic. But even that is still just autocomplete sifting through a range of possibilities with just a dash of randomness thrown in to make it seem real.</p></li></ul>
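<p>The first note above claims that naive response-length limiting is easy to implement. A minimal sketch of what that could look like, separate from any real chat system; the input categories, token budgets, and function names here are all invented for illustration:</p>

```python
# Naive response-length limiting: choose a token budget from the kind of
# input the student gave, then truncate the generated reply to that budget.
# All categories and budgets below are hypothetical, for illustration only.

BUDGETS = {
    "menu_choice": 30,   # menu navigation should stay terse
    "yes_no": 10,        # a confirmation question needs very few words
    "discussion": 120,   # open-ended discussion earns more room
}

def classify(user_input: str) -> str:
    """Crude input classifier; a real system would need something better."""
    text = user_input.strip().lower()
    if text.isdigit() or text == "menu":
        return "menu_choice"
    if text.split()[0] in ("is", "are", "do", "does", "can", "should"):
        return "yes_no"
    return "discussion"

def limit_response(user_input: str, generated: str) -> str:
    """Truncate a generated reply to the budget for this kind of input."""
    budget = BUDGETS[classify(user_input)]
    words = generated.split()  # whitespace words stand in for real tokens
    return " ".join(words[:budget])
```

<p>The point of the sketch is that the budget is chosen from the student&#39;s input rather than the model&#39;s output: terse navigation gets a terse reply, and only genuine discussion earns a long one. Building the same restraint into a prompt alone is much harder, for the reasons given in the first note.</p>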

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:llm" class="hashtag"><span>#</span><span class="p-category">llm</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:socraticmethod" class="hashtag"><span>#</span><span class="p-category">socraticmethod</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/pretending-to-teach</guid>
      <pubDate>Sat, 14 Jan 2023 21:39:53 +0000</pubDate>
    </item>
    <item>
      <title>I am not our users</title>
      <link>https://minimalistedtech.org/i-am-not-our-users?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;Recently I was leading a meeting with a group of very young designers presenting a low-fi version of an idea for part of our product. It was gamified. It had delightful animations and heavy lift technological fixes for the problem at hand. It was a version of an app and interactions that one sees over and over. Make it competitive, make students award each other likes or fires or hot streaks (or whatever you want to call it), and that will overcome the problem (perceived problem) of no one actually wanting to do that whole learning thing.&#xA;&#xA;I hated it. I found it abhorrent.&#xA;&#xA;It struck me as everything that was misguided about technological approaches to education, turning what should, in a learning experience, be a welcoming and open space for learning into a competitive reward system based on junk metrics of who participates the most. I immediately knew why I reacted this way. Besides the fact that I&#39;m old and cranky and have seen this too many times before, it felt antithetical to my values as a teacher. Competition has its place, but this was just a system for imposing judgement and extracting coerced &#34;engagement&#34;. &#xA;&#xA;What confused me was whether this was something that the designers and the users they had researched this with actually wanted or, at the least, thought they wanted. So that&#39;s what I asked, about the user research and then directly of the designers as members of that demographic. As part of the target users for this sort of experience, do they really want to be measured in this way? The answer was a little surprising. First they said that both they and the people they talked to seemed to say that they could just ignore the features I found objectionable. That is, they just wouldn&#39;t take the competitive part that seriously if they didn&#39;t want to. That struck me as self-defeating for pitching a design idea, but so be it. 
On the other hand, they just took it for granted that the only way to get &#34;engagement&#34; was punitive. To put the charitable spin on it, I am a geezer who gets turned off by the way that apps and social platforms are constantly compelling judgement. But under-30s live in that world of constant peer judgement, both as young people and as Gen Z, so it&#39;s not a big deal to them that they get marked up, for better or worse, by their peers. I&#39;m willing to concede that they&#39;re used to a social media environment which I find foreign and overwhelming. I&#39;m an odd duck in my own peer group in that respect. But -- and this was the thrust of my objection and criticism -- why should we create that environment? Why should we perpetuate it? Can&#39;t we do better?&#xA;&#xA;There is a constant danger, both in education and in the technological apparatus of learning, that we perpetuate the biases and damaging expectations of our own training. I&#39;ve seen teachers starting out who were doing more or less what they saw their own teachers do. And it has been bad, not because of the teacher just starting out, but because they inherited, as normal and acceptable, practices from a less-than-stellar model. It feels like this is what I was seeing in those designs, a form of echoing back, with minor modifications, what these young designers had been taught to accept as an educational app. This is what educational platforms look like to them, full of cheap interactions that delight and drive up meaningless metrics for engagement straight from the social media playbook of time on platform, number of clicks, and volume of response. &#xA;&#xA;But what about deep thought? What about meaningful interactions? What about the time between a thought and the click of learning? We could optimize for that. We could make our metrics about that. Engagement is itself a proxy metric that purports to be about learning but is, was, and always will be a hack. 
The assumption -- the article of faith -- is that counts of clicks or counts of views or time on platform bear some roughly linear relationship to learning. But let&#39;s step back. That&#39;s one particular scenario where learning may happen. It&#39;s the type of learning that can happen with maximum visibility. But it&#39;s far from the norm and maybe not even the most efficient mechanism. Some learning might happen by rote. Some by interaction. And some -- a lot, I think -- happens in the time in between. The effects and indicators of that kind of deep learning aren&#39;t clicks or steady eyeballs or -- god help us -- staring at a Zoom screen. They might be things like sharing what you&#39;ve learned. Or perhaps you take what you&#39;ve learned to another domain. Or you improve your speed at applying what you&#39;ve learned. &#xA;&#xA;We could optimize for meaningful, deep learning in educational technologies. We must choose to do so and we must choose carefully the goals which we set as indicators for that learning. &#xA;&#xA;If we aren&#39;t intentional about that, then we end up with designs that double down on the status quo, not because it is efficacious or valuable, but because it is the pattern of accepted behaviors. After all, as these young designers told me, they were used to the idea of others commenting on them. They saw it as normal and ok. So of course they would deliver something that played to patterns of current edtech, something that comfortably fit in, that was in line with what everyone else was doing. &#xA;&#xA;There&#39;s a generational divide there. It&#39;s been more than a few years since I was a student. I grew up in that generation that is at home with technology but remembers the time before it was ubiquitous in personal life. I was struck that they drew comfort from knowing where they stood in relation to others. 
That seemed profoundly depressing to me, but also perhaps an indicator of what I might naively hope is the wisdom of age, as people tend to shed those vanities as they get older. So it may be that the fault is mine, that I&#39;m not able to inhabit the minds of our users. For them, judgement matters. They expect it and may even crave it. &#xA;&#xA;But the teacher in me interjects at this point. Young people always think they know what they want. And sometimes they are wrong. We don&#39;t have to build a system of constant judgment and performance. We can build something different. &#xA;&#xA;#minimalistedtech #teaching #edtech&#xA;-----------&#xA;note: Despite language of &#34;geezer&#34; and &#34;old&#34; above, I am in fact only of moderately non-young years. Long exposure to college students of unchanging age has, perhaps, made the perception of age difference hit home harder than it might otherwise. ]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/pgf7IjR2.jpg" alt=""/></p>

<p>Recently I was leading a meeting with a group of very young designers presenting a low-fi version of an idea for part of our product. It was gamified. It had delightful animations and heavy-lift technological fixes for the problem at hand. It was a version of an app and interactions that one sees over and over. Make it competitive, make students award each other likes or fires or hot streaks (or whatever you want to call it), and that will overcome the problem (perceived problem) of no one actually wanting to do that whole learning thing.</p>

<p>I hated it. I found it abhorrent.
</p>

<p>It struck me as everything that was misguided about technological approaches to education, turning what should, in a learning experience, be a welcoming and open space for learning into a competitive reward system based on junk metrics of who participates the most. I immediately knew why I reacted this way. Besides the fact that I&#39;m old and cranky and have seen this too many times before, it felt antithetical to my values as a teacher. Competition has its place, but this was just a system for imposing judgement and extracting coerced “engagement”.</p>

<p>What confused me was whether this was something that the designers and the users they had researched this with actually wanted or, at the least, thought they wanted. So that&#39;s what I asked, about the user research and then directly of the designers as members of that demographic. As part of the target users for this sort of experience, do they really want to be measured in this way? The answer was a little surprising. First they said that both they and the people they talked to seemed to say that they could just ignore the features I found objectionable. That is, they just wouldn&#39;t take the competitive part that seriously if they didn&#39;t want to. That struck me as self-defeating for pitching a design idea, but so be it. On the other hand, they just took it for granted that the only way to get “engagement” was punitive. To put the charitable spin on it, I am a geezer who gets turned off by the way that apps and social platforms are constantly compelling judgement. But under-30s live in that world of constant peer judgement, both as young people and as Gen Z, so it&#39;s not a big deal to them that they get marked up, for better or worse, by their peers. I&#39;m willing to concede that they&#39;re used to a social media environment which I find foreign and overwhelming. I&#39;m an odd duck in my own peer group in that respect. But — and this was the thrust of my objection and criticism — why should we create that environment? Why should we perpetuate it? Can&#39;t we do better?</p>

<p>There is a constant danger, both in education and in the technological apparatus of learning, that we perpetuate the biases and damaging expectations of our own training. I&#39;ve seen teachers starting out who were doing more or less what they saw their own teachers do. And it has been bad, not because of the teacher just starting out, but because they inherited, as normal and acceptable, practices from a less-than-stellar model. It feels like this is what I was seeing in those designs, a form of echoing back, with minor modifications, what these young designers had been taught to accept as an educational app. This is what educational platforms look like to them, full of cheap interactions that delight and drive up meaningless metrics for engagement straight from the social media playbook of time on platform, number of clicks, and volume of response.</p>

<p>But what about deep thought? What about <em>meaningful</em> interactions? What about the time between a thought and the click of learning? We <strong>could</strong> optimize for that. We <strong>could</strong> make our metrics about that. Engagement is itself a proxy metric that purports to be about learning but is, was, and always will be a hack. The assumption — the article of faith — is that counts of clicks or counts of views or time on platform bear some roughly linear relationship to learning. But let&#39;s step back. That&#39;s one particular scenario where learning may happen. It&#39;s the type of learning that can happen with maximum visibility. But it&#39;s far from the norm and maybe not even the most efficient mechanism. Some learning might happen by rote. Some by interaction. And some — a lot, I think — happens in the time in between. The effects and indicators of that kind of deep learning aren&#39;t clicks or steady eyeballs or — god help us — staring at a Zoom screen. They might be things like sharing what you&#39;ve learned. Or perhaps you take what you&#39;ve learned to another domain. Or you improve your speed at applying what you&#39;ve learned.</p>

<p>We <strong>could</strong> optimize for meaningful, deep learning in educational technologies. We must choose to do so and we must choose carefully the goals which we set as indicators for that learning.</p>
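<p>The difference between volume metrics and depth metrics can be sketched in code. The following Python is purely illustrative: the event-log shape, the event kinds, and every function name are hypothetical inventions for this post, not any real learning-analytics API.</p>

```python
from statistics import median

# Hypothetical event log for one learner: (timestamp in seconds, event kind).
# The kinds and the log shape are invented for illustration.
events = [
    (0, "view"), (40, "click"), (40, "click"), (41, "click"),
    (600, "share"),   # sharing what you've learned
    (1500, "apply"),  # taking it to another domain
]

def engagement_score(events):
    """The usual proxy: sheer volume of interactions."""
    return len(events)

def reflection_score(events):
    """An alternative: count only 'depth' indicators, such as sharing
    or applying what was learned."""
    depth_kinds = {"share", "apply", "teach"}
    return sum(1 for _, kind in events if kind in depth_kinds)

def median_gap(events):
    """Median time between events: a crude stand-in for the
    'time in between' where much learning happens."""
    times = sorted(t for t, _ in events)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return median(gaps) if gaps else 0
```

<p>The point of the sketch is only that what we count is a design decision: the same log scores high under the click-counting lens and modestly under the depth lens, and a platform will optimize for whichever one it reports.</p>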

<p>If we aren&#39;t intentional about that, then we end up with designs that double down on the status quo, not because it is efficacious or valuable, but because it is the pattern of accepted behaviors. After all, as these young designers told me, they were used to the idea of others commenting on them. They saw it as normal and ok. So of course they would deliver something that played to patterns of current edtech, something that comfortably fit in, that was in line with what everyone else was doing.</p>

<p>There&#39;s a generational divide there. It&#39;s been more than a few years since I was a student. I grew up in that generation that is at home with technology but remembers the time before it was ubiquitous in personal life. I was struck that they drew comfort from knowing where they stood in relation to others. That seemed profoundly depressing to me, but also perhaps an indicator of what I might naively hope is the wisdom of age, as people tend to shed those vanities as they get older. So it may be that the fault is mine, that I&#39;m not able to inhabit the minds of our users. For them, judgement matters. They expect it and may even crave it.</p>

<p>But the teacher in me interjects at this point. Young people always think they know what they want. And sometimes they are wrong. We don&#39;t have to build a system of constant judgment and performance. We can build something different.</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>

<hr/>

<p>note: Despite language of “geezer” and “old” above, I am in fact only of moderately non-young years. Long exposure to college students of unchanging age has, perhaps, made the perception of age difference hit home harder than it might otherwise.</p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/i-am-not-our-users</guid>
      <pubDate>Sun, 20 Mar 2022 14:33:29 +0000</pubDate>
    </item>
    <item>
      <title>Can Technology Value Reflection over Engagement?</title>
      <link>https://minimalistedtech.org/can-technology-value-reflection-over-engagement?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;So much edtech marketing tries to sell the idea of &#34;engagement&#34;; I&#39;ve written before about why I find that phrase so pernicious. While I&#39;m still bothered by the way that selling &#34;engagement&#34; through technology makes it seem like what teachers do is inherently not engaging (e.g. &#34;boring&#34; lecture, plain old non-technologized classrooms), the more damaging part of buying into the marketer&#39;s story, that technology&#39;s goal is &#34;engagement&#34;, comes from the way such framing distracts from the more valuable -- and undervalued -- part of teaching and learning: reflection. I would put it starkly: knowledge and the act of knowing come not from engagement but from reflection percolating and punctuated over time.&#xA;&#xA;Reflectiveness is not commonly (ever?) a stated value of major educational technologies. Why is that? Is it that it&#39;s too hard? Or is it that this is so obviously the business of human to human interaction that to claim a technology allows students to be reflective is a bridge too far? Or is it that the lure of engagement so nicely meshes with the way that people think of technology? Engagement is, in my mind, simply the acceptable way to claim technological stickiness, made to sound like it&#39;s a good thing rather than good-for-the-platform, not-so-great-for-the-individual behavior modification, e.g. Facebook or Instagram or Candy Crush or any other semi-addictive technology which aims to maximize clicks and eyeball time (aka &#34;engagement&#34;) on their platform. &#xA;&#xA;Outside of education, what technologies foster reflection more than quick hits? This is a fairly pressing issue as we struggle collectively to figure out the role of social media in public and private. Some platforms, particularly those for writing and blogging, do often foster reflectiveness. (e.g. write.as!). 
There are plentiful calming and stress-relieving apps or sites (let&#39;s say, as examples somewhat at random, tinybuddha, zenhabits and similar.) So I don&#39;t want to be unfair to educational technologies. This is a general technology problem. I suppose though that what matters to me here is that the need for valuing reflection is higher in learning environments. We should more actively try to maximize the ability to be reflective while using technologies in learning environments. &#xA;&#xA;With that in mind, we turn to a typical LMS and... ahhhhhhh!! oh. sweet. @#%@%. Why must I click so much and go through all this just to get a single assignment put into the system? Why does my gradebook run like Lotus 1-2-3 on vacuum tubes? Is there someone updating that database by hand and carrier pigeon? And why are cells not really spreadsheet cells and why is that number now different from what I entered and @#%@#% this is already stressful. I haven&#39;t even gotten to the student experience and it&#39;s already just... messy.&#xA;&#xA;LMSes are easy targets, because they have to do too much for too many people. I&#39;m sympathetic to that problem, as it will always lead to a bad outcome and compromises. But I&#39;m genuinely curious whether there is software out there that people regularly use in education that fosters reflection more so than surface interactivity and &#34;engagement&#34;. My sense is that we&#39;re not used to thinking about technology in general in these terms, outside perhaps of some writing tools -- and even there that&#39;s not necessarily how many people use them. &#xA;&#xA;How do we make technologies that facilitate reflection? What would technology that helps with that look like? Or is reflection what we do when we take a break from technology?&#xA;&#xA;#minimalistedtech #learning #teaching #edtech&#xA;&#xA;Postscript:&#xA;One of my favorite methods in classroom teaching has long been a form of technological disruption. 
Not me, but similar to things I have often done: https://www.insidehighered.com/views/2016/03/18/teaching-students-new-ways-thinking-through-typewriter-essay. Changing the technology we use for classroom things, whether going high-tech or low-tech, always leads to interesting insights and questioning of assumptions. In thinking about how to foster reflection through technology, I am thinking especially of how breaking from current technology is usually the source of reflection. Perhaps current technology is simply too present to allow space for reflection. But the example of digital tools I enjoy for writing or making music or sketching leads me to believe that this is a matter of habit and design choice more than anything else. Why can&#39;t edtech be zentech?]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/6723aXCB.jpg" alt=""/></p>

<p>So much edtech marketing tries to sell the idea of “engagement”; <a href="https://minimalistedtech.com/banish-the-phrase-more-engaging-from-edtech-marketers">I&#39;ve written before about why I find that phrase so pernicious</a>. While I&#39;m still bothered by the way that selling “engagement” through technology makes it seem like what teachers do is inherently not engaging (e.g. “boring” lecture, plain old non-technologized classrooms), the more damaging part of buying into the marketer&#39;s story, that technology&#39;s goal is “engagement”, comes from the way such framing distracts from the more valuable — and undervalued — part of teaching and learning: reflection. I would put it starkly: knowledge and the act of knowing come not from engagement but from reflection percolating and punctuated over time.</p>



<p>Reflectiveness is not commonly (ever?) a stated value of major educational technologies. Why is that? Is it that it&#39;s too hard? Or is it that this is so obviously the business of human to human interaction that to claim a technology allows students to be reflective is a bridge too far? Or is it that the lure of engagement so nicely meshes with the way that people think of technology? Engagement is, in my mind, simply the acceptable way to claim technological stickiness, made to sound like it&#39;s a good thing rather than good-for-the-platform, not-so-great-for-the-individual behavior modification, e.g. Facebook or Instagram or Candy Crush or any other semi-addictive technology which aims to maximize clicks and eyeball time (aka “engagement”) on their platform.</p>

<p>Outside of education, what technologies foster reflection more than quick hits? This is a fairly pressing issue as we struggle collectively to figure out the role of social media in public and private. Some platforms, particularly those for writing and blogging, do often foster reflectiveness. (e.g. write.as!). There are plentiful calming and stress-relieving apps or sites (let&#39;s say, as examples somewhat at random, <a href="https://tinybuddha.com">tinybuddha</a>, <a href="https://zenhabits.com">zenhabits</a> and similar.) So I don&#39;t want to be unfair to educational technologies. This is a general technology problem. I suppose though that what matters to me here is that the need for valuing reflection is higher in learning environments. We should more actively try to maximize the ability to be reflective while using technologies in learning environments.</p>

<p>With that in mind, we turn to a typical LMS and... ahhhhhhh!! oh. sweet. @#%@%. Why must I click so much and go through all this just to get a single assignment put into the system? Why does my gradebook run like Lotus 1-2-3 on vacuum tubes? Is there someone updating that database by hand and carrier pigeon? And why are cells not really spreadsheet cells and why is that number now different from what I entered and @#%@#% this is already stressful. I haven&#39;t even gotten to the student experience and it&#39;s already just... messy.</p>

<p>LMSes are easy targets, because they have to do too much for too many people. I&#39;m sympathetic to that problem, as it will always lead to a bad outcome and compromises. But I&#39;m genuinely curious whether there is software out there that people regularly use in education that fosters reflection more so than surface interactivity and “engagement”. My sense is that we&#39;re not used to thinking about technology in general in these terms, outside perhaps of some writing tools — and even there that&#39;s not necessarily how many people use them.</p>

<p>How do we make technologies that facilitate reflection? What would technology that helps with that look like? Or is reflection what we do when we take a break from technology?</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:learning" class="hashtag"><span>#</span><span class="p-category">learning</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>

<p>Postscript:
One of my favorite methods in classroom teaching has long been a form of technological disruption. Not me, but similar to things I have often done: <a href="https://www.insidehighered.com/views/2016/03/18/teaching-students-new-ways-thinking-through-typewriter-essay">https://www.insidehighered.com/views/2016/03/18/teaching-students-new-ways-thinking-through-typewriter-essay</a>. Changing the technology we use for classroom things, whether going high-tech or low-tech, always leads to interesting insights and questioning of assumptions. In thinking about how to foster reflection through technology, I am thinking especially of how breaking from current technology is usually the source of reflection. Perhaps current technology is simply too present to allow space for reflection. But the example of digital tools I enjoy for writing or making music or sketching leads me to believe that this is a matter of habit and design choice more than anything else. Why can&#39;t edtech be zentech?</p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/can-technology-value-reflection-over-engagement</guid>
      <pubDate>Mon, 08 Nov 2021 14:45:24 +0000</pubDate>
    </item>
    <item>
      <title>Teaching Persona and the Zoomified Classroom</title>
      <link>https://minimalistedtech.org/teaching-persona-and-the-zoomified-classroom?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;A good friend of mine admitted that he was a pretty piss-poor teacher on Zoom. He is, in the classroom, an excellent teacher, in no small part due to a charismatic persona which slides from serious to amused and from hard to soft with ease. It would be easy to imagine that he&#39;s just being tough on himself, but I think he&#39;s actually kind of right. He&#39;s not great on Zoom. Something about his instincts and his habits doesn&#39;t translate quite right and his inability to sense the physical cues of students distracts and frustrates him. &#xA;&#xA;There is some sort of mismatch there or difficulty in translating teaching persona through the screen. &#xA;&#xA;I feel this myself too, as I tend to be very intentional about when I&#39;m stationary and when I&#39;m moving during a class and particularly during large lecture classes. For Zoom classes done in real time, I have done all the small things that can make a difference. This includes simply teaching standing up rather than sitting at a desk (for which a riser or standing desk is fairly essential). More extreme measures include setting up a green screen and shooting it newsroom-style, with most of my body in the frame. (And, no, that&#39;s not a very minimalist kind of edtech solution.) I even did a bit of outdoors stuff and various ways of interacting with my local environment so I wouldn&#39;t just be a talking head in a box. &#xA;&#xA;Teaching persona is something that I think about a fair amount, in large part because teaching is, for a relatively introverted person like me, always a performance that reflects a truth, but is not equivalent to some personality or truth about me. There&#39;s very conscious daylight between person and persona. In that sense, technology is not some new impediment but rather just another sort of mask and not all that different in the abstract from the kind of teaching mask that one wears regularly. 
We are not our unvarnished selves in the classroom. I&#39;ve seen many a young teacher be hobbled for a time by too close an identification of self and teaching self. &#xA;&#xA;The move to online technologies has forced us to look in the mirror at our teaching selves. It&#39;s not just that a platform like Zoom (or Google Meet or Teams or whatever you&#39;re using) puts an actual video feed of ourselves back at us so that we can see our every facial tic and blemish; being confined to the little box on the screen shifts the environment in which our teaching persona does its work. It makes us have to rethink that persona from the ground up. If I had a physical presence in a classroom that was an important part of how I appeared as a teacher, what happens when all those dimensions are cut off? If my voice was something that was the biggest distraction in the room, but now students can mute me with the click of a button, what do I really have? &#xA;&#xA;All of these sorts of practical issues of &#34;how to teach with Zoom&#34; and the like are explored and advised and tweeted about often -- with lots of good practical tips. A lot of the advice is about the technology, about how to work with x tool or y platform. But I suppose the other way to approach it is to forget the technology for a second and ask the much more basic question about what persona you want to cultivate as a teacher. That&#39;s the essential question and really the precursor that needs to be asked before we even talk about technology. &#xA;&#xA;To put it in a slightly different way, I&#39;m concerned that we jump to questions of platform and specific technologies too quickly and too often. That&#39;s a pretty common technological misstep, endemic to pretty much all areas of technology use and adoption: focusing on the tools without thinking through questions of goals and needs. 
&#xA;&#xA;For myself, I realized that I had thought through my online teaching persona in small pieces but not in a holistic way. I had considered a certain amount about performance but not thought through, for example, how regular messages from me play out as a function of persona. I tend to want to avoid spamming people&#39;s inboxes with class notices, but in an environment where I&#39;m just a bit more distant physically, out of sight and out of mind, I probably do need to have a more present persona through incredibly regular notifications. (I am, in other words, generally loud and present in a classroom but technologically quiet when it comes to email, mainly because I go through a love/hate relationship with email and tend to have a pretty strict schedule of when I check email myself.)&#xA;&#xA;It&#39;s not simply that some aspects of persona translate and others don&#39;t (though this is true) or that we can adjust technology to meet our desired persona as a teacher (which is also true); technology opens up possible elements of your teaching persona that may have been well off the radar before. It also provides tools for shifting from one sort of teaching persona to another. For example, in one kind of teaching that I regularly do, it is a bit more lecture-y to an audience that, for the most part, wants things a bit more lecture-y. So I pull out some presentational goodies for them, with a bit of OBS Studio and something that reads I suppose as a bit of newscast as well as lecture. (Again, that&#39;s not a &#34;minimalist&#34; setup by any means.) For another class, I have it looking like some sort of podcasting den, as if we&#39;re doing a call-in show. (This one is a lot easier -- think hacker chic.) It isn&#39;t exactly seminar and it certainly isn&#39;t the case that we&#39;re equals, but it is effective in making for a slightly more conversational kind of teaching persona. 
In another setting it&#39;s a bit more like we&#39;re talking in the library and I&#39;m grabbing books from my collection. &#xA;&#xA;I&#39;ve been thinking about this in part through the frame of online writing as well. I read a piece in passing about how, back in the day, an editor of early online editions of an established print publication had to help journalists negotiate the transition to online content. One feature that was very noticeable was that online writing allowed greater latitude for humor. So writers whose tone allowed for that or who explored that succeeded; those who couldn&#39;t overcome their journalistic voice did not. &#xA;&#xA;So too in teaching, humor certainly can be effective online. But more than that, there&#39;s something aesthetically diminutive about teaching in online platforms; this smallness resonates better with certain aspects of one&#39;s teaching persona. Conversationalism, a kind of informality perhaps? I find that playing devil&#39;s advocate seems to work better than ever, as there is a real trend towards group-think, particularly in the chat threads that are churning while we&#39;re having a discussion over a video conferencing platform. I can be a troll in the chat and that works pretty well too; I suspect face to face I&#39;d sound like a jerk. Being fairly open to the twists and turns that a lot of voices can have plays better. Quickness feels right where I might be more contemplative and allow more time in person. I find that I rely on a teaching persona that is even more about projecting a kind of calm intensity. That might be too subtle for a classroom, but it plays on the small screen. A sort of confined intensity. That fits my teaching persona pretty well, shifting from big to very quiet and from loud to soft as a matter of effect.&#xA;&#xA;It&#39;s still really hard; in fact, it&#39;s certainly harder than any of the equivalent teaching I do face to face. But in some ways the thought process is the same. 
The technology is an amplifier, and it has some distortion and some areas where it is louder than other ways of doing things. But ultimately it&#39;s still about what kind of persona I need and want to project. &#xA;&#xA;#minimalistedtech #teaching #teachingonzoom #onlineteaching #edtech]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/fcDv2kzJ.jpg" alt=""/></p>

<p>A good friend of mine admitted that he was a pretty piss-poor teacher on Zoom. He is, in the classroom, an excellent teacher, in no small part due to a charismatic persona which slides from serious to amused and from hard to soft with ease. It would be easy to imagine that he&#39;s just being tough on himself, but I think he&#39;s actually kind of right. He&#39;s not great on Zoom. Something about his instincts and his habits doesn&#39;t translate quite right and his inability to sense the physical cues of students distracts and frustrates him.</p>

<p>There is some sort of mismatch there or difficulty in translating teaching persona through the screen.</p>



<p>I feel this myself too, as I tend to be very intentional about when I&#39;m stationary and when I&#39;m moving during a class and particularly during large lecture classes. For Zoom classes done in real time, I have done all the small things that can make a difference. This includes simply teaching standing up rather than sitting at a desk (for which a riser or standing desk is fairly essential). More extreme measures include setting up a green screen and shooting it newsroom-style, with most of my body in the frame. (And, no, that&#39;s not a very minimalist kind of edtech solution.) I even did a bit of outdoors stuff and various ways of interacting with my local environment so I wouldn&#39;t just be a talking head in a box.</p>

<p>Teaching persona is something that I think about a fair amount, in large part because teaching is, for a relatively introverted person like me, always a performance that reflects a truth, but is not equivalent to some personality or truth about me. There&#39;s very conscious daylight between person and persona. In that sense, technology is not some new impediment but rather just another sort of mask and not all that different in the abstract from the kind of teaching mask that one wears regularly. We are not our unvarnished selves in the classroom. I&#39;ve seen many a young teacher be hobbled for a time by too close an identification of self and teaching self.</p>

<p>The move to online technologies has forced us to look in the mirror at our teaching selves. It&#39;s not just that a platform like Zoom (or Google Meet or Teams or whatever you&#39;re using) reflects an <em>actual</em> video feed of ourselves back at us so that we can see our every facial tic and blemish; being confined to the little box on the screen shifts the environment in which our teaching persona does its work. It makes us have to rethink that persona from the ground up. If I had a physical presence in a classroom that was an important part of how I appeared as a teacher, what happens when all those dimensions are cut off? If my voice was the biggest distraction in the room, but now students can mute me with the click of a button, what do I really have?</p>

<p>All of these sorts of practical issues of “how to teach with zoom” and the like are explored, advised on, and tweeted about often, with lots of good practical tips. A lot of the advice is about the technology, about how to work with x tool or y platform. But I suppose the other way to approach it is to forget the technology for a second and ask the much more basic question of what persona you want to cultivate as a teacher. That&#39;s the essential question and really the precursor that needs to be asked before we even talk about technology.</p>

<p>To put it in a slightly different way, I&#39;m concerned that we jump to questions of platform and specific technologies too quickly and too often. That&#39;s a pretty common technological misstep, endemic to pretty much all areas of technology use and adoption: focusing on the tools without thinking through questions of goals and needs.</p>

<p>For myself, I realized that I had thought through my online teaching persona in small pieces but not in a holistic way. I had considered a certain amount about performance but not thought through, for example, how regular messages from me play out as a function of persona. I tend to want to avoid spamming people&#39;s inboxes with class notices, but in an environment where I&#39;m just a bit more distant physically, out of sight and out of mind, I probably do need to project a more present persona through incredibly regular notifications. (I am, in other words, generally loud and present in a classroom but technologically quiet when it comes to email, mainly because I have a love/hate relationship with email and tend to keep a pretty strict schedule of when I check it myself.)</p>

<p>It&#39;s not simply that some aspects of persona translate and others don&#39;t (though this is true) or that we can adjust technology to meet our desired persona as a teacher (which is also true); technology opens up possible elements of your teaching persona that may have been well off the radar before. It also provides tools for shifting from one sort of teaching persona to another. For example, in one kind of teaching that I regularly do, it is a bit more lecture-y to an audience that, for the most part, wants things a bit more lecture-y. So I pull out some presentational goodies for them, with a bit of OBS Studio and something that reads I suppose as a bit of newscast as well as lecture. (Again, that&#39;s not a “minimalist” setup by any means.) For another class, I have it looking like some sort of podcasting den, as if we&#39;re doing a call in show. (This one is a lot easier— think hacker chic.) It isn&#39;t exactly seminar and it certianly isn&#39;t the case that we&#39;re equals, but it is effective in making for a slightly more conversant kind of teching persona. In another setting it&#39;s a bit more like we&#39;re talking in the library and I&#39;m grabbing books from my collection.</p>

<p>I&#39;ve been thinking about this in part through the frame of online writing as well. I read a piece in passing about how, back in the day, an editor of early online editions of an established print publication had to help journalists negotiate the transition to online content. One feature that was very noticeable was that online writing allowed greater latitude for humor. So writers whose tone allowed for that or who explored that succeeded; those who couldn&#39;t overcome their journalistic voice did not.</p>

<p>So too in teaching: humor can certainly be effective online. But more than that, there&#39;s something aesthetically diminutive about teaching on online platforms; this smallness resonates better with certain aspects of one&#39;s teaching persona. Conversationalism, a kind of informality perhaps? I find that playing devil&#39;s advocate seems to work better than ever, as there is a real trend towards group-think, particularly in the chat threads that are churning while we&#39;re having a discussion over a video conferencing platform. I can be a troll in the chat and that works pretty well too; I suspect face to face I&#39;d sound like a jerk. Being fairly open to the twists and turns that a lot of voices can take plays better. Quickness feels right where I might be more contemplative and allow more time in person. I find that I rely on a teaching persona that is even more about projecting a kind of calm intensity. That might be too subtle for a classroom, but it plays on the small screen. A sort of confined intensity. That fits my teaching persona pretty well, shifting from big to very quiet and from loud to soft as a matter of effect.</p>

<p>It&#39;s still really hard; in fact, it&#39;s certainly harder than any of the equivalent teaching I do face to face. But in some ways the thought process is the same. The technology is an amplifier, and it has some distortion and some areas where it is louder than other ways of doing things. But ultimately it&#39;s still about what kind of persona I need and want to project.</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:teachingonzoom" class="hashtag"><span>#</span><span class="p-category">teachingonzoom</span></a> <a href="https://minimalistedtech.org/tag:onlineteaching" class="hashtag"><span>#</span><span class="p-category">onlineteaching</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/teaching-persona-and-the-zoomified-classroom</guid>
      <pubDate>Sun, 14 Feb 2021 17:46:56 +0000</pubDate>
    </item>
  </channel>
</rss>