<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>lms — Minimalist EdTech</title>
    <link>https://minimalistedtech.org/tag:lms</link>
    <description>Less is more in technology and in education</description>
    <pubDate>Thu, 07 May 2026 08:54:52 +0000</pubDate>
    <image>
      <url>https://i.snap.as/qrAhYX2v.jpg</url>
      <title>lms — Minimalist EdTech</title>
      <link>https://minimalistedtech.org/tag:lms</link>
    </image>
    <item>
      <title>Invisible Constraints in the Classroom</title>
      <link>https://minimalistedtech.org/invisible-constraints-in-the-classroom?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;Invisible Constraints in the Classroom&#xA;&#xA;  Constraints expose the workings of technology. Errors and failures are invitations for critical assessment. Even if the technology does not work perfectly, the ways in which it failed, the exposing of invisible constraints, can prove successful. &#xA;&#xA;I have a perverse love of technological constraint. Constraint can give rise to innovation, inspiration, and, in the aesthetic of minimalist computing more generally, elegant solutions. But there are plenty of times where technology has constraints that we can&#39;t see. A lot of edtech is like this, from the user side, because the marketing is almost always about automating and making things easier. Edtech marketers constantly hide the constraints of their products from users. &#xA;&#xA;This is a mistake and a missed learning opportunity. &#xA;&#xA;All technologies have constraints. This isn&#39;t surprising; nor should it be something to run away from. The humble pencil cannot make indelible marks on every surface; but it can offer a portable, cheap, and widely available writing system. The pencil can&#39;t stay sharp indefinitely; but it can be renewed easily with the proper equipment. We take it for granted that we can see, transparently, some of the limitations of humble technologies while at the same time being well in control of addressing those constraints appropriately. &#xA;&#xA;When edtech is a black box, the job of examining constraints and understanding constraints is made more difficult. The myth of the frictionless, automated mechanical aids -- more the ever-shifting goalposts of the future than any real vision for the present -- skews our view of edtech in ways that make it seem both more fail-able and more powerful than it is. Particularly where we are prevented from seeing constraints, it can be surprising and frustrating to find barriers and walls where they might not have been expected. 
&#xA;&#xA;One common place to find unseen (and unexpected) constraints is in LMS gradebooks. The gradebook in Canvas has for many years looked, on the outside, like a spreadsheet. But under the hood it is not in fact a spreadsheet of any sort that might be familiar as a stand-alone program. There&#39;s logic and routines running behind the scenes and constraints on what you can enter in what otherwise appear to be cells. There are sound reasons for this system, but also many unexpected behaviors. For example, I had at one point set up my grading to involve negative points (for perverse reasons which need not be elaborated here). But there are strange behaviors below 0, as negative points in Canvas would round unexpectedly and also could only go down to -1. (So far as I recall. It may have been -10, but it certainly wasn&#39;t a full spread. One thread on this is here). So I couldn&#39;t do my grading system in Canvas in any easy way. I ran into similar problems with anything that was &#34;non-standard&#34; as a grading system, particularly when I tried to do various sorts of completion or spec grading. I learned quickly that under the hood there were of course various constraints that resulted from the way Canvas was set up. It was not only not a spreadsheet (which was fairly clear before) but also a system that would change values in ways that were necessary for the various automatic calculations and totals that the system was set up to prepare for you automatically. &#xA;&#xA;One could surely tell such tales of other LMS-es. (Blackboard, when I used that for many years, was most certainly prone to various unseen and unforeseen constraints in the gradebook.)&#xA;&#xA;This is not just a matter of transparency in code or in documentation. Rather, I think we should approach edtech attuned to the inevitability of invisible and hidden constraints. 
Further, when we find them, we should expose them and integrate those constraints into our teaching whenever illustrative.&#xA;&#xA;Any time a program or platform won&#39;t allow you to do something, that&#39;s a cause for reflection. Why is it that this expected behavior is not there? Don&#39;t just switch to doing it the way they say to. Why is it that you are being directed in this way? How is the platform modifying your behavior and actions?&#xA;&#xA;Constraints expose the workings of technology. Errors and failures are invitations for critical assessment. Even if the technology does not work perfectly, the ways in which it failed, the exposing of invisible constraints, can prove successful. &#xA;&#xA;In a large class I taught using a multi-platform clicker-like system, it worked well for some time; but then we started having a lot of problems with people&#39;s devices not logging in. The culprit was well beyond my control, in the wifi coverage of a very large lecture hall. This particular hall had an analogous problem with acoustics. Though beautiful as a space, it was crap for teaching. In the corners of the room students could barely hear much of anything, even with the full room sound system. The high ceilings made everything echo-y and hollow for most students past the 3rd row, and the distance between the lecture area and the rows of seats was significant. In short, the space was simply too large and spacious. Both failures were, in part, a source of frustration and I had to adjust my teaching accordingly. More small groups, more of me moving around the room. Both of those were positive exercises and made for dynamic lessons in many ways. But it was also a prompt for discussing and examining space, at both a human and a technological level. We didn&#39;t spend long on it, but it became a tangible illustration of big ideas from the course. 
And it framed the technological problems in a meaningful way so that we could move on quickly and not get hung up on the fact that we were constrained in particular ways. &#xA;&#xA;There&#39;s a purely pragmatic advantage too. At any level, if you frame tech failure as cause for reflection rather than frustration, then you will always look (and likely feel) more technologically adept in a classroom. You will seem to be transmuting lemons to lemonade. Where technology breaks your well-planned lesson, you assert your will again. Transparency about technological errors, and a spirit of troubleshooting and problem-solving, can model for students both how to work with technologies and how to work with failure. (All educational technology will, let me stress, always fail for x percent of a class y percent of the time, where x and y are both non-zero positive numbers.)&#xA;&#xA;Hidden constraints are out there. Our job is to expose them.&#xA;&#xA;#minimalistedtech #edtechminimalism #minimalistcomputing #lms #canvas&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<h1 id="invisible-constraints-in-the-classroom">Invisible Constraints in the Classroom</h1>

<p><img src="https://i.snap.as/TN33u8wb.jpg" alt=""/></p>

<blockquote><p>Constraints expose the workings of technology. Errors and failures are invitations for critical assessment. Even if the technology does not work perfectly, the ways in which it failed, the exposing of invisible constraints, can prove successful.</p></blockquote>

<p>I have a perverse love of technological constraint. Constraint can give rise to innovation, inspiration, and, in the aesthetic of <a href="https://en.wikipedia.org/wiki/Minimalism_(computing)">minimalist computing more generally, elegant solutions</a>. But there are plenty of times where technology has constraints that we can&#39;t see. A lot of edtech is like this, from the user side, because the marketing is almost always about automating and making things easier. Edtech marketers constantly hide the constraints of their products from users.</p>

<p>This is a mistake and a missed learning opportunity.</p>



<p>All technologies have constraints. This isn&#39;t surprising; nor should it be something to run away from. The humble pencil cannot make indelible marks on every surface; but it can offer a portable, cheap, and widely available writing system. The pencil can&#39;t stay sharp indefinitely; but it can be renewed easily with the proper equipment. We take it for granted that we can see, transparently, some of the limitations of humble technologies while at the same time being well in control of addressing those constraints appropriately.</p>

<p>When edtech is a black box, the job of examining constraints and understanding constraints is made more difficult. The myth of the frictionless, automated mechanical aids — more the ever-shifting goalposts of the future than any real vision for the present — skews our view of edtech in ways that make it seem both more fail-able and more powerful than it is. Particularly where we are prevented from seeing constraints, it can be surprising and frustrating to find barriers and walls where they might not have been expected.</p>

<p>One common place to find unseen (and unexpected) constraints is in LMS gradebooks. The gradebook in Canvas has for many years looked, on the outside, like a spreadsheet. But under the hood it is not in fact a spreadsheet of any sort that might be familiar as a stand-alone program. There&#39;s logic and routines running behind the scenes and constraints on what you can enter in what otherwise appear to be cells. There are sound reasons for this system, but also many unexpected behaviors. For example, I had at one point set up my grading to involve negative points (for perverse reasons which need not be elaborated here). But there are strange behaviors below 0, as negative points in Canvas would round unexpectedly and also could only go down to -1. (So far as I recall. It may have been -10, but it certainly wasn&#39;t a full spread. One thread on this is <a href="https://community.canvaslms.com/t5/Question-Forum/Rubric-with-Negative-Points/td-p/64021">here</a>). So I couldn&#39;t do my grading system in Canvas in any easy way. I ran into similar problems with anything that was “non-standard” as a grading system, particularly when I tried to do various sorts of completion or spec grading. I learned quickly that under the hood there were of course various constraints that resulted from the way Canvas was set up. It was not only not a spreadsheet (which was fairly clear before) but also a system that would change values in ways that were necessary for the various automatic calculations and totals that the system was set up to prepare for you automatically.</p>

<p>One could surely tell such tales of other LMS-es. (Blackboard, when I used that for many years, was most certainly prone to various unseen and unforeseen constraints in the gradebook.)</p>

<p>This is not just a matter of transparency in code or in documentation. Rather, I think we should approach edtech attuned to the inevitability of invisible and hidden constraints. Further, when we find them, we should expose them and integrate those constraints into our teaching whenever illustrative.</p>

<p>Any time a program or platform won&#39;t allow you to do something, that&#39;s a cause for reflection. Why is it that this expected behavior is not there? Don&#39;t just switch to doing it the way they say to. Why is it that you are being directed in this way? How is the platform modifying your behavior and actions?</p>

<p>Constraints expose the workings of technology. Errors and failures are invitations for critical assessment. Even if the technology does not work perfectly, the ways in which it failed, the exposing of invisible constraints, can prove successful.</p>

<p>In a large class I taught using a multi-platform clicker-like system, it worked well for some time; but then we started having a lot of problems with people&#39;s devices not logging in. The culprit was well beyond my control, in the wifi coverage of a very large lecture hall. This particular hall had an analogous problem with acoustics. Though beautiful as a space, it was crap for teaching. In the corners of the room students could barely hear much of anything, even with the full room sound system. The high ceilings made everything echo-y and hollow for most students past the 3rd row, and the distance between the lecture area and the rows of seats was significant. In short, the space was simply too large and spacious. Both failures were, in part, a source of frustration and I had to adjust my teaching accordingly. More small groups, more of me moving around the room. Both of those were positive exercises and made for dynamic lessons in many ways. But it was also a prompt for discussing and examining space, at both a human and a technological level. We didn&#39;t spend long on it, but it became a tangible illustration of big ideas from the course. And it framed the technological problems in a meaningful way so that we could move on quickly and not get hung up on the fact that we were constrained in particular ways.</p>

<p>There&#39;s a purely pragmatic advantage too. At any level, if you frame tech failure as cause for reflection rather than frustration, then you will always look (and likely feel) more technologically adept in a classroom. You will seem to be transmuting lemons to lemonade. Where technology breaks your well-planned lesson, you assert your will again. Transparency about technological errors, and a spirit of troubleshooting and problem-solving, can model for students both how to work with technologies and how to work with failure. (All educational technology will, let me stress, always fail for x percent of a class y percent of the time, where x and y are both non-zero positive numbers.)</p>

<p>Hidden constraints are out there. Our job is to expose them.</p>

<p><a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a> <a href="https://minimalistedtech.org/tag:edtechminimalism" class="hashtag"><span>#</span><span class="p-category">edtechminimalism</span></a> <a href="https://minimalistedtech.org/tag:minimalistcomputing" class="hashtag"><span>#</span><span class="p-category">minimalistcomputing</span></a> <a href="https://minimalistedtech.org/tag:lms" class="hashtag"><span>#</span><span class="p-category">lms</span></a> <a href="https://minimalistedtech.org/tag:canvas" class="hashtag"><span>#</span><span class="p-category">canvas</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/invisible-constraints-in-the-classroom</guid>
      <pubDate>Mon, 25 Jan 2021 19:41:33 +0000</pubDate>
    </item>
    <item>
      <title>Teaching Rant of the Day: auto-grading and sloppy test design</title>
      <link>https://minimalistedtech.org/teaching-rant-of-the-day-auto-grading-and-sloppy-test-design?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;Some problems of digital tools we take for granted in teaching emerge as side effects of usage; others are due to mis-using the tool. Auto-grading and quiz (in)capabilities of modern LMS-es fall into both categories: side effects and encouraging bad test design habits. &#xA;&#xA;Here&#39;s what I mean:&#xA;&#xA;Auto-grading byproducts: As with so many features of educational technology, trouble comes from trying to force our practice into the affordances of the product. Auto-grading doesn&#39;t work equally well on every type of content, of course. It works best for situations where there is one-to-one matching, where it&#39;s a simple lookup. Multiple choice is the easy case. Fill in the blank is a step messier, where you either have to supply many possible answers (anticipating what your students might say that is right enough even if not letter for letter identical to your answer) or specify some degree of fuzziness in the matching (e.g. don&#39;t count capitalization). Even fill in the blank phrases or short sentence responses, beyond a single word or two, are bound to add time to a teacher&#39;s workload either on the frontend in laying out the questions (and potential responses) or on the flipside when reviewing grades. And in the meantime, students are most certainly going to be annoyed/confused/angry to see a correct answer marked incorrectly because of a technological inability to match complex answers. &#xA;&#xA;Let me illustrate with a use case from a language class that I saw recently. The question asked for a translation of a simple sentence. But of course there are minor variations in translation that are substantively irrelevant but, for a computer, no longer identical answers. The teacher had to go through the quiz answers by hand anyway in order to correct things that were auto-graded incorrectly. In the meantime, the apparently instantaneous feedback to students was suspect until reviewed. 
Why add the extra step? What has been gained here?&#xA;&#xA;Fostering bad habits in test design: When all you&#39;ve got is a hammer, everything looks like a nail, right?  This is particularly true with auto-graded test builders that are constructed primarily as means to administer multiple choice assessments. Even when given the option of multiple test types, they are all essentially forms of matching a student input to a pre-determined teacher-entered &#34;answer&#34;. That may seem like it is simply the default thing that a &#34;test&#34; does, but if we step back and ask about assessment, well, then, no, this is only one way -- a very traditional way perhaps -- of doing assessment. It is a way of assessing a particular type of knowledge, usually atomistic, and one not particularly suited to assessing skills or synthetic knowledge. Done well, multiple choice questions can serve a wide variety of uses and I wouldn&#39;t deny that carefully thought out assessments of this sort can do a lot more than simply have students regurgitate some content. The problem is that there is a lot of work that has to be done, a lot of intentional and careful planning, that turns complex questions and issues into something that can be assessed by multiple choice tests. &#xA;&#xA;To continue the case of the language class. For a matching exercise, the autograder had been set up with a one-to-one key. Each item had exactly one answer. But for this content (Latin declensions), many items had multiple possible answers. So students again could put down a correct answer and have it marked wrong. At that point, how exactly does the time invested in wrangling the quiz into this digital form have a benefit over asking everything as an open-ended question evaluated by the teacher? 
I suspect that this particular teacher would have spent less time overall by simply asking the questions and having students submit a list of their answers on paper, in a text document, or as text fields in the online quiz. &#xA;&#xA;Fostering bad habits in students: Students get used to the idea that their test-taking software is unreliable and prone to errors, that they can do things right and have it counted as wrong. It also puts teacher errors right in their face. Nothing good comes of that, not simply because of undermining trust at all levels, but also because it gets students into a habit of failure. It may not be their failure, but it reinforces the idea that the end result of any exercise is some sort of not working. &#xA;&#xA;Here I would take an example from music teaching. I&#39;ve watched plenty of younger students learning an instrument struggle with a passage. (And, ok, I have done this plenty too.) They keep going too fast or simply blundering around with a complex bit. But they play it messed up, maybe curse at themselves, and then move on to the stuff that sounds better. As a very good teacher of mine pointed out, that means you&#39;re just practicing what it&#39;s like to play the passage badly. You want to practice it going well as much as possible, even if that means doing it at an incredibly slow speed or doing it in some very small bit. The lesson is simple. Don&#39;t practice your mistakes. That just gets you used to doing it wrong. Fix the mistakes by taking the smallest part that you can do and doing it slowly. Then repeat it, speed it up, and eventually enlarge it. Practice what it feels like for things to work. &#xA;&#xA;Frictionless technology&#xA;I suppose that&#39;s my concern with auto-grading: the promise of frictionless technology followed always by frustration of the technology in practice. Certainly the hype doesn&#39;t live up to reality. 
But there are other effects of now ubiquitous technologies like auto-grading and online quizzes. These are all forms of technological friction, forms that are difficult to control and alleviate. Maybe these frustrations are so obvious that everyone just takes them for granted. But that&#39;s part of the problem. Using such tools is an expectation, a norm, a habit. We need to question that norm and habit at every turn, not because of some Luddite impulse to throw them out, but because they need to be used in more thoughtful ways or, in many cases, not at all.  &#xA;&#xA;#assessment #autograding #lms #minimalistedtech]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/Wa2eGExx.jpg" alt=""/></p>

<p>Some problems of digital tools we take for granted in teaching emerge as side effects of usage; others are due to mis-using the tool. Auto-grading and quiz (in)capabilities of modern LMS-es fall into both categories: side effects and encouraging bad test design habits.</p>



<p>Here&#39;s what I mean:</p>

<p><em>Auto-grading byproducts</em>: As with so many features of educational technology, trouble comes from trying to force our practice into the affordances of the product. Auto-grading doesn&#39;t work equally well on every type of content, of course. It works best for situations where there is one-to-one matching, where it&#39;s a simple lookup. Multiple choice is the easy case. Fill in the blank is a step messier, where you either have to supply many possible answers (anticipating what your students might say that is right enough even if not letter for letter identical to your answer) or specify some degree of fuzziness in the matching (e.g. don&#39;t count capitalization). Even fill in the blank phrases or short sentence responses, beyond a single word or two, are bound to add time to a teacher&#39;s workload either on the frontend in laying out the questions (and potential responses) or on the flipside when reviewing grades. And in the meantime, students are most certainly going to be annoyed/confused/angry to see a correct answer marked incorrectly because of a technological inability to match complex answers.</p>

<p>Let me illustrate with a use case from a language class that I saw recently. The question asked for a translation of a simple sentence. But of course there are minor variations in translation that are substantively irrelevant but, for a computer, no longer identical answers. The teacher had to go through the quiz answers by hand anyway in order to correct things that were auto-graded incorrectly. In the meantime, the apparently instantaneous feedback to students was suspect until reviewed. Why add the extra step? What has been gained here?</p>
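<p>The lookup-style matching described above can be sketched in a few lines. This is a hypothetical illustration in Python; the normalization choices (lowercasing, stripping punctuation, collapsing whitespace) are my own assumptions, not any particular LMS&#39;s actual rules.</p>

```python
import re

def normalize(text: str) -> str:
    """Reduce an answer to a canonical form before comparing."""
    text = text.lower().strip()
    text = re.sub(r"[^\w\s]", "", text)   # strip punctuation
    text = re.sub(r"\s+", " ", text)      # collapse whitespace
    return text

def auto_grade(response: str, accepted: list[str]) -> bool:
    """Correct only if the response matches one pre-entered answer."""
    return normalize(response) in {normalize(a) for a in accepted}

# The teacher must anticipate every acceptable variant in advance;
# a substantively correct phrasing they did not list is marked wrong.
accepted = ["The farmer loves the girl.", "The farmer loves a girl."]
print(auto_grade("the farmer loves the girl", accepted))       # True
print(auto_grade("The farmer does love the girl.", accepted))  # False
```

<p>However fuzzy the matching rules, the grader can only compare against answers entered in advance, which is exactly why the teacher in this case ended up reviewing everything by hand anyway.</p>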

<p><em>Fostering bad habits in test design</em>: When all you&#39;ve got is a hammer, everything looks like a nail, right?  This is particularly true with auto-graded test builders that are constructed primarily as means to administer multiple choice assessments. Even when given the option of multiple test types, they are all essentially forms of matching a student input to a pre-determined teacher-entered “answer”. That may seem like it is simply the default thing that a “test” does, but if we step back and ask about assessment, well, then, no, this is only one way — a very traditional way perhaps — of doing assessment. It is a way of assessing a particular type of knowledge, usually atomistic, and one not particularly suited to assessing skills or synthetic knowledge. Done well, multiple choice questions can serve a wide variety of uses and I wouldn&#39;t deny that carefully thought out assessments of this sort can do a lot more than simply have students regurgitate some content. The problem is that there is a lot of work that has to be done, a lot of intentional and careful planning, that turns complex questions and issues into something that can be assessed by multiple choice tests.</p>

<p>To continue the case of the language class. For a matching exercise, the autograder had been set up with a one-to-one key. Each item had exactly one answer. But for this content (Latin declensions), many items had multiple possible answers. So students again could put down a correct answer and have it marked wrong. At that point, how exactly does the time invested in wrangling the quiz into this digital form have a benefit over asking everything as an open-ended question evaluated by the teacher? I suspect that this particular teacher would have spent less time <em>overall</em> by simply asking the questions and having students submit a list of their answers on paper, in a text document, or as text fields in the online quiz.</p>
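<p>The mismatch is easy to see in code. A hypothetical sketch (the Latin forms are standard paradigm facts, but the grader itself is illustrative, not any specific LMS&#39;s implementation): a one-to-one key stores a single answer per item, while the content actually calls for a set of acceptable answers.</p>

```python
# One-to-one key: each item maps to exactly one stored answer.
strict_key = {"puellae": "genitive singular"}

# Set-valued key: "puellae" is also dative singular and nominative
# plural, so several answers are genuinely correct.
full_key = {
    "puellae": {"genitive singular", "dative singular", "nominative plural"},
}

def grade_strict(item: str, answer: str) -> bool:
    """Mark correct only on an exact match with the single stored answer."""
    return strict_key[item] == answer.lower().strip()

def grade_full(item: str, answer: str) -> bool:
    """Mark correct if the answer is any of the acceptable forms."""
    return answer.lower().strip() in full_key[item]

print(grade_strict("puellae", "dative singular"))  # False: right answer, marked wrong
print(grade_full("puellae", "dative singular"))    # True
```

<p>Whether a given platform even offers a set-valued key varies; where it does not, the teacher is left correcting the autograder by hand, as above.</p>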

<p><em>Fostering bad habits in students</em>: Students get used to the idea that their test-taking software is unreliable and prone to errors, that they can do things right and have it counted as wrong. It also puts teacher errors right in their face. Nothing good comes of that, not simply because of undermining trust at all levels, but also because it gets students into a habit of failure. It may not be their failure, but it reinforces the idea that the end result of any exercise is some sort of not working.</p>

<p>Here I would take an example from music teaching. I&#39;ve watched plenty of younger students learning an instrument struggle with a passage. (And, ok, I have done this plenty too.) They keep going too fast or simply blundering around with a complex bit. But they play it messed up, maybe curse at themselves, and then move on to the stuff that sounds better. As a very good teacher of mine pointed out, that means you&#39;re just practicing what it&#39;s like to play the passage badly. You want to practice it going well as much as possible, even if that means doing it at an incredibly slow speed or doing it in some very small bit. The lesson is simple. Don&#39;t practice your mistakes. That just gets you used to doing it wrong. Fix the mistakes by taking the smallest part that you can do and doing it slowly. Then repeat it, speed it up, and eventually enlarge it. Practice what it feels like for things to work.</p>

<p><em>Frictionless technology</em>
I suppose that&#39;s my concern with auto-grading: the promise of frictionless technology followed always by frustration of the technology in practice. Certainly the hype doesn&#39;t live up to reality. But there are other effects of now ubiquitous technologies like auto-grading and online quizzes. These are all forms of technological friction, forms that are difficult to control and alleviate. Maybe these frustrations are so obvious that everyone just takes them for granted. But that&#39;s part of the problem. Using such tools is an expectation, a norm, a habit. We need to question that norm and habit at every turn, not because of some Luddite impulse to throw them out, but because they need to be used in more thoughtful ways or, in many cases, not at all.</p>

<p><a href="https://minimalistedtech.org/tag:assessment" class="hashtag"><span>#</span><span class="p-category">assessment</span></a> <a href="https://minimalistedtech.org/tag:autograding" class="hashtag"><span>#</span><span class="p-category">autograding</span></a> <a href="https://minimalistedtech.org/tag:lms" class="hashtag"><span>#</span><span class="p-category">lms</span></a> <a href="https://minimalistedtech.org/tag:minimalistedtech" class="hashtag"><span>#</span><span class="p-category">minimalistedtech</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/teaching-rant-of-the-day-auto-grading-and-sloppy-test-design</guid>
      <pubDate>Mon, 14 Dec 2020 18:35:58 +0000</pubDate>
    </item>
    <item>
      <title>LMS failure points</title>
      <link>https://minimalistedtech.org/lms-failure-points?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;These past months have been a stress test for academic technologies. Videoconferencing tools and LMS systems have had to do the maximum, especially for remote or hybrid learning, but across the board as things that might have been done face to face were offloaded to technology. Both in my own teaching and in watching my kids&#39; experience in K-12, there are some common threads of failure.&#xA;&#xA;Unclear guidelines for best practices: One area where teachers have been let down (I suspect) by the guidance they&#39;ve been given has been in the area of technological best practices. And I should say, I think this is simply one of those things that easily falls to the side when other matters press; I am not suggesting fault or blame, simply that this is something that needs to be done better. It&#39;s a perennial information technology problem. No one uses technology the way it was intended, despite lots of training; and there often aren&#39;t significant incentives to level up with technological tools.  My own students have long complained about the way that their instructors use the shared LMS (Canvas in this case) in wildly different ways. So they have a range of expectations to meet depending on the instructor. They have expectations of finding material in one place but then find it elsewhere; some faculty post due dates for when they should be started rather than completed (utterly strange). For my own kids (using Schoology) some teachers put assignments in the calendar, others post things via announcements, still others post pages in their main area. It is bewildering where to find things. &#xA;&#xA;The key thing here is that it isn&#39;t just choice. It&#39;s a failure of the software (which is opinionated but not nearly opinionated enough) and lack of guidance as to why one might use one method over the other. Consequently it&#39;s a patchwork. &#xA;&#xA;This is a familiar tech phenomenon. 
Everyone is reduced to the lowest common denominator when using an app or a platform as part of a team, either by the highest-status individual&#39;s use habits or by the team member who can only use the platform in one particular way. It happens with email chains when a member of the team can&#39;t use other tools to, for example, schedule meetings. It happens when working with shared documents. One person wants it in email and doesn&#39;t want to edit online and thus the whole team is forced to use that method. And it happens when organizations go looking for new tech tools to improve their workflow or some other process. How often is the existing software perfectly capable if only everyone would use it fully and expertly? &#xA;&#xA;The other issue here is that these practices are often formulated ad hoc. As things come up teachers are forced to find a place for a review sheet or a practice test or something like that and suddenly the well-manicured course site grows weeds in unexpected places. &#xA;&#xA;In theory the fix is simple: a) have an information plan from the beginning and b) put everything into assignments with due dates (because that&#39;s how most LMS-es expect content). Putting everything into assignments gives you fixed places for things and can easily be linked to static pages of content. So you can still have, for example, an exam review page that sits out there. But then when you put the assignment that says &#34;Exam&#34; (and it should be an assignment, not just a calendar notice), link the materials there. There are other solutions certainly and other ways to be consistent; it may just help to get outliers into one of a few obvious patterns of posting content. &#xA;&#xA;On the other hand, I&#39;d love to see an LMS that doesn&#39;t take it for granted that everyone just gets the structure of a class. What would it look like if an LMS was a bit more self-aware and verbose about how instructors have laid out material? 
I do this myself with a landing page that I create as the entry point for the class. It&#39;s a map of sorts to where all the content is kept in the LMS. It&#39;s easy to create and students always comment on how useful it is.&#xA;&#xA;Young students don&#39;t automatically know how to navigate an LMS: This is largely a K-5 issue, but has some applicability at all levels. While in theory the structure of an LMS is simple, in practice it is a mess of classes and activities and special activities. Younger students, no matter how facile with technology for their age, don&#39;t spend their days working with office technologies. &#34;Simple&#34; things like retrieving a document from a specific place in the LMS are not obvious; printing documents is not obvious; and saving a document as a pdf to email back to a teacher is most certainly not obvious. (Let&#39;s not even get into the whole issue of access to technology. One of the things I&#39;ve found most striking in my own teaching is the number of apparently well-off undergraduate students whose only &#34;computer&#34; is a smartphone. At the K-12 level, expecting even access to a smartphone is, as has been clear across the country this Fall, most certainly not to be taken for granted.) &#xA;&#xA;There are tools that work with this, but the Fall has, I think, exposed major failings of most LMS platforms for young students. Solutions might involve (and have involved) valiant efforts to teach students how to use these technologies, but that seems a waste of precious time. Perhaps simplifying what gets sent and how it gets sent is a possible improvement. The LMS in this case invites sloppy practice. Young students who need simple and clear structure get in an LMS a menu of options and places to get lost. Ditch it. &#xA;&#xA;Parental oversight of all scores all the time is not a good thing: I have never liked the helicopter aspect of LMS systems. There&#39;s something powerful about periodic review of things like grades. 
Beyond the fact that LMS systems burden teachers with the expectation of giving constantly-accessible feedback, for K-12 students in particular the way that the LMS robs them of agency and makes them constantly accountable to parental oversight or questioning runs counter to a pedagogical need to foster the freedom required for learning. Learning requires mistakes and it usually requires failures. The twist this term is that for remote learning parents have had an even more invasive window into the minute by minute drama of the actual classroom, supported and fostered by parental access to the LMS where we too as parents can view, download, and explore all the class materials. &#xA;&#xA;I know some parents like this access. Parents can help their kids and intervene as they think necessary. But I see no good in this. It stresses teachers who now have another audience and have to talk, in a way, both to students and parents at the same time. It can stress students and make them dependent where they otherwise might have had to be responsible themselves. It is dis-empowering at every level. &#xA;&#xA;Do you need an LMS?&#xA;I am probably an outlier. But I wonder constantly nowadays about whom the LMS serves. Is it for teachers? For students? The primary beneficiaries seem to me to be administrators and, for K-12, parents who need the salve of constant access to their students&#39; records. I might (grudgingly) admit that the selling point of an LMS is in bringing together materials from multiple classes at one access point. &#xA;&#xA;But is it worth all that hassle? If the purpose is to make materials available remotely, then you need two things: secure storage and, perhaps, some version of &#34;posting&#34; assignments. In that sense, strictly speaking for straightforward functional needs of teachers and students, is a password-protected dropbox/pcloud/cloud storage drive enough? &#xA;&#xA;But... &#34;Security! Analytics! We need to track these things! 
FERPA!&#34; one might hear.&#xA;&#xA;Bullshit. It&#39;s about goals. If your goal is to teach, then an LMS is a lot of extra hassle that you don&#39;t need to do that job. If you are a student and your goal is to learn, then an LMS is a bunch of infrastructure and clicking in between you and the content and communication you want. &#xA;&#xA;#lms #k12 #highered #minimaledtech ]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/Qd0twTeu.jpg" alt=""/></p>

<p>These past months have been a stress test for academic technologies. Videoconferencing tools and LMS systems have had to carry the maximum load, especially for remote or hybrid learning, but across the board too, as things that might have been done face to face were offloaded to technology. Both in my own teaching and in watching my kids&#39; experience in K-12, there are some common threads of failure.</p>



<p><em>Unclear guidelines for best practices</em>: One area where teachers have been let down (I suspect) is the guidance they&#39;ve been given on technological best practices. And I should say, I think this is simply one of those things that easily falls to the side when other matters press; I am not suggesting fault or blame, only that it needs to be done better. It&#39;s a perennial information technology problem: no one uses technology the way it was intended, despite lots of training, and there often aren&#39;t significant incentives to level up with technological tools. My own students have long complained that their instructors use the shared LMS (Canvas in this case) in wildly different ways, so they face a different set of expectations with each instructor. They expect to find material in one place but then find it elsewhere; some faculty post due dates for when assignments should be started rather than completed (utterly strange). For my own kids (using Schoology), some teachers put assignments in the calendar, others post things via announcements, and still others post pages in their main area. It is bewildering to figure out where to find things.</p>

<p>The key thing here is that it isn&#39;t just choice. It&#39;s a failure of the software (which is opinionated but not nearly opinionated enough) and a lack of guidance as to why one might use one method over another. The result is a patchwork.</p>

<p>This is a familiar tech phenomenon. When a team adopts an app or a platform, everyone is reduced to the lowest common denominator, set either by the highest-status individual&#39;s habits or by the team member who can only use the platform in one particular way. It happens with email chains, when one member of the team can&#39;t use other tools to, for example, schedule meetings. It happens with shared documents: one person wants everything in email and refuses to edit online, so the whole team is forced into that method. And it happens when organizations go looking for new tech tools to improve their workflow or some other process. How often is the existing software perfectly capable, if only everyone would use it fully and expertly?</p>

<p>The other issue here is that these practices are often formulated ad hoc. As things come up teachers are forced to find a place for a review sheet or a practice test or something like that and suddenly the well-manicured course site grows weeds in unexpected places.</p>

<p>In theory the fix is simple: a) have an information plan from the beginning and b) put everything into assignments with due dates (because that&#39;s how most LMSes expect content). Putting everything into assignments gives you fixed places for things, and assignments can easily link out to static pages of content. So you can still have, for example, an exam review page that sits out there. But when you post the assignment that says “Exam” (and it should be an assignment, not just a calendar notice), link the materials there. There are certainly other solutions and other ways to be consistent; it may just help to get outliers into one of a few obvious patterns of posting content.</p>
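<p>As a sketch of what (b) can look like in practice: Canvas-style LMSes accept assignments as structured payloads, so the “Exam” assignment can carry its review materials in its own description. The course URL and ISO date below are invented for illustration; this is a shape sketch, not a definitive Canvas integration.</p>

```python
import json

def exam_assignment_payload(due_at, review_url):
    """Build a Canvas-style assignment payload: the exam is a real
    assignment (not just a calendar notice), with the review page
    linked directly in its description."""
    return {
        "assignment": {
            "name": "Exam",
            "due_at": due_at,  # ISO 8601, e.g. "2020-12-18T15:00:00Z"
            "description": (
                '<p>In-class exam. Review materials: '
                f'<a href="{review_url}">exam review page</a></p>'
            ),
            "published": True,
        }
    }

# Hypothetical review-page URL standing in for a real course page.
payload = exam_assignment_payload(
    "2020-12-18T15:00:00Z",
    "https://example.instructure.com/courses/101/pages/exam-review",
)
# POSTing this to the LMS's assignments endpoint with an access token
# would create the assignment; the payload shape is the point here.
print(json.dumps(payload, indent=2))
```

<p>The design point is the coupling: because the exam is an assignment, it gets a fixed, predictable place, and the review page hangs off it rather than floating loose somewhere in the course site.</p>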

<p>On the other hand, I&#39;d love to see an LMS that doesn&#39;t take it for granted that everyone just gets the structure of a class. What would it look like if an LMS were a bit more self-aware and verbose about how instructors have laid out material? I do this myself with a landing page that I create as the entry point for the class. It&#39;s a map of sorts to where all the content is kept in the LMS. It&#39;s easy to create, and students always comment on how useful it is.</p>
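<p>The landing page itself needs nothing fancy; it is a list of labeled links. A minimal sketch (section names and relative URLs are made up here, not taken from any particular LMS) of generating such a map as plain HTML:</p>

```python
# A landing-page "map" for a course: one list of links telling
# students exactly where each kind of content lives.
# Section labels and URLs below are invented for illustration.
course_map = [
    ("Syllabus", "pages/syllabus"),
    ("Weekly assignments (everything due is here)", "assignments"),
    ("Readings and handouts", "files"),
    ("Announcements", "announcements"),
]

def landing_page(title, sections):
    items = "\n".join(
        f'  <li><a href="{url}">{label}</a></li>' for label, url in sections
    )
    return f"<h1>{title}</h1>\n<ul>\n{items}\n</ul>"

html = landing_page("Where everything lives in this course", course_map)
print(html)
```

<p>Paste the output into whatever “front page” slot the LMS offers; the value is in stating the layout once, explicitly, rather than hoping students infer it.</p>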

<p><em>Young students don&#39;t automatically know how to navigate an LMS</em>: This is largely a K-5 issue, but it has some applicability at all levels. While in theory the structure of an LMS is simple, in practice it is a mess of classes and activities and special activities. Younger students, no matter how facile with technology for their age, don&#39;t spend their days working with office technologies. “Simple” things like retrieving a document from a specific place in the LMS are not obvious; printing documents is not obvious; and saving a document as a PDF to email back to a teacher is most certainly not obvious. (Let&#39;s not even get into the whole issue of access to technology. One of the things I&#39;ve found most striking in my own teaching is the number of apparently well-off undergraduate students whose only “computer” is a smartphone. At the K-12 level, as has been clear across the country this Fall, even access to a smartphone is most certainly not to be taken for granted.)</p>

<p>There are tools that try to address this, but the Fall has, I think, exposed major failings of most LMS platforms for young students. Solutions might involve (and have involved) valiant efforts to teach students how to use these technologies, but that seems a waste of precious time. Simplifying what gets sent, and how it gets sent, would be a better improvement. The LMS in this case invites sloppy practice: young students who need simple and clear structure get, in an LMS, a menu of options and places to get lost. Ditch it.</p>

<p><em>Parental oversight of all scores all the time is not a good thing</em>: I have never liked the helicopter aspect of LMS systems. There&#39;s something powerful about periodic review of things like grades. Beyond the fact that LMS systems burden teachers with the expectation of constantly accessible feedback, for K-12 students in particular the way the LMS robs them of agency, making them constantly accountable to parental oversight or questioning, runs counter to the pedagogical need to foster the freedom required for learning. Learning requires mistakes, and it usually requires failures. The twist this term is that with remote learning parents have had an even more invasive window into the minute-by-minute drama of the actual classroom, supported and fostered by parental access to the LMS, where we too as parents can view, download, and explore all the class materials.</p>

<p>I know some parents like this access. Parents can help their kids and intervene as they think necessary. But I see no good in this. It stresses teachers, who now have another audience and have to talk, in a way, to students and parents at the same time. It can stress students and make them dependent where they otherwise might have had to be responsible for themselves. It is disempowering at every level.</p>

<h1 id="do-you-need-an-lms">Do you need an LMS?</h1>

<p>I am probably an outlier. But I wonder constantly nowadays about whom the LMS serves. Is it for teachers? For students? The primary beneficiaries seem to me to be administrators and, for K-12, parents who need the salve of constant access to their children&#39;s records. I might (grudgingly) admit that the selling point of an LMS is that it brings together materials from multiple classes at one access point.</p>

<p>But is it worth all that hassle? If the purpose is to make materials available remotely, then you need two things: secure storage and, perhaps, some version of “posting” assignments. Strictly speaking, for the straightforward functional needs of teachers and students, is a password-protected dropbox/pcloud/cloud storage drive enough?</p>
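<p>As a thought experiment, the “storage plus posting” core really is small. A sketch (the folder layout and file names are invented, and a throwaway temp directory stands in for the password-protected class share) of turning a shared folder into a dated list of posted materials:</p>

```python
from pathlib import Path
import datetime
import tempfile

def post_index(folder: Path) -> str:
    """List the PDFs in a shared folder with the date each was
    posted (its modification time) -- most of the "posting" an
    LMS really does, in a few lines."""
    rows = []
    for f in sorted(folder.glob("*.pdf")):
        posted = datetime.date.fromtimestamp(f.stat().st_mtime)
        rows.append(f"{posted}  {f.name}")
    return "\n".join(rows)

# Demo against a throwaway folder standing in for the class share.
share = Path(tempfile.mkdtemp())
(share / "week1-reading.pdf").write_bytes(b"%PDF-1.4")
(share / "exam-review.pdf").write_bytes(b"%PDF-1.4")
print(post_index(share))
```

<p>Everything else an LMS adds, on this view, is the part that serves someone other than the teacher or the student.</p>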

<p>But... “Security! Analytics! We need to track these things! FERPA!” one might hear.</p>

<p>Bullshit. It&#39;s about goals. If your goal is to <em>teach</em>, then an LMS is a lot of extra hassle that you don&#39;t need to do that job. If you are a student and your goal is to <em>learn</em>, then an LMS is a bunch of infrastructure and clicking in between you and the content and communication you want.</p>

<p><a href="https://minimalistedtech.org/tag:lms" class="hashtag"><span>#</span><span class="p-category">lms</span></a> <a href="https://minimalistedtech.org/tag:k12" class="hashtag"><span>#</span><span class="p-category">k12</span></a> <a href="https://minimalistedtech.org/tag:highered" class="hashtag"><span>#</span><span class="p-category">highered</span></a> <a href="https://minimalistedtech.org/tag:minimaledtech" class="hashtag"><span>#</span><span class="p-category">minimaledtech</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/lms-failure-points</guid>
      <pubDate>Fri, 11 Dec 2020 15:16:22 +0000</pubDate>
    </item>
  </channel>
</rss>