Intentional Forgetting in Edtech
#forgetting #intentionalforgetting #minimalistedtech #edtechminimalism #edtech #privacy #surveillance
We need more forgetful educational technologies. The default mode is always record and preserve first, deal with data issues after that. Privacy policies are not sufficient. We need intentional forgetting in edtech. Here's why.
Do students know or care that technology is always watching?
This past year many of us have been participants in a grand experiment in surveillance, conducting classes on video meeting platforms like Zoom or Teams or Meet and, for good and noble reasons, recording those videos. I have myself recorded all my classes, both online and face to face, going back at least two years, and in more piecemeal fashion before that. My main use case was accessibility: allowing students who needed extra help with note-taking to access the class afterward and for longer, without having to struggle with note-taking during class sessions. Video technologies have pushed into our faces questions about data retention and student visibility on online platforms that have been there for a long time. There was, for example, vigorous debate in the pedagogical twitterverse (and elsewhere) about whether students should be required to have their cameras on during online class sessions. It is a defining feature — indeed, a selling feature — of most major edtech products, from LMS to single-purpose tool, that such tools can provide analytics about students. Trends change quickly with tech and, by extension, in edtech. In the space of five years my students seem to have gone from being completely unaware of the amount of data collected by learning platforms to being moderately aware that their logins are tracked, their reading behavior in online “textbooks” is recorded, and their actions in any tool are cataloged and analyzed; in most cases, it didn't seem to faze them. That concerned me, but the switch to video affected them more acutely: they could feel themselves being watched in real time. In certain ways, the ubiquitous passive surveillance was finally visible to them.
Many still didn't care and took it for granted. This reaction concerns me.
Video is in many ways easier to control and be intentional about. We get immediate feedback that we are being watched, and it feels like we can address issues immediately: turn off the camera; log in or log out and you're there or not. Things like an LMS or online textbook platforms are more subtle. I have at various points shown students the kind of data that teachers see about logins, page views, and the like in the major LMS platforms. They tend to be surprised that I might look at data like that, or use it to single out students who have, for example, not logged in to the course recently. I tend to think most of these “analytics” fall into the category of junk statistics. There is some signal at the extremes, i.e. students who log in rarely or never and, conversely, those who spend hours logged on and look at pages over and over. But that comes through pretty clearly in other areas of their course performance.
So what function do these metrics really serve? Cynically I might say that they are all marketing. It's something that can be sold as a form of “insight” into student behavior. It's a way to claim “engagement.” Hey look, you have all this visibility into what students are doing in online courses. And, sure, there can be ways that you can use that to understand certain trends, e.g. when students tend to complete assignments, how many submit stuff late and so forth. That's all well and good.
But is that benefit worth the effect of having students always being watched? It's not surveillance of the sort that has caused outrage about spying, as with various remote proctoring services. It's just login data. It seems somehow more innocuous. Or, one might argue, it's just the same stuff that happens with an Amazon Kindle or some other commercial product.
I am thinking more and more that those seemingly innocuous bits of data retention are a major problem in edtech. It is the ubiquitous surveillance that we take for granted and that is even sold as a benefit of edtech products (“analytics”, “insights”, etc.). But the utility of such surveillance remains unproven while its cost is too often underappreciated. Surveillance, even passive surveillance in the form of metrics and logs and “insights”, realigns power relationships in teaching and makes the process of learning dependent upon observation rather than dialogue and mutual meaning-making.
Thus my question: What would a forgetful edtech look like? What would it look like if you could reset things every day, or every week or every so often, if forgetting were built into technology for learning? Would it even be possible to build such a thing in our current commercial landscape? (Preview on the last question: I suspect not.)
The Forgetful Classroom
Educational technology demands that we think of classrooms as spaces for remembering and forgetting in a way that I suspect we wouldn't have even 10 or 15 years ago.
Imagine for a second that your classroom or lecture hall has cameras not just on you but on every student, all the time, recording for some future and as yet unknown use, whether you are in there or not. What would you make of that system? Would it seem like a good thing? Would you want to turn it off? Would you change your behavior and, more importantly, would students change theirs?
(It is true that at various points this ubiquitous use of video has been something that educational theorists and futurists have imagined as a positive. It's a staple of sci-fi to see automated learning that looks like some sort of AI-interactive surveillance system.)
There is, in the physical space, a rhythm and pattern of remembering and forgetting which not only does not translate to digital space but is radically distorted by working in digital space. In situations where you have the same classroom and it isn't shared by others, there's a way in which you can reset that space each day, while keeping reminders of what has been done in the past, visible markers of the previous day and weeks of assignments. In other scenarios, with shared space or in the higher-ed situation where typically you visit a classroom space only a few times each week, there is something of a reset each time.
There are a number of benefits to this pedagogical clean slate and stability, to the fact that your basic surroundings are the same. By contrast, in an online course built into an LMS for example, we might say that the interface is stable in a similar way. But there's a difference in terms of how we are tracked. Physical space doesn't record the traces of your behavior for a third party to dissect. It is not, fundamentally, a space of surveillance.
By contrast, anything online is, by default and by nature, a place of potential surveillance. It is built into the technology. It is the economic model for Big Tech. How can those values help but be baked into educational experiences built on those technologies?
What if, by default, all student interaction on a platform weren't logged? What if notifications were not the default, if constant automation were not the default? What if you just arrived at a menu and decided what to open up from there? Would that be so bad?
Further, what if everything were opt-in, where you have to trigger a specific action in order for anything to happen? No passive anything. Nothing happens that a human doesn't trigger intentionally.
We take for granted that educational technologies track our students and, to a certain extent, teachers and anyone else who interacts with the system. There are good technical and security reasons to log events and actions in computer systems; however, that paradigm doesn't have to apply at the level closest to users. Forgetful behaviors could be built into software:

- A clear data life cycle. It would, in my mind, be of great value to be able to say to students: this platform retains your information for the duration of the class, while we are working, but then it is archived for a brief period and deleted. A clear life cycle for everyone's data goes beyond a simple privacy statement (which is about use and abuse and sharing of data) and foregrounds the pedagogical purpose to which the data is being put.
- Opt-in by default. Default behavior which requires students to commit their data or information to the system rather than passively tracking everything they do. At the user level, the UX is entirely about intentional actions, never about passive surveillance.
- Drafts by default. In a typical system like an LMS, student actions on the platform are immediate. You submit an assignment — that's the main action. Drafting capabilities lie entirely with teachers. It is a non-trivial technical change but a significant ideological change to make educational platforms places that are about drafts of things. Tools like Google Docs work well in part because of that killer feature of auto-save.
As much as auto-save violates my idea about intentionality, I recognize that there is power in the way it turns everything into a working draft, because it remains changeable and editable. If such a tool also forgot your data at the end, and wasn't doing all manner of other unknown things with it along the way, it would fit the bill. (But of course, it's the big G, so they are most certainly taking your data for all sorts of purposes, if not right now, then down the road.)
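The first idea above, a declared life cycle for student data, can be sketched in a few lines. This is a hypothetical illustration (the `CourseRecord` type, the 30-day `ARCHIVE_WINDOW`, and the stage names are all assumptions, not any real LMS API):

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class CourseRecord:
    """A piece of student data tied to a course's lifespan (hypothetical)."""
    student: str
    payload: str
    course_end: date

# Assumed grace period before deletion; a real policy would state this openly.
ARCHIVE_WINDOW = timedelta(days=30)

def lifecycle_stage(record: CourseRecord, today: date) -> str:
    """Where a record sits in its life cycle: active, archived, or due for deletion."""
    if today <= record.course_end:
        return "active"
    if today <= record.course_end + ARCHIVE_WINDOW:
        return "archived"
    return "delete"

def purge(records: List[CourseRecord], today: date) -> List[CourseRecord]:
    """Forget everything that has passed its archive window."""
    return [r for r in records if lifecycle_stage(r, today) != "delete"]
```

The point is less the code than the contract: students could be shown exactly this logic and know when their work stops existing.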
I suspect there are more possibilities: forms of self-destructing data, or kinds of encryption that allow for more robust privacy. A more extreme example would give users control of their data through decentralization or federation.
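One established pattern for "self-destructing" data is crypto-shredding: encrypt each student's records with a per-student key, and "forget" by destroying the key, leaving only unreadable ciphertext behind. A toy, stdlib-only sketch of the idea (the one-time-pad encryption here is purely illustrative; a real system would use a vetted cryptography library):

```python
import secrets
from typing import Dict, Optional

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

class ForgetfulStore:
    """Toy crypto-shredding store: deleting a student's key makes
    their stored data unrecoverable, even if the blob lingers."""

    def __init__(self) -> None:
        self._keys: Dict[str, bytes] = {}
        self._blobs: Dict[str, bytes] = {}

    def put(self, student: str, data: bytes) -> None:
        # One-time pad sized to the data: illustration only.
        key = secrets.token_bytes(len(data))
        self._keys[student] = key
        self._blobs[student] = _xor(data, key)

    def get(self, student: str) -> Optional[bytes]:
        key = self._keys.get(student)
        if key is None:
            return None  # key destroyed: data is effectively gone
        return _xor(self._blobs[student], key)

    def forget(self, student: str) -> None:
        # "Forgetting" is destroying the key; the ciphertext can
        # sit in backups without revealing anything.
        self._keys.pop(student, None)
```

The appeal for edtech is that forgetting becomes a single, auditable act rather than a hunt through every backup.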
We need to imagine a more forgetful kind of edtech because the alternative is one that continues to walk in lock-step with the world of big tech, where even well-meaning initiatives by the major platforms are still built upon the assumption that the monetary value of people lies in the data they provide. That is inherent to the platform and business model of companies like Facebook and Google in toto, and of many others to significant degrees. (On this last point, I highly recommend reading anything by Jaron Lanier, and especially Ten Arguments for Deleting Your Social Media Accounts Right Now.)
In pedagogical terms, it is an important lesson from the field of memory studies that meaning requires forgetting. The rare individuals who have excessive and near-perfect memory (classically, S. in Luria's The Mind of a Mnemonist) are crippled in certain ways precisely because of their inability to forget. To understand in the present we must often forget parts of the past. Growth requires both retaining and leaving behind details. It is as true of personal memory as it is of collective memory. Where technologies seem to provide absolute memory, they are in fact failing us as media for making meaning.
Recently in the news: Florida's legislature approves, and to some extent encourages, students spying on their teachers in higher ed; meanwhile, student surveillance company Proctorio continues its ill-advised lawsuit against its critics. These are very different examples, but all part of the continuum of creeping classroom surveillance. It is a bad trend. Learning requires the freedom to make mistakes, room to experiment, and support for growth and the messiness that is education. The expectation and acceptance of surveillance, face to face or online through technology, runs counter to those values.
Memory isn't the same thing as privacy
Discussions about edtech surveillance and data logging (and about data retention in tech platforms generally) are often framed as issues of privacy. Data privacy policies must spell out how data will be used and for what purposes. While data retention is part of that kind of policy language, these policies are much more about making clear the commercial and non-commercial uses of data: a legal butt-covering to ensure that possible use cases have been enumerated, should anyone find out at a later date that their particular data has been used in some way not cataloged in the privacy policy.
The more robust policies of the GDPR are something I wish would gain traction in the U.S., but even those guidelines and principles are of a different sort than what I am describing here. (The policies around retaining only the minimal data necessary are hugely important steps in the right direction.) Educational technologies require not simply privacy practices (e.g. FERPA) but best practices around intentional forgetting. We do a disservice to students when only flimsy and piecemeal protections keep their younger selves, their learning selves, from leaving behind lasting traces that they don't control. That is not a problem only for their future, when they might be embarrassed by something they said on video when younger or when novices. It is a problem of the here and now, because knowing that you leave lasting traces, that you are being recorded, changes behavior.
It's easy to think of things like anonymity or forgetting as negatives, to imagine that we are losing something, taking away something that should otherwise be preserved. And in our present moment, where wearable technologies and the quantified self are sold as an obvious good and the inevitable direction of health tech, it's easy to think that tracking is inevitable. (Side note: measuring the value of self-quantification is complicated.) I don't think tracking has to be the norm forever, especially if we can articulate alternatives that provide other kinds of value beyond the user data that marketers and big tech companies want to extract and feed back into their platforms.
Paradoxically, forgetting is often the most important mechanism for making things meaningful, not just because you know that something must exist in a particular moment, but because you know that the arena for it that matters most is in your own memory, not offloaded onto the computer system. When I think of a forgetful kind of edtech, I think mostly about how forgetting might help create more meaningful experiences with technology. That might involve some combination of temporary anonymization, rolling windows of auto-deleting records, scrubbing information as early as possible. It's paradoxical because computing is, by nature, about storing bits and bytes and then doing something with those stored bits and bytes — it is all built on the mechanics of recording data. So thinking about forgetful edtech, or forgetful computing is an interesting problem. But the field of “intentional forgetting” is an important area of study (see, e.g. http://www.spp1921.de/index.html for the Intentional Forgetting in Organizations projects) and one which may have benefits for education. Many of the insights gleaned from outside education may have larger impacts if implemented for students, providing a direct counterpoint to the seemingly inevitable trend towards educational technologies that record everything, while at the same time opening the way to new, more meaningful educational experiences.
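A rolling window of auto-deleting records, one of the mechanisms mentioned above, could be as simple as scrubbing identity from events as they age out. A minimal sketch, assuming a hypothetical event log where each event is a dict with `time` and `student` fields, and an arbitrary seven-day window:

```python
from datetime import datetime, timedelta
from typing import Dict, List

# Assumed retention window; the point is that identity ages out quickly.
WINDOW = timedelta(days=7)

def scrub_old_events(events: List[Dict], now: datetime) -> List[Dict]:
    """Keep recent events intact; strip identity from anything older than
    the window, so the record of *what* happened outlives the record of *who*."""
    scrubbed = []
    for event in events:
        if now - event["time"] <= WINDOW:
            scrubbed.append(event)
        else:
            scrubbed.append({**event, "student": None})  # forget who, keep what
    return scrubbed
```

Run on a schedule, something like this would preserve the aggregate trends teachers find useful (when assignments get done, how often pages are viewed) while refusing to build a permanent dossier on any individual student.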
Postscript: This is a first attempt to think through this topic, but there is a lot more to say, particularly in light of the growing literature on intentional forgetting (which I have not referred to much in the above). More coming...