Alternatives to Surveillance Edtech: Students as Publishers

The case against surveillance edtech like Proctorio isn't really about privacy; in pedagogical terms, it's about automation and agency.

There's been a fair amount of press and Twitter feather-ruffling about surveillance edtech this past year. Critics of these tools have rightly highlighted dubious data practices and the creepiness and invasiveness of tools like Proctorio, Honorlock and all the rest. (I wrote about it a few months ago here.) But I wonder whether the privacy framing, despite being the ethical high ground, is not particularly persuasive as an argument to admins, to the public at large, and to many teachers and students. After all, in the face of the ubiquitous tracking of Big Tech, most people are simply resigned to the fact that their privacy is not a given. Or, rather, the trade-off is rendered socially acceptable. Convenience today beats the future, unknown danger of someone else holding your data or spying on you, so the thinking goes. Out of sight, out of mind. So long as that invasion of privacy doesn't cause a specific, tangible problem, the trade-off of privacy for convenience can be rationalized or simply ignored. In America in particular, privacy is an issue to which many people react with resigned passivity.

Now, for some (me included), that privacy argument is compelling. The ethical argument against surveillance edtech is also compelling: how could I promote and facilitate that kind of invasion of my students' privacy? It's just obviously the wrong direction and not worth the cost for the (purported) benefit.

But I think we need a more compelling pragmatic and pedagogical argument. The case against surveillance edtech needs to focus on the fact that we can teach better without that sort of software. The sales pitch of such software is that it solves a particular problem (e.g. cheating); it's a bit like home security services, relying on fear. Proctoring software depends on your fear, as a teacher, that students are doing something wrong and that you need this “solution” to protect yourself from being tricked.

But let's step back from their framing of the problem as one of cheating or verifying identity. What about the project of learning? Surveillance software is, in fact, highly counterproductive to the project of teaching and learning. It is not a “tool” or helper; it is a hindrance and an unnecessary barrier. It doesn't actually help students learn or teachers teach.

That kind of argument requires us to reject the assumption that automation is an obvious good and focus instead on technology that promotes and requires the agency of teachers and students. We need not just a critique of surveillance software on ethical terms, but also a critique of it on pedagogical terms. We need to foster classroom design that pushes back against the kinds of pedagogical assumptions underlying this sort of tool. That means, yes, rejecting the notion of “cop shit” in the classroom. It also means doubling down on mechanisms for promoting student and teacher agency. We need to talk about students as active publishers of their own content, and we must reject entirely the idea that anyone in a classroom should be subject to passive data gathering in any way.

A distinction from the everyday world of how we deal with digital files might be worth thinking about. A lot of programs will autosave your work. Word processors, Google Docs and its imitators, spreadsheets, databases: any number of programs will save a copy of what you are working on while you work. This is a convenient feature in that context, but it is also one that, subtly, makes us just a bit passive about versions or drafts of our work. On the other hand, if you work with code, you likely use some sort of version control system like git. Git, whether you pair it with GitHub or a self-hosted alternative, works by forcing a lot of steps on you (at least it will seem that way at first). You have to mark which files you've updated (git add), write a short note saying what's changed and signal your intention to really add this to the changes being tracked (git commit), and then you might give a command to move the files from your computer to a shared copy online (git push). There's something very intentional about all of that. It requires just a bit more agency in the process of saving a file.
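To make that rhythm concrete, here is a minimal sketch in Python of those same three steps, assuming git is installed and a repository with a remote is already set up. The publish_work function and the essay-draft.md file name are purely illustrative, not part of any particular tool.

```python
import subprocess

def publish_work(files, message):
    """Mirror the three deliberate steps git asks for: stage, commit, push.
    Nothing leaves this machine until each step is chosen explicitly."""
    subprocess.run(["git", "add", *files], check=True)            # mark which files changed
    subprocess.run(["git", "commit", "-m", message], check=True)  # say what changed, and why
    subprocess.run(["git", "push"], check=True)                   # only now share it with the remote copy

# Hypothetical use: a student decides a draft is ready to share.
# publish_work(["essay-draft.md"], "Second draft: reworked the introduction")
```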

Now, on the specifics, there are obviously ways to do version control in a more automated way and, conversely, ways to make word processors and the like less automatic in their saving. My point isn't about the details of those particular programs. My point is that we have a choice about the degree of automation and control we want to exert. What most edtech does, and surveillance edtech in particular, is remove or obscure much of that choice and control over automation. Take any platform you use regularly for teaching or in the classroom. How much control do you have over the data it collects? How obvious is it when it is tracking something and when it isn't?

This can be different. Automation is not necessarily the enemy of good pedagogy, but it does require careful thought and design. In the case of materials that students submit to a class or their actions in a class, constant surveillance, whether through proctoring software or, for synchronous classes, the constant lens of Zoom and its ilk, robs students of a certain degree of agency. (This is in part why, when given the choice, students may be so eager to turn off their cameras or give you a view that is not all that great. It is, like anything, a small exertion of control within a system that is not entirely under their control.)

What would it look like if the majority of edtech were data neutral and forgetful by default? What if we only used technologies that not only did not track or store data (hence, ethically “private”), but went a step further and were, by default, anonymous until a student or teacher chooses to make something public? Maybe that seems unthinkable, since the default mode for edtech is essentially a list of students, authenticated and identified so that they can be tracked as they submit assignments or complete activities or the like. Indeed, for many edtech products, integrating sign-on is one of the thornier problems to navigate (i.e. does it integrate with an LMS, is it SSO, etc.). Put another way, with most platforms you have to opt in to find ways of allowing students to be anonymous. You can, for example, have a survey in an LMS that might not be graded. Or you might have a wiki that students can collaborate on without looking at the names. Or you might hide names when grading. Or you likely just need to use a different platform (but then we run into FERPA issues, of course, because you as a teacher are held to a standard of data sharing that the big tech companies can negotiate their way around). Edtech is designed so that the identity of students is front and center. It is designed with passive surveillance as its primary mode.

What if students had to “push” their commits? What if every edtech product made student activities, by default, private and self-destructing or forgettable? The difference here is between publishing and surveillance. Students have to publish to an audience they define — the teacher, their class, etc.
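To make the contrast concrete, here is a rough sketch of what “private and forgettable by default” might look like as a data model. It is not the API of any existing platform; the Draft class, its publish step, and the audience names are all hypothetical. The point is only that work stays anonymous and unshared until the student deliberately publishes it.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Draft:
    """A piece of student work: unattributed and unshared until explicitly published."""
    text: str
    author: Optional[str] = None                            # stays None (anonymous) until the student opts in
    published_to: List[str] = field(default_factory=list)   # audiences the student has chosen to share with

    def publish(self, author: str, audience: str) -> None:
        """The student's deliberate 'push': attach a name and name an audience."""
        self.author = author
        self.published_to.append(audience)

    def forget(self) -> None:
        """The default fate of unpublished work: nothing is retained."""
        if not self.published_to:
            self.text = ""

# Work exists privately until the student decides otherwise:
draft = Draft("A first attempt at the essay prompt.")
draft.publish(author="Student A", audience="instructor")
```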

This would of course mean very different things at different levels. But in principle what I'm getting at is that we already have ways of thinking about how students “participate” in a class. That is the pedagogical paradigm that needs to stand up against the imposition of a surveillance paradigm.

One other point here. This is ultimately all about grades. Why do we need so desperately to pin down identity and live in fear of cheating? It's about evaluation and assessment, about the stakes of grading. That's a topic for another post, but so long as we are wedded to outmoded grading practices as our way of assessing learning, we're stuck with systems that trend toward invasive tracking when it comes to technology.

Finally, could we separate out process and product? Is part of confronting surveillance tech investing in tools that let students work anonymously, without surveillance, during the process? Students can work on process knowing that that material is shared only with their intention and permission. They can have multiple products, some failed perhaps, and commit only the best one.
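Sticking with the hypothetical sketch above, the same idea in miniature: a private workspace holds any number of attempts, and only the one the student chooses ever reaches an audience. The Workspace class and its methods are illustrative, not an existing tool.

```python
class Workspace:
    """A private space for process: many attempts, none shared by default."""
    def __init__(self):
        self.attempts = []   # failed or abandoned drafts live here, unseen by anyone else
        self.shared = {}     # audience -> the one product the student chose to publish

    def try_draft(self, text):
        """Process: add another attempt; nothing leaves the workspace."""
        self.attempts.append(text)

    def commit_best(self, index, audience):
        """Product: publish exactly one chosen attempt to a named audience."""
        self.shared[audience] = self.attempts[index]

ws = Workspace()
ws.try_draft("A first, abandoned outline.")
ws.try_draft("A stronger second attempt.")
ws.commit_best(1, audience="instructor")   # only this attempt is ever seen
```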

There are, currently, some platforms that can be adapted for this sort of anti-surveillance work. I imagine this as one possible use for something like write.as, e.g. create your own blog that is kept private and then submit assignments from it as you work up material. Most such tools, though, are not aimed at educational markets. And many would require self-hosting in some form and so are not really turnkey for educators. Any sort of document- or file-sharing utility that requires students to opt in might be useful; so too shared whiteboards, PeerTube, or Discourse. Setting up something with Cloudron or YunoHost might be one way to go.

As with most privacy-related things, right now the solutions fall on the shoulders of the user and require setting up some sort of shadow IT or secondary infrastructure. That's not a great solution, but until anonymity-first is part of the thinking behind edtech, we're going to struggle with issues of privacy, and the surveillers will probably have the momentum.

#minimalistedtech #edtech #proctorio #surveillance #edtechminimalism