
Robot Teachers, Racist Algorithms, and Disaster Pedagogy


I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke tonight in Dorothy Kim's class "Race Before Race: Premodern Critical Race Studies." Here's a bit of what I said...

Thank you for inviting me to speak to your class this evening. I am sorry that we're all kicking off fall term online again — well, online is better than dead, of course. I know that this probably isn't how you imagined college would look. But then again, it's worth pausing and thinking about what we imagine school should look like, what we expect school to be like, and what school is like — not just emergency pandemic Zoom-school, but the institution with all its histories and politics and practices and (importantly) variations. Not everywhere looks like Brandeis. Not everyone experiences Brandeis the same way.

Me, I write about education technology for a living. I'm not an advocate for ed-tech; I'm not here to sell you on a new flashcard app or show you how to use the learning management system more effectively. I'm an ed-tech critic. That doesn't mean I just write about how ed-tech sucks — although it mostly does suck — it means that I spend my time thinking through the ways in which technology shapes education and education shapes technology and the two are shaped by ideologies, particularly capitalism and white supremacy. And I do so because I want us — all of us — to flourish; and too often both education and technology are deeply complicit in exploitation and destruction instead.

These are not good times for educational institutions. Many are struggling, as I'm sure you know, to re-open. Many that have re-opened face-to-face are closing and moving online. Most, I'd wager, are facing severe funding shortages — the loss of tuition, dorm-room, and sportsball dollars, for example — as well as new expenses like PPE and COVID testing. Cities, counties, and states are all seeing massive budget shortfalls — and this is, of course, how most public schools (and not only at the K-12 level) are actually funded, not by tuition or federal dollars but by state and local allocations. (That's not to say that the federal government couldn't and shouldn't step up to bail out the public education system.)

Schools have been doing "more with less" for a long while now. Many states had barely returned to pre-recession funding levels before the pandemic hit. And now in-person classes are supposed to be smaller. Schools need more nurses and teaching aides. And there just isn't the money for it. So what happens?

Technology offers itself as the solution. Wait. Let me fix that sentence. Technology companies offer their products as the solution, and technology advocates promote the narrative of techno-solutionism.

If schools are struggling right now, education technology companies — and technology companies in general — are not. Tech companies are dominating the stock market. The four richest men in the world are all tech executives: Jeff Bezos, Bill Gates, Mark Zuckerberg, and Elon Musk — all of whom, incidentally, are education technology "philanthropists" of some sort as well. Ed-tech companies raised over $800 million in the first half of this year alone. The promise of ed-tech — now as always: make teaching and learning cheaper, faster, more scalable, more efficient. And where possible, practical, and politically expedient: replace expensive human labor with the labor of the machine. Replace human decision-making with the decision-making of an algorithm.

This is already happening, of course, with or without the pandemic. Your work and your behavior, as students, are already analyzed by algorithms, many of them designed to identify when and if you cheat. Indeed, it's probably worth considering how much the fear of cheating is constitutive of ed-tech — how much of the technology that you're compelled to use is designed because the system — be that the school, the teachers, the structures or practices — does not trust you.

For a long time, arguably the best-known anti-cheating technology was the plagiarism detection software TurnItIn. The company was founded in 1998 by UC Berkeley doctoral students who were concerned about cheating in the science classes they taught. They were particularly worried about the ways in which they feared students were utilizing a new feature on the personal computer: copy-and-paste. So they applied some of their research on pattern-matching of brainwaves to create a piece of software that would identify patterns in texts. TurnItIn became a huge business, bought and sold several times over by private equity firms since 2008: first by Warburg Pincus, then by GIC, and then, in 2014, by Insight Partners — the price tag for that sale: $754 million. TurnItIn was acquired by the media conglomerate Advance Publications last year for $1.75 billion.
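How does that sort of text pattern-matching actually work? Here is a minimal sketch, in Python, of a shingle-overlap check, the kind of technique plagiarism detectors are generally built on. It is purely illustrative; TurnItIn's actual algorithms are proprietary.

```python
# A toy shingle-overlap checker -- an illustration of the general
# technique of text pattern-matching, not TurnItIn's actual algorithm.
def ngrams(text, n=5):
    """Break a text into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's shingles that also appear in the source."""
    shingles = ngrams(submission, n)
    if not shingles:
        return 0.0
    return len(shingles & ngrams(source, n)) / len(shingles)

essay = "the quick brown fox jumps over the lazy dog near the old river bank"
source = "we saw the quick brown fox jumps over the lazy dog near the old mill"
print(overlap_score(essay, source))  # 0.8 -- most of the essay matches; flag it
```

Scale that comparison up to a corpus of millions of archived essays and you have the core of the product, which is one reason the data the company amasses matters so much.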

That price-tag should prompt us to ask: what's so valuable about TurnItIn? Is it the size of the customer base — the number of schools and universities that pay to use the product? Is it the algorithms — the pattern-matching capabilities that purport to identify plagiarism? Is it the vast corpus of data that the company has amassed — decades of essays and theses and Wikipedia entries that it uses to assess student work?

TurnItIn has been challenged many times by students who've complained that it violates their rights to ownership of their work. A judge ruled, however, in 2008 that students' copyright was not infringed upon as they'd agreed to the Terms of Service. But that seems a terribly flawed decision, because what choice does one have but to click "I agree" when one is compelled to use a piece of software by one's professor, one's school? What choice does one have when the whole process of assessment is intertwined with this belief that students are cheaters and thus with a technology infrastructure that is designed to monitor and curb their dishonesty?

Every student is guilty until the algorithm proves her innocence.

Incidentally, one of its newer products promises to help students avoid plagiarism — and so essay mills now also use TurnItIn, promising to help students avoid getting caught cheating. The company works both ends of the plagiarism market.

Anti-cheating software isn't just about plagiarism, of course. No longer does it just analyze students' essays to make sure the text is "original." There is a growing digital proctoring industry that offers schools ways to monitor students during online test-taking. Well-known names in the industry include ProctorU, Proctorio, and Examity. Many of these companies were launched circa 2013 — that is, in the tailwinds of "the Year of the MOOC" — with the belief that an increasing number of students would be learning online and that professors would demand some sort of mechanism to verify their identity and their integrity. According to one investment company, the market for online proctoring was expected to reach $19 billion last year — much smaller than the size of the anti-plagiarism market, for what it's worth, but one that investors see as poised to grow rapidly, particularly in light of schools' move online because of COVID.

These proctoring tools gather and analyze far more data than just a student's words, than their responses on an exam. They typically require a student show photo identification to their laptop camera before the test begins. Depending on what kind of ID they use, the software gathers data like name, signature, address, phone number, driver’s license number, passport number, along with any other personal data on the ID. That might include citizenship status, national origin, or military status. The software also gathers physical characteristics or descriptive data including age, race, hair color, height, weight, gender, or gender expression. It then matches that data to the student's "biometric faceprint" captured by the laptop camera. Some of these products also capture a student's keystrokes and keystroke patterns. Some ask for the student to hand over the password to their machine. Some track location data, pinpointing where the student is working. They capture audio and video from the session — the background sounds and scenery from a student's home.

The proctoring software then uses this data to monitor a student's behavior during the exam and to identify patterns that it interprets as cheating — if their eyes stray from the screen too long, for example, or if there are sticky notes on the wall, their "suspicion" score goes up. The algorithm — sometimes in concert with a human proctor — decides who is suspicious. The algorithm decides who is a cheat.
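To make that concrete, here is a toy sketch of what a rule-based "suspicion" score might look like: every signal, threshold, and weight here is hypothetical, invented for illustration, since vendors keep their actual rules secret.

```python
# A toy "suspicion" scorer. The event types, thresholds, and weights are
# all hypothetical; no proctoring vendor publishes its actual rules.
def suspicion_score(events):
    score = 0
    for event in events:
        if event["type"] == "gaze_away" and event["seconds"] > 5:
            score += 10   # eyes off the screen too long
        elif event["type"] == "second_face":
            score += 25   # another face appears on camera
        elif event["type"] == "background_noise":
            score += 5    # voices in the room
        elif event["type"] == "window_blur":
            score += 15   # the student tabbed away from the exam
    return score

session = [{"type": "gaze_away", "seconds": 8}, {"type": "background_noise"}]
print(suspicion_score(session))  # 15 -- a glance and a sibling's voice
```

Notice what the rules assume: a quiet, private room; a steady gaze; a household that behaves "normally." Someone decided what counts.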

We know that algorithms are biased, because we know that humans are biased. We know that facial recognition software struggles to identify people of color, and there have been reports from students of color that the proctoring software has demanded they move into more well-lit rooms or shine more light on their faces during the exam. Because the algorithms that drive the decision-making in these products are proprietary and "black-boxed," we don't know if or how they might use certain physical traits or cultural characteristics to determine suspicious behavior.

We do know there is a long and racist history of physiognomy and phrenology that has attempted to predict people's moral character from their physical appearance. And we know that schools have a long and racist history too that runs adjacent to this.

Of course, not all surveillance in schools is about preventing cheating; it's not all about academic dishonesty — but it is always, I'd argue, about monitoring behavior and character (and I imagine in this class you are talking about the ways in which institutional and interpersonal assessments of behavior and character are influenced by white supremacy). And surveillance is always caught up in the inequalities students already experience in our educational institutions.

For the past month or so, there's been a huge controversy in the UK over a different kind of algorithmic decision-making. As in the US, schools in the UK were shuttered in the spring because of the coronavirus. Students were therefore unable to sit their A Levels, the exams that are the culmination of secondary education there. These exams are a Very Big Deal — even more so than the SAT exams that many of you probably took in high school. While the SAT exams might have had some influence on where you were accepted — I guess Brandeis is test-optional these days, so never mind — A Levels almost entirely dictate where students are admitted to university. British universities offer conditional acceptances that are dependent on the students' actual exam scores. So, say, you are admitted to the University of Bristol, as long as you get two As and a B on your A Levels.

No A Levels this spring meant that schools had to come up with a different way to grade students, and so teachers assigned grades based on how well the student had done so far and how well they thought the student would do, and then Ofqual (short for Office of Qualifications and Examinations Regulation), the English agency responsible for these national assessments, adjusted these grades with an algorithm — an algorithm designed in part to avoid grade inflation (which, if you think about it, is just another form of this fear of cheating but one that implicates teachers instead of students).

Almost 40% of teacher-assigned A-Level grades were downgraded by at least one grade. Instead of getting those two As and a B that you expected to get and that would get you into Bristol, the algorithm gave you an A, a B, and a C. No college admission for you.

In part, Ofqual's algorithm used the history of schools' scores to determine students' scores. Let me pause there so you can think about the implications. It's pretty obvious: the model was more likely to adjust the scores of students attending private schools upward, because students at private schools, historically, have performed much better on their A Levels. (As someone who attended a private school in England, I can guarantee you that it's not that they're smarter.) Ofqual's algorithm adjusted the scores of students attending the most disadvantaged state schools downward, because students at state schools, historically, have not performed very well. (I don't want to get too much into race and class and the British education system, but suffice it to say, about 7% of the country attends private schools, and graduates of those schools hold about 40% of top jobs, including government jobs.) Overall, the scores of students in the more affluent regions of London, the Midlands, and Southeast England were adjusted so that they rose more than the scores of students in the North, which has, for a very long time (maybe always?), been a more economically depressed part of the country.

At first, the British government — which does rival ours for its sheer idiocy and incompetence — refused to admit there was a problem or to change the grades, even arguing there was no systemic bias in the revised exam scores because, according to one official, teachers grade poor students too leniently — something that the algorithm was designed to address. But students took to the streets, chanting "Fuck the algorithm," and the government quickly backed down, fearing that not only might it alienate the youth but also their families. Grades were reverted to those given by teachers, not the algorithm, and university spots were given back to those who'd had their offers rescinded.

I should note here that there was nothing particularly complex about the A-Level algorithm. This wasn't artificial intelligence or complex machine learning that decided students' grades. It was really just a simple formula, probably calculated in an Excel spreadsheet. (That doesn't make this situation any better, of course.)
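To see just how simple, here is a sketch of that kind of standardization: teacher-ranked students mapped onto their school's historical grade distribution. This is a simplified reconstruction for illustration, not Ofqual's published model.

```python
# A simplified sketch of school-history-based standardization: assume,
# roughly as Ofqual's model did, that this year's cohort should receive
# the grade distribution the school earned in previous years.
def standardize(ranked_students, historical_grades):
    """ranked_students: teacher's ranking, best to worst.
    historical_grades: the school's past grades, best to worst."""
    n = len(ranked_students)
    grades = [historical_grades[int(i * len(historical_grades) / n)]
              for i in range(n)]
    return dict(zip(ranked_students, grades))

cohort = ["Amira", "Ben", "Chloe", "Dev"]          # ranked by their teachers
past = ["B", "C", "C", "D", "D", "D", "E", "E"]    # the school's history
print(standardize(cohort, past))
# {'Amira': 'B', 'Ben': 'C', 'Chloe': 'D', 'Dev': 'E'}
# No student at this school can receive an A, however able,
# because no student here has received one before.
```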

The A-Level algorithm is part of what Ruha Benjamin calls the "new Jim Code," the racist designs of our digital architecture. And I think what we can see in this example is the ways in which pre-digital policies and practices get "hard-coded" into new technologies. That is, how long-running biases in education — biases about race, ethnicity, national origin, class, gender, religion, and so on — are transferred into educational technologies.

Lest you think that the fiasco in the UK will give education technologists and education reformers pause before moving forward with algorithmic decision-making and algorithmic surveillance, the Gates Foundation last month awarded the Educational Testing Service (which runs the GRE exam) a $460,000 grant "to validate the efficacy of Automated Essay Scoring software in improving student outcomes in argumentative writing for students who are Black, Latino, and/or experiencing poverty."

A couple of days ago, I saw a series of tweets from a parent complaining that her junior-high-school-age son had gone from loving history class to hating it — "tears," "stress," "self-doubt" — after the first auto-graded assignment he turned in came back scored 50/100. The parent, a professor at USC, showed him how to game the software: write long answers, use lots of proper nouns. His next score was 80/100. An update from the parent one day later: "He cracked it: Two full sentences, followed by a word salad of all possibly applicable keywords. 100% on every assignment. Students on @Edgenuityinc, there's your ticket. He went from an F to an A+ without learning a thing." (Sidenote: in 2016, Alabama House Speaker Mike Hubbard was found guilty of 12 counts of felony ethics violations, including receiving money from Edgenuity. Folks in ed-tech are busy trying to stop students from cheating while being so shady themselves.)
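The "hack" works because a keyword-matching grader rewards surface features, not understanding. A toy sketch shows why the word salad scores perfectly (the keyword list and scoring rule here are invented for illustration; Edgenuity has not published its rubric):

```python
# A toy keyword-matching auto-grader. The keywords and scoring rule
# are hypothetical, invented to illustrate the gaming strategy.
KEYWORDS = {"constitution", "federalism", "amendment", "ratify", "delegates"}

def autograde(answer, keywords=KEYWORDS):
    words = set(answer.lower().replace(",", " ").replace(".", " ").split())
    return round(100 * len(words & keywords) / len(keywords))

thoughtful = "The delegates compromised so the states would ratify it."
word_salad = ("The constitution matters. It shaped history. Constitution "
              "federalism amendment ratify delegates.")
print(autograde(thoughtful))  # 40 -- a real answer, penalized
print(autograde(word_salad))  # 100 -- two sentences plus keywords
```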

I tweeted in response to the homework algorithm "hack" that if it's not worth a teacher reading the assignment/assessment, then it's not worth the student writing it. That robot grading is degrading. I believe that firmly. (Again, think of that Gates grant. Who has a teacher or peer read their paper, and who gets a robot?) Almost a thousand people responded to my tweet, most agreeing with the sentiment. But a few people said that robot grading was fine, particularly for math and that soon enough it would work in the humanities too. "Manual grading is drudgery that consumes time and energy we could spend elsewhere," one professor responded. And again, I disagreed, because I think it's important to remember, if nothing else, that if it's drudgery for teachers it's probably drudgery for students too. People did not like that tweet so much, and many seemed to insist that drudgery was a necessary part of learning.

And so, there you go. We've taken that drudgery of analog worksheets and we've made that drudgery digital and we call that "progress." Ed-tech promises it can surveil all the clicks and taps that students make while filling out their digital worksheets, calculating how long they spend on their homework, where they were when it was completed, how many times they tabbed out to play a game instead, how their score compares to other students, whether they're "on track" or "falling behind," claiming it can predict whether they'll be a good college student or a good employee. Ed-tech wants to gamify admissions, hiring, and probably if we let it, the school-to-prison pipeline.

I won't say "it's up to you," students, to dismantle this. That's unfair. Rather it is up to all of us, I think — faculty, students, citizens, alike — to chant "Fuck the algorithm" a lot more loudly.


Selling the Future of Ed-Tech (& Shaping Our Imaginations)


I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke briefly tonight in Anna Smith's class on critical approaches to education technology (before a really excellent discussion with her students). I should note that I talked through my copy of The Kids' Whole Future Catalog rather than, as this transcript suggests, using slides. Sorry, that means you don't get to see all the pictures...

Thank you very much for inviting me here today. (And thank you for offering a class on critical perspectives on education and technology!)

In the last few classes I've visited, I've talked a lot about surveillance technologies and ed-tech. I think it's one of the most important and most horrifying trends in ed-tech — one that extends beyond test-proctoring software, even though, since the pandemic and the move online, test-proctoring software has been the focus of a lot of discussions. Even though test-proctoring companies like to sell themselves as providing an exciting, new, and necessary technology, this software has a long history that's deeply intertwined with pedagogical practices and beliefs about students' dishonesty. In these class talks, I've wanted to sound the alarm about what I consider to be an invasive and extractive and harmful technology but I've also wanted to discuss the beliefs and practices — and the history of those beliefs and practices — that might prompt someone to compel their students to use this technology in the first place. If nothing else, I've wanted to encourage students to ask better questions about the promises that technology companies make. Not just "can the tech fulfill these promises?", but "why would we want them to?"

In my work, I write a lot about the "ed-tech imaginary" — that is, the ways in which our beliefs in ed-tech's promises and capabilities tend to be governed as much by fantasy as by science or pedagogy. Test-proctoring software, for example, does not reliably identify cheating — and even if it did, we'd want to ask a bunch of questions why we'd want it or need it.

Arguably one could say something similar about education as a whole: our beliefs about it are often unmoored from reality. Although we all have experiences in the classroom — as students, if nothing else — many of us hold onto some very powerful, fictional narratives about what schools look like, what they looked like in the past, what they will look like in the future.

When I was a kid, my brother had this book called The Kids' Whole Future Catalog. The name was purposefully reminiscent of The Whole Earth Catalog, which was an important counterculture publication from the late 1960s (through the late 1980s) — one that was very influential on the development of early cyberculture. Like The Whole Earth Catalog, The Kids' Whole Future Catalog was a mail-order catalog of sorts. Each page had addresses where you could write away for products and information. And like The Whole Earth Catalog, The Kids' Whole Future Catalog claimed to be inspired by the work of Buckminster Fuller, the well-known futurist and architect. (The inventor of the geodesic dome and author of a series of lectures on the automation of education, among other things.)

The book was published in 1982. I was 11; my brother was 8. 1982 was two years into the Reagan Presidency and two years before the broadcast of The Day After convinced me that there would be no future and that we'd all likely die in a nuclear holocaust. To be honest, I don't actually recall reading the parts of the book that have to do with the future of school. I remember reading the parts that said that in the future insects would be a major source of protein and that in the future we'd either live in space colonies or underground. The future sounded pretty terrible.

The book offered six areas that might signal what "future learning" would look like, some of which were grounded in contemporary developments of the Eighties and some of which sound quite familiar today:

"New Ways to Learn"— "Put down those pencils… close your books… and come visit a new kind of school. It doesn't look like a school at all. There aren't any desks, books, pencils, or paper. Instead, there are games to play, experiments to try, things to climb on and down and through, drawers that say 'open me,' and displays marked 'please touch'. There aren't any teachers to tell you what you're supposed to do — just 'explainers' who walk around answering questions. What would you like to do first?"

I'd ask you to think about these predictions — not just how realistic they were or are, but who you think makes these predictions, whom these futures benefit, and whom they ignore.

"Schools on the Move"— "Classes will never be boring on an airship traveling around the world!"

Again, whose future would this be? And is that rubble the airship is sailing over?

"A Robot for a Teacher"This section showcases Leachim, a robot created by Michael Freeman to help his wife, a fourth-grade teacher. Leachim's "computer brain is packed with information from seven textbooks, a dictionary, and a whole set of encyclopedias. He knows all the names of all the children in the class, their brothers and sisters, and their parents and pets. He can talk with them about their hobbies and interests. And he can work with up to five students at once, speaking to each of them separately through headsets and showing them pictures on the Tableau screen to his right."

Freeman actually commercialized a version of this teacher robot — the 2-XL, an early "smart toy" that played an 8-track tape with multiple choice and true-false questions.

Let's note the inconsistencies in these predictions. On one hand, there are no more schools. Everything is hands-on learning. On the other hand, robots will still administer multiple choice tests to students. Some people simply cannot imagine a world without multiple choice tests.

"Computers Will Teach You… and You Will Teach Computers!"

Again, what happens to human teachers in the future?

"Simulations" — "Beginning pilots can use flight simulators to experience takeoff, flight, and landing without ever leaving the ground. A computer-controlled wide-screen film system gives them the impression that they're flying in all kinds of weather and encountering problems that actual pilots face."

Incidentally, the first flight simulator was invented in 1929. So it's not that the prediction about simulations isn't a good one; it's that to say that this is "the future" is just so historically inaccurate.

"Exploring Inner Space"

This is arguably the weirdest one. The future of learning will be imaging exercises, according to The Kids' Whole Future Catalog, arguably akin to the "mindset" and "social emotional learning" craze of today. Students of the future will learn and work through their dreams, create dream journals. Students will work with all their senses, including psychic experiences. This section includes Masuaki Kiyota, a Japanese psychic known for "thoughtography" — that is, taking pictures of what's in the mind — and Uri Geller, the Israeli psychic who could bend spoons with his mind. Both have since been found to be frauds (although in fairness to Geller, I suppose, he's now described as a magician and not a psychic). In the future, students will practice clairvoyance, precognition, and telepathy — ironically, these are the sorts of things that surveillance ed-tech like proctoring software promises too.

"Lifelong learning"— "When will you finish your education? When you're 16? 18? 22? 30? People who are asked this question in the future will probably answer 'never.' They'll need a lifetime of learning in order to keep up with their fast-changing world. Instead of working at the same job until retirement, people will change jobs frequently. In between jobs, they'll go to school — but not to the kinds of schools we know today. Grade schools, high schools and colleges will be replaced by community learning centers where people of all ages will meet to exchange ideas and information."

Retirement, according to this section, will happen at age 102. No indication if death occurs at the same time.

A bit of a side-note here: I started down a rabbit hole yesterday, searching for the origins of the phrase "lifelong learning," because once again it's become a phrase you hear all the time from pundits and politicians. It struck me as I was re-reading The Kids' Whole Future Catalog to prepare for class tonight that this push for "lifelong learning" in the 1980s might be connected to Reaganomics, if you will — to the need for job training in a recession, to restrictions to Social Security, to anti-unionization efforts, and to corporations' lack of commitment to their employees. What then might we make of the ed-tech imaginary when it comes to "lifelong learning," then or now: where did these ideas come from? Who's promoting them now; who promoted them then? Interestingly, one of the popularizers of "lifelong learning" was UNESCO, which promoted "lifelong education" in its 1972 Faure Report. A competing vision of the future of work and education was offered by the OECD. There's a much longer story here that I'll spare you (for now — I'm going to write an essay on this). But it's worth thinking about how powerful organizations push certain narratives about the future — narratives that support their policy and political agendas. (These are the same folks who like to tell you that "65% of children entering school today will eventually hold jobs that don't exist yet" — a claim that is, quite simply, made-up bullshit.)

The Kids' Whole Future Catalog feels a lot like made-up bullshit too. But I don't think that means it's unworthy of critical analysis. Indeed, invented narratives are often rich sources to study.

You can chuckle, I suppose, that a 1982 children's book — "a book about your future" — might be intertwined with narratives and ideology. I admit I wonder who else might have had this guide on their bookshelf. Marc Andreessen, co-creator of Mosaic, the first widely used Web browser, and now a venture capitalist, was born the same year as me. So was Elon Musk. Larry Page, the co-founder of Google, was born in 1973. So was his co-founder Sergey Brin. Jack Dorsey, the CEO of Twitter, was born in 1976. His co-founders, Evan Williams and Biz Stone, were born in 1972 and 1974 respectively. Sal Khan, the founder of Khan Academy, was born in 1976. That is to say, there's quite a number of technology entrepreneurs — some with more or less connection to ed-tech — who were raised on a future of learning involving robot teachers and pseudoscience. And that's just tech entrepreneurs. That's not including politicians or filmmakers or professors born in the early 1970s.

Other generations had other touch-points when it comes to the "ed-tech imaginary."

The Jetsons aired an episode with a robot teacher in 1963.

A comic featuring "Push-Button Education" appeared in newspapers in 1958.

Thomas Edison predicted in 1913 that textbooks would soon be obsolete — replaced, surprise surprise, by films like the ones he himself was marketing.

To commemorate the World's Fair in Paris in 1900, a series of postcards were printed that imagined what the year 2000 would look like. One of these featured the school of the future — boys, wearing headsets, seated in rows of desks. The headsets were connected by wires to a machine into which the teacher fed textbooks. The knowledge from these was ostensibly ground up as another student turned the crank on the machine and then transmitted to the students' brains.

And you can laugh. But this notion that we can send information through wires into people's brains is probably the most powerful and lasting idea about the future of learning:

Think of that scene in The Matrix, where Neo is plugged into a machine so he can be fed educational programming straight into his brainstem: "Whoa. I know Kung Fu."

How many folks think this is what ed-tech is striving for, what the future of learning will be like?

Cheating, Policing, and School Surveillance


I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke this morning to Jeffrey Austin's class at Skyline High School. Yes, I said "shit" multiple times to a Zoom full of high school students. I'm a terrific role model.

Thank you very much for inviting me to speak to you today. I am very fond of writing centers, as I taught writing — college composition to be exact — for a number of years when I was a graduate student at the University of Oregon. The state of Oregon requires two terms of composition for graduation, thus Writing 121 and 122 were courses that almost all students had to take. (You could test out of the former, but not the latter.) Many undergraduates dreaded the class, some procrastinating until they were juniors or seniors in taking it — something that defeated the point, in so many ways, of helping students become better writers of college essays.

The University of Oregon also required that the graduate students who mostly taught these classes take a year-long training in composition theory and pedagogy. It’s still sort of shocking to me how many people receive their PhDs and go on to become professors without taking any classes in how to teach their discipline, but I digress…

I don’t recall spending much time talking about plagiarism as I learned how to teach writing — this was all over twenty years ago, so forgive me — although I am sure the topic came up. We were obligated, the professor who ran the writing department said, to notify her if we found a student had cheated. These were the days before TurnItIn and other plagiarism checking software were widely adopted by schools. (These were the early days of the learning management system software, however, and the UO was an early adopter of Blackboard — that's a different albeit related story.) If we suspected cheating, we had a couple of options. Google a sentence or two to see if the passage could be found online. Ask our colleagues if the argument sounded familiar. Confront the student.

I always found it pretty upsetting when I suspected a student had plagiarized a paper. I tried to structure my assignments in such a way as to minimize it — asking students to turn in a one-sentence description of their argument and then a draft, so that I could see their thinking and writing take shape. I felt as though I’d not done enough to guide a student toward success when they turned to essay mills or friends to plagiarize. I wanted to support students, not police them.

When I talked to students about cheating, I found it was rarely a matter of not understanding how to properly cite their sources. Nor was it really a lazy attempt to get out of doing work. Rather, students were under pressure. They hated writing. They weren't confident they had anything substantive to say. They were struggling with other classes, with time management, with aspects of their lives outside of schooling — jobs, family, and so on. Students needed more support, not more surveillance, which as I find myself repeating a lot these days, is too often confused with care.

I never met a student who was a Snidely Whiplash of cheating, twisting his mustache and rubbing his hands together with glee as he displayed callous and wanton academic dishonesty.

But that’s how so much of the educational software that’s sold to schools to curb cheating seems to imagine students — all students: unrepentant criminals, who must be rooted out and punished. Guilty until TurnItIn proves you innocent.

Let me pause and say that, as I prepared these remarks, I weighed whether or not I wanted to extend the Snidely Whiplash reference and refer to anti-cheating software as the Dudley Do-Right of ed-tech — that bumbling, incompetent Mountie, who was somehow seen as the hero. But I don't really know how familiar you are with the cartoon Dudley Do-Right. And I don't want to upset any Canadians.

I will, however, say this: anti-cheating software, whether it’s plagiarism detection or test proctoring — is “cop shit.” And cops do not belong on school grounds.

I don't know much about your organization. I don't know how controversial those two sentences are: first, my claim that ed-tech is “cop shit,” and second, that police should be removed from schools.

Since the death of George Floyd this summer, the calls to eliminate school police have grown louder. It's been a decades-long fight, in fact, in certain communities, but some schools and districts, such as in Oakland where I live, have successfully started to defund and dismantle their forces. It's an important step towards racial justice in schools, because we know that school police officers disproportionately target and arrest students with disabilities and students of color. Students of color are more likely to attend schools that are under-resourced, for example, without staff trained to respond to behavioral problems. They are more likely to attend a school with a police officer. Black students are twice as likely to be referred to law enforcement by schools as white students. Black girls are over eight times more likely to be arrested at school than white girls. During the 2015-2016 school year, some 1.6 million students attended a school with a cop but no counselor. Defunding the police then means reallocating resources so that every child can have access to a nurse, a counselor — to a person who cares.

But what does this have to do with ed-tech, you might be thinking…

In many ways, education technology merely reinscribes the beliefs and practices of the analog world. It's less a force of disruption than it is a force for the consolidation of power. So if we know that school discipline is racist and ableist, then it shouldn't surprise us that the software that gets built to digitize disciplinary practices is racist and ableist too.

Back in February, Jeffrey Moro, a PhD candidate in English at the University of Maryland, wrote a very astute blog post "Against Cop Shit" in the classroom.

"For the purposes of this post," Moro wrote, "I define 'cop shit' as 'any pedagogical technique or technology that presumes an adversarial relationship between students and teachers.' Here are some examples:

  • ed-tech that tracks our students' every move
  • plagiarism detection software
  • militant tardy or absence policies, particularly ones that involve embarrassing our students, e.g. locking them out of the classroom after class has begun
  • assignments that require copying out honor code statements
  • 'rigor,' 'grit,' and 'discipline'
  • any interface with actual cops, such as reporting students' immigration status to ICE and calling cops on students sitting in classrooms."

The history of some of these practices is quite long. But I think that this particular moment that we're in right now has greatly raised the stakes with regards to the implications of "cop shit" in schools.

In the very first talk I gave during the Trump Administration — just 13 days after his inauguration — I warned of the potential for ed-tech to be used to target, monitor, and profile at-risk students, particularly undocumented and queer students. I didn't use the phrase "cop shit," but I could clearly see how easy it would be for a strain of Trumpism to amplify the surveillance technologies that already permeated our schools. Then, in February 2017, I wanted to sound the alarm. Now, almost four years later, it's clear we need to do more than that. We need to dismantle the surveillance ed-tech that already permeates our schools. I think this is one of our most important challenges in the months and years ahead. We must abolish "cop shit," recognizing that almost all of ed-tech is precisely that.

Why do we have so much "cop shit" in our classrooms, Jeffrey Moro asks in his essay. "One provisional answer is that the people who sell cop shit are very good at selling cop shit," he writes, "whether that cop shit takes the form of a learning management system or a new pedagogical technique. Like any product, cop shit claims to solve a problem. We might express that problem like this: the work of managing a classroom, at all its levels, is increasingly complex and fraught, full of poorly defined standards, distractions to our students' attentions, and new opportunities for grift. Cop shit, so cop shit argues, solves these problems by bringing order to the classroom. Cop shit defines parameters. Cop shit ensures compliance. Cop shit gives students and teachers alike instant feedback in the form of legible metrics."

Ed-tech didn't create the "cop shit" in the classroom or launch a culture of surveillance in schools by any means. But it has facilitated it. It has streamlined it.

People who work in ed-tech and with ed-tech have to take responsibility for this, and not just shrug and say it's inevitable or it's progress or school sucked already and it's not our fault. We have to take responsibility because we are facing a number of crises — some old and some new — that are going to require us to rethink how and why and if we monitor and control teachers and students — which teachers and students. Because now, the “cop shit" that schools are being sold isn't just mobile apps that track whether you've completed your homework on time or that assess whether you cheated when you did it. Now we're talking about tracking body temperature. Contacts. Movement. And as I feared back in early 2017, gender identity. Immigration status. Political affiliation.

Surveillance in schools reflects the values that schools have (unfortunately) prioritized: control, compulsion, distrust, efficiency. Surveillance is necessary, or so we've been told, because students cheat, because students lie, because students fight, because students disobey, because students struggle. Much of the physical classroom layout, for example, is meant to heighten surveillance and diminish cheating opportunities: the teacher in a supervisory stance at the front of the class, wandering up and down the rows of desks and peering over the shoulders of students. (It's easier, I should note, to shift the chairs in your classroom around than it is to shift the code in your software.) And all of this surveillance, we know, plays out very differently for different students in different schools — which schools require students to walk through metal detectors, which schools call the police for disciplinary infractions, which schools track what students do online, even when they're at home. And nowadays, especially when they're at home.

Last month, officials in Edgewater, Colorado, called the cops on a 12-year-old boy who held a toy gun during his Zoom class. He was suspended from school.

Digital technology companies like to say that they're increasingly handing over decision-making to algorithms — it's not that the principal called the cops; the algorithm did. Automation is part of the promise of surveillance ed-tech — that is, the automation of the work of disciplining, monitoring, grading. That way, education gets cheaper, faster, better.

We've seen lately, particularly with the switch to online learning, a push for the automation of cheating prevention. Proctoring software is some of the most outrageous "cop shit" in schools right now.

These tools gather and analyze far more data than just a student's responses on an exam. They require a student show photo identification to their laptop camera before the test begins. Depending on what kind of ID they use, the software gathers data like name, signature, address, phone number, driver’s license number, passport number, along with any other personal data on the ID. That might include citizenship status, national origin, or military status. The software also gathers physical characteristics or descriptive data including age, race, hair color, height, weight, gender, or gender expression. It then matches that data to the student's "biometric faceprint" captured by the laptop camera. Some of these products also capture a student's keystrokes and keystroke patterns. Some ask for the student to hand over the password to their machine. Some track location data, pinpointing where the student is working. They capture audio and video from the session — the background sounds and scenery from a student's home. Some ask for a tour of the student's room to make sure there aren't "suspicious items" on the walls or nearby.

The proctoring software then uses this data to monitor a student's behavior during the exam and to identify patterns that it interprets as cheating — if their eyes stray from the screen too long, for example. The algorithm — sometimes in concert with a human proctor — determines who is a cheat. But more chilling, I think, the algorithm decides who is suspicious, what is suspicious.

We know that algorithms are biased, because we know that humans are biased. We know that facial recognition software struggles to identify people of color, for example, and there have been reports from students of color that the proctoring software has demanded they move into more well-lit rooms or shine more light on their faces during the exam. Because the algorithms that drive the decision-making in these products are proprietary and "black-boxed," we don't know if or how they might use certain physical traits or cultural characteristics to determine suspicious behavior.

We know there is a long and racist history of physiognomy and phrenology that has attempted to predict people's moral character from their physical appearance. And we know that schools have a long and racist history too that runs adjacent to this, as do technology companies — and this is really important. We can see how the mistrust and loathing of students is part of a proctoring company culture and gets baked into a proctoring company's software when, for example, the CEO posts copies of a student's chat logs with customer service onto Reddit, as the head of Proctorio did in July. (Proctorio is also suing an instructional technologist from British Columbia for sharing links to unlisted YouTube videos on Twitter.)

That, my friends, is some serious "cop shit." If we believe that cops have no business in schools — and the research certainly supports that — then hopefully we can see that neither does Proctorio.

But even beyond that monstrous company, we have much to unwind within the culture and the practices of schools. We must move away from a culture of suspicion and towards one of trust. That will demand we rethink "cheating" and cheating prevention.

Indeed, I will close by saying that — as with so much in ed-tech — the actual tech itself may be a distraction from the conversation we should have about what we actually want teaching and learning to look like. We have to change the culture of schools, not just adopt kinder ed-tech. We have to stop the policing of classrooms in all its forms and support other models for our digital and analog educational practices.

Behaviorism Won


I have volunteered to be a guest speaker in classes this term. Yesterday, I talked to the students in Roxana Marachi's educational psychology class at San Jose State.

Thank you very much for inviting me to speak to your class. I will confess at the outset: I've never taken a class in educational psychology before. Never taken any psychology course, for that matter. My academic background, however, is in literature, where one does spend quite a bit of time talking about Sigmund Freud. And I wrote my master's thesis in Folklore on political pranks, drawing in part on Freud's book Jokes and Their Relation to the Unconscious. I don't think it's a stretch to argue that Freud is probably one of the most influential theorists of the twentieth century.

A decade ago, I might have said the most influential. But I've spent the last few years deeply immersed in the work of another psychologist: B. F. Skinner. I've read all his books and several books about him; I spent a week at the archives at Harvard, poring over his letters. Perhaps it's colored my assessment — I'm like that kid in The Sixth Sense, except instead of dead people, I see behaviorism everywhere. Okay sure, Skinner's cultural impact might not be as widely recognized as Freud's, but I don't think his importance can be dismissed. He was one of the best known public scholars of his time, appearing on TV shows and in popular magazines, not just at academic conferences and in academic journals. B. F. Skinner was a household name.

It's too easy, I think, to say that Freud and Skinner's ideas are no longer relevant — that psychology has advanced so far in the past century or so and that their theories have been proven wrong. I don't think that's necessarily true. One of the stories that often gets told is that after the linguist Noam Chomsky published two particularly brutal reviews of Skinner's books — a review of Verbal Behavior in 1959 and a review of Beyond Freedom and Dignity in 1971 — everyone seemed to agree behaviorism was wrong, and it was tossed aside for cognitive science. Certainly cognitive science did become more widely adopted within psychology departments starting in the 1960s and 1970s, but the reasons the field turned away from behaviorism were much more complicated than a couple of book reviews. And outside of academic circles, there were other factors too that diminished Skinner's popularity. The film A Clockwork Orange, for example, probably did much more to shape public opinion about behavior modification than anything else. In 1974, the Senate Judiciary Committee issued a report on the use of behavior modification, as there was growing concern about the ways in which it was being used in federally funded programs, including prisons and schools. In 1972, the public learned about the Tuskegee Experiment, a blatantly racist and decades-long study of the effects of untreated syphilis on African American men. People became quite wary of the use of humans in research experiments — medical and psychological — and the National Research Act was signed by President Nixon, establishing institutional review boards that would examine the ethical implications of research.

But behaviorism did not go away. And I'd argue it didn't go away because of the technologies of behavior that Skinner (and his students) promulgated.

There's a passage that I like to repeat from an article by historian of education Ellen Condliffe Lagemann:

I have often argued to students, only in part to be perverse, that one cannot understand the history of education in the United States during the twentieth century unless one realizes that Edward L. Thorndike won and John Dewey lost.

I'm guessing you know who these two men are, but I'll explain nonetheless: Edward L. Thorndike was an educational psychology professor at Columbia University who developed his theory of learning based on his research on animal behavior – perhaps you've heard of his idea of the "learning curve," the time it took for animals to escape his puzzle box after multiple tries. And John Dewey was a philosopher whose work at the University of Chicago Lab School was deeply connected with that of other social reformers in Chicago – Jane Addams and Hull House, for example. Dewey was committed to educational inquiry as part of democratic practices of community; Thorndike's work, on the other hand, happened largely in the lab but helped to stimulate the growing science and business of surveying and measuring and testing students in the early twentieth century. You can think of the victory that Condliffe Lagemann speaks of, in part, as the triumph of multiple choice testing over project-based inquiry.

Thorndike won, and Dewey lost. You can't understand the history of education unless you realize this. I don't think you can understand the history of education technology without realizing this either. And I'd go one step further: you cannot understand the history of education technology in the United States during the twentieth century – and on into the twenty-first – unless you realize that Seymour Papert lost and B. F. Skinner won.

I imagine you'll touch on Papert's work in this course too. But a quick introduction nonetheless: he was a mathematician and computer scientist and a student of Jean Piaget — another key figure in educational psychology. Papert was one of the founders of constructionism, which builds on Piaget's theories of constructivism — that is, learning occurs through the reconstruction of knowledge rather than a transmission of knowledge. In constructionism, learning is most effective when the learner constructs something meaningful.

Skinner won; Papert lost. Thorndike won; Dewey lost. Behaviorism won.

It seems to really bother folks when I say this. It's not aspirational enough or something. Or it implies maybe that we've surrendered. Folks will point to things like maker-spaces to argue that progressive education is thriving. But I maintain, even in the face of all the learn-to-code brouhaha, that multiple choice tests have triumphed over democratically-oriented inquiry. Indeed, when we hear technologists champion "personalized learning," it's far more likely that what they envision draws on Skinner's ideas, not Dewey's.

In education technology circles, Skinner is perhaps best known for his work on teaching machines, an idea he came up with in 1953, when he visited his daughter's fourth grade classroom and observed the teacher and students with dismay. The students were seated at their desks, working on arithmetic problems written on the blackboard as the teacher walked up and down the rows of desks, looking at the students' work, pointing out the mistakes that she noticed. Some students finished the work quickly, Skinner reported, and squirmed in their seats with impatience waiting for the next set of instructions. Other students squirmed with frustration as they struggled to finish the assignment at all. Eventually the lesson was over; the work was collected so the teacher could take the papers home, grade them, and return them to the class the following day.

"I suddenly realized that something must be done," Skinner later wrote in his autobiography. This classroom practice violated two key principles of his behaviorist theory of learning. Students were not being told immediately whether they had an answer right or wrong. A graded paper returned a day later failed to offer the type of positive behavioral reinforcement that Skinner believed necessary for learning. Furthermore, the students were all forced to proceed at the same pace through the lesson, regardless of their ability or understanding. This method of classroom instruction also provided the wrong sort of reinforcement – negative reinforcement, Skinner argued, penalizing the students who could move more quickly as well as those who needed to move more slowly through the materials.

So Skinner built a prototype of a mechanical device that he believed would solve these problems – and solve them not only for a whole classroom but ideally for the entire education system. His teaching machine, he argued, would enable a student to move through exercises that were perfectly suited to her level of knowledge and skill, assessing her understanding of each new concept, and giving immediate positive feedback and encouragement along the way. He patented several versions of the device, and along with many other competitors, sought to capitalize what had become a popular subfield of educational psychology in the 1950s and 1960s: programmed instruction.
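The logic of the teaching machine survives nearly unchanged in today's drill-and-practice software. Here is a minimal programmed-instruction loop in code, my reconstruction of the principle (small steps, immediate feedback, self-pacing) rather than Skinner's actual mechanism:

```python
# A minimal programmed-instruction loop: small steps, immediate positive
# reinforcement, and self-paced advancement. A reconstruction of the
# principle behind Skinner's teaching machine, not his hardware.
frames = [            # each "frame" is one small step in the program
    ("2 + 3 = ?", "5"),
    ("5 + 4 = ?", "9"),
    ("9 + 7 = ?", "16"),
]

def run_program(frames):
    for prompt, correct in frames:
        while True:
            answer = input(prompt + " ").strip()
            if answer == correct:
                print("Right!")    # immediate reinforcement
                break              # the learner advances at her own pace
            print("Try again.")    # no moving on until the step is mastered

run_program(frames)
```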

The teaching machine wasn't the first time that B. F. Skinner made headlines – and he certainly made a lot of headlines for the invention, in part because the press linked his ideas about teaching children, as Skinner did himself no doubt, to his research on training pigeons. "Can People Be Taught Like Pigeons?" Fortune magazine asked in 1960 in a profile on Skinner and his work. Skinner's work training a rat named Pliny had led to a story in Life magazine in 1937, and in 1951, there was a flurry of stories about his work on pigeons. (The headlines amuse me to no end, as Skinner was a professor at Harvard by then, and many of them say things like "smart pigeons attend Harvard" and "Harvard Pigeons are Superior Birds Too.")

Like Edward Thorndike, Skinner worked in his laboratory with animals (at first rats, then briefly squirrels, and then most famously pigeons) in order to develop techniques to control behavior. Using a system of reinforcements – food, mostly – Skinner was able to condition his lab animals to perform certain tasks. Pliny the Rat "works a slot machine for living," as Life described the rat's manipulation of a marble; the pigeons could play piano and ping pong and ostensibly even guide a missile towards a target.

In graduate school, Skinner had designed an "operant conditioning chamber" for training animals that came to be known as the "Skinner Box." The chamber typically contained some sort of mechanism for the animal to operate – a plate for a pigeon to peck (click!), for example – that would result in a chute releasing a pellet of food.

It is perhaps unfortunate, then, that when Skinner wrote an article for Ladies Home Journal in 1945, describing a temperature-controlled, fully-enclosed crib he'd invented for his and his wife's second child, the magazine ran it with the title "Baby in a Box." (The title Skinner had given his piece: "Baby Care Can Be Modernized.")

Skinner's wife had complained to him about the toll that all the chores associated with a newborn had taken on her with their first child, and as he wrote in his article, "I felt that it was time to apply a little labor-saving invention and design to the problems of the nursery." Skinner's "air crib" (as it eventually came to be called) allowed the baby to go without clothing, save the diaper, and without blankets; and except for feeding and diaper-changing and playtime, the baby was kept in the crib all the time. Skinner argued that by controlling the environment – by adjusting the temperature, by making the crib sound-proof and germ-free – the baby was happier and healthier. And the workload on the mother was lessened – "It takes about one and one-half hours each day to feed, change, and otherwise care for the baby," he wrote. "This includes everything except washing diapers and preparing formula. We are not interested in reducing the time any further. As a baby grows older, it needs a certain amount of social stimulation. And after all, when unnecessary chores have been eliminated, taking care of a baby is fun."

As you can probably imagine, responses to Skinner's article in Ladies Home Journal fell largely into two camps, and there are many, many letters in Skinner's archives at Harvard from magazine readers. There were those who thought Skinner's idea for the "baby in a box" bordered on child abuse – or at the least, child neglect. And there were those who loved this idea of mechanization – science! progress! – and wanted to buy one, reflecting post-war America's growing love of gadgetry in the home, in the workplace, and in the school.

As history of psychology professor Alexandra Rutherford has argued, what Skinner developed were "technologies of behavior." The air crib, the teaching machine, "these inventions represented in miniature the applications of the principles that Skinner hoped would drive the design of an entire culture," she writes. He imagined this in Walden Two, his utopian (I guess) novel in which he envisaged a community that had been socially and environmentally engineered to reinforce survival and "good behavior." But this wasn't just fiction for Skinner; he designed technologies that would, he argued, improve human behavior – all in an attempt to re-engineer the entire social order and to make the world a better place.

"The most important thing I can do," Skinner famously said, "is to develop the social infrastructure to give people the power to build a global community that works for all of us," adding that he intended to develop "the social infrastructure for community – for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all."

Oh wait. That wasn't B. F. Skinner. That was Mark Zuckerberg. My bad.

I would argue, in total seriousness, that one of the places that Skinnerism thrives today is in computing technologies, particularly in "social" technologies. This, despite the field's insistence that its development is a result, in part, of the cognitive turn that supposedly displaced behaviorism.

B. J. Fogg and his Persuasive Technology Lab at Stanford are often touted by those in Silicon Valley as the "innovators" behind this "new" practice of building "hooks" and "nudges" into technology. These folks like to point to what's been dubbed colloquially "The Facebook Class" – a class Fogg taught in which students like Kevin Systrom and Mike Krieger, the founders of Instagram, and Nir Eyal, the author of Hooked, "studied and developed the techniques to make our apps and gadgets addictive," as Wired put it in a recent article about how some tech executives now suddenly realize that this might be problematic.

(It's worth teasing out a little – but probably not in this talk, since I've rambled on so long already – the difference, if any, between "persuasion" and "operant conditioning," and how each imagines leaving space for freedom and dignity. Rhetorically and practically.)

I'm on the record elsewhere arguing that this framing – "technology as addictive" – has its problems. Nevertheless it is fair to say that the kinds of compulsive behavior we display with our apps and gadgets are encouraged by design. All that pecking, like well-trained pigeons.

These are "technologies of behavior" that we can trace back to Skinner – perhaps not directly, but certainly indirectly due to Skinner's continual engagement with the popular press. His fame and his notoriety. Behavioral management – and specifically through operant conditioning – remains a staple of child rearing and pet training. It is at the core of one of the most popular ed-tech apps currently on the market, ClassDojo. Behaviorism also underscores the idea that how we behave and data about how we behave when we click can give programmers insight into how to alter their software and into what we're thinking.

If we look more broadly – and Skinner surely did – these sorts of technologies of behavior don't simply work to train and condition individuals; many technologies of behavior are part of a broader attempt to reshape society. "For your own good," the engineers try to reassure us. "For the good of the global community," as Zuckerberg would say. "For the sake of the children."

Ed-Tech and Trauma


Here are my remarks today from a Contact North webinar with Paul Prinsloo: "Why Technology is Not the Answer."

So I want to apologize at the outset for being a bit unprepared for today's webinar. As you may well know, things have been a bit of a mess in the US lately — I mean, for at least the past four years, probably longer. But certainly for the past few months, weeks, and days. I started to prepare my remarks on Tuesday — election day in the US. As it stands, two days later, we still do not know the winner of the Presidential race. We do not know what Donald Trump will do with the 75 days he has left in office — hopefully he's on his way out, my god.

I gave my first keynote of the Trump era in February 2017, less than two weeks after his inauguration. I had a foreboding feeling about what his Presidency would hold — for the American education system and ed-tech most certainly, but for every sector quite frankly, and for the health and wellbeing of everyone in the world. At the time, I wanted to caution people about the ways in which education data might be weaponized by the Trump administration. It's been clear for decades now that Trump is a eugenicist. And I feared for immigrant students, queer students, and students of color in particular. I don’t think I was wrong to worry. If nothing else, as I said in my last Contact North webinar, we have seen ed-tech surveillance expand greatly in the last few years; and as we know, surveillance harms rather than protects. It disproportionately harms students already vulnerable, already struggling. But it also grooms all students for a lifetime of surveillance — at work and increasingly at home.

I knew, when Trump was elected, that the four years to follow would be difficult, particularly for those who worked in and attended schools. An administration that opposes science and undermines facts and trades in racist conspiracy theories is no friend to academia, no friend to scholarship. Trumpism is an epistemic crisis, and our institutions — all of them, not just educational ones — are weakened. They struggle to respond.

And look at us now. Over 47 million coronavirus cases and over 1.2 million deaths worldwide, with the US leading in cases and in deaths. I think there are many things we can discuss today — and I hope we can open the floor for Q&A quickly — but I just want to recognize the incredible and awful trauma that everyone has experienced, that many are still experiencing in many parts of the world. There are almost a quarter of a million dead in the US alone — a figure that is surely a vast undercount of the number of people whose lives have been lost directly or indirectly to the pandemic. A pandemic that, in the US at least, rages totally out of control. Few people are untouched by this crisis. Few students. Few teachers. Few staff. And as we talk about the future — whether it's planning for next semester or next year or beyond — I think we do an immense disservice to ourselves, to our shared humanity, if we fail to recognize the trauma. We cannot "build back better," to borrow Joe Biden's campaign slogan, if we do not stop to grieve and to heal.

I hope you'd agree that addressing the loss and trauma of the Trump Presidency, of COVID-19, is not a technological problem — I guess we can debate this. But when I tune into so many of the discussions about the present and the future of education, almost all I hear is chatter about technology. How to improve Zoom sessions, how to use email for asynchronous teaching, how to run assessments with or without online proctoring software, and so on. I get it — educational technologists are gonna ed-tech. But I feel like so much of this focus on the technology, and even on the digital pedagogies that accompany it, ignores the lived experiences of so many of us. It's largely an attempt to move offline education online, and in doing so to replicate traditional classroom practices. But far too often, I fear, this replication ignores or, worse, perpetuates trauma.

Here's my takeaway from today (I hope): To fail to address the trauma will leave us — individually, institutionally — vulnerable to a further erosion of trust and care. It is imperative that, long before we talk about the gadgetry that might comprise the future of education, we address the loss and the violence that is happening in education right now.

We know that students are experiencing acute trauma — illness, homelessness, hunger, threats of violence, threats of deportation, financial precarity, racism, homophobia, and ecological disasters — and they have been well before the pandemic upended any modicum of stability they might have had. (I should add that many staff and precarious faculty members are experiencing this too.) We know that these forms of trauma affect students' behavior, cognition, relationships, and feelings of self-worth. We know that school can cause and exacerbate trauma. We know pedagogical practices, school policies, and indeed the curriculum itself can traumatize. We know ed-tech is unlikely to ameliorate any of this, and is just as likely to make things worse.

I know that people bristle when I say this: "ed-tech is just as likely to make things worse." I think we like to think of new technology as "progress," and then we confuse that with progressive pedagogy and progressive politics. But ed-tech isn't necessarily progressive, pedagogically or politically. I make a book-length argument elsewhere that much of ed-tech is built on behaviorism — and behaviorism's most famous advocate, B. F. Skinner, you'll recall, did not believe in freedom. When it's built to serve oppressive pedagogies and discriminatory institutions — when it's built with the belief that students shouldn't have agency but rather should be engineered and optimized — then ed-tech, as the title of this webinar suggests, is not the answer.

I'd say that ed-tech is not even the right question.

A week or so ago, I was contacted by a reporter from a major US newspaper who wanted to talk to me about the future of AI in education. I get these sorts of media inquiries a lot, and I know that I have a particular role to play in how journalists plan to shape their stories. I'm there for "balance," to offer a critical perspective that runs counter to the promises and the hype that the ed-tech CEOs or their spokespeople advance. Such was the case this time. The future of AI in education was bright, according to two ed-tech companies. The reporter wanted me to push back and say something about privacy, security, and algorithmic bias. I don't think I was a good interviewee because I wasn't offering her the sound-bites she wanted. I mean, sure, I can speak to all of that. I can talk about the vast data extraction of education technologies, the shoddy security practices of companies and schools, the ways in which algorithms discriminate and obscure rather than enhance decision-making. But I wanted to complicate the reporter's story — I was in a mood, I guess. I wanted to challenge her assumptions that education would necessarily become more technological, that artificial intelligence would necessarily provide students and teachers and schools anything new, let alone good. But mostly, I didn't want to talk about the tech — or not, at least, how tech is typically defined.

I guess I should have said this at the outset. But I often cite the work of physicist Ursula Franklin who spoke of technology as a practice: "Technology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters," she wrote. "Technology is a system. It entails far more than its individual material components. Technology involves organization, procedures, symbols, new words, equations, and, most of all, a mindset." "Technology also needs to be examined as an agent of power and control," Franklin insisted, and her work highlighted "how much modern technology drew from the prepared soil of the structures of traditional institutions, such as the church and the military." She could have certainly included the university there.

In my interview with that reporter, I wanted to talk not just about "is the tech good" or "does the tech even work" but about the politics, about the ideology of ed-tech, and about the practices and systems of schooling. Why do we value personalization and efficiency, for example? What does it mean for institutions that already rely so heavily on precarious labor to adopt more "labor-saving" software? What are the practices that are being automated, and why? Ed-tech doesn't just emerge out of nowhere. Ed-tech is built on that "prepared soil," as Franklin put it.

The reporter asked me "what if we could build an AI that didn't have any privacy or security issues, that didn't have any bias?" And I argued with her that that was absolutely the wrong way to think about this. What if, for example, someone built an online proctoring tool that was bias-free, privacy-respecting, and absolutely secure? Well, I'd say that it would be impossible, but sure, okay. What if? It would still be a terrible idea because online proctoring is carceral pedagogy — that is, a pedagogy that draws on beliefs and practices that echo those of prisons — surveillance, punishment, and too often literal incarceration.

Carceral pedagogy is the antithesis of education as a practice of freedom. And carceral pedagogy is deeply traumatizing. We have heard over and over and over the stories of students deeply traumatized by online test proctoring — by its judgments about their facial expressions and movements and skin color.

And we come back to my first and what I hope my most important point: we have to address the trauma, the grief, and the loss that we have all experienced (that we continue to experience). And we cannot do that with carceral pedagogy. We cannot do that with carceral ed-tech.

One more point, I guess, before we turn to the discussion. If "technology" is not the answer, then I'm happy to say "more money" sure could get us closer to one. That we have starved our public school systems has only served to make them more unjust, more ruthless. That said, even if we fully fund education, if we make sure that working for universities is financially sustainable and that attending university is free, then we still have so much to do to reshape these institutions and their practices and to end the trauma they've inflicted for centuries now.

What Happens When Ed-Tech Forgets? Some Thoughts on Rehabilitating Reputations


I was a guest today in Chris Hoadley's NYU class on ed-tech and globalization. Here's a bit of my rant...

Thank you so much for inviting me to speak to you today. I have been really stumped as to what I should say. If you look at the talks I've given this year — and I've done quite a lot since I've volunteered to visit Zoom school and speak to classes — there are a couple of notable themes: behaviorism and surveillance. I could talk about both of these for hours, and I want to leave plenty of time, after I rant at you for a bit, for us to tackle some of these issues. It's worth noting that these were things I talked about before the pandemic — behaviorism, surveillance, and trauma — but many folks seem a lot more amenable to hearing me now. Unlike previous moments when ed-tech was in the spotlight — notably in 2012, "the year of the MOOC" — I am now inundated with media requests to talk about the drawbacks and the dangers of the move online, particularly as it relates to online test-proctoring companies, at least one of which is proving to be as villainous a character in ed-tech circles as we've seen since (perhaps) Blackboard.

One of the things I have written about quite a bit is this idea of "ed-tech amnesia" — that is, this profound forgetting if not erasure of the history of the field. And I don't just mean forgetting or erasing what happened in the 1950s or 1980s. I mean forgetting what happened five, ten years ago. Some of this is a result of an influx of Silicon Valley types in recent years — people with no ties to education or education technology who think that their ignorance and lack of expertise is a strength. And it doesn't help, of course, that there is, in general, a repudiation of history within Silicon Valley itself. Silicon Valley's historical amnesia — the inability to learn about, to recognize, to remember what has come before — is deeply intertwined with the idea of "disruption" and its firm belief that new technologies are necessarily innovative and are always "progress." I like to cite, as an example, a New Yorker article from a few years ago that interviewed Anthony Levandowski, the engineer at the center of Google's lawsuit against Uber over self-driving car technology. "The only thing that matters is the future," Levandowski told the magazine. "I don't even know why we study history. It's entertaining, I guess — the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn't really matter. You don't need to know that history to build on what they made. In technology, all that matters is tomorrow." (If this were a literature class, I would tie this attitude to the Italian Futurists and to fascism, but that's a presentation for another day.)

There are other examples of this historical amnesia in ed-tech specifically, no doubt. Narratives about the “factory model of education” and whatnot. Some of these other examples appear in the introduction of my forthcoming book, which I won't spoil since I have to save a chapter like that for the book tour — if we can do book tours.

I want us to be vigilant about this amnesia, in no small part because I think it's going to be wielded — I use that verb because I'm thinking here of that little flashy light that Will Smith had in Men in Black — in the coming months and years as many people want us to forget their mistakes, as they try to rehabilitate not just their bad ideas but their very reputations. By "many people," of course I mean Jared and Ivanka. But I also mean any number of people in education and education technology, who've not only screwed up the tools and practices of pandemic teaching and learning today, but who have a rather long history of bad if not dangerous ideas and decisions. These are people who have done real, substantive damage to students, to teachers, to public education. We cannot forget this.

We already have, of course.

Remember AllLearn? (I'm guessing not. There's not even a Wikipedia entry. We've just memory-holed it.) It was a joint online education project founded by Yale, Stanford, and Oxford in 2000 that had over $12 million in investment and created over 100 courses. (Do the math there on the per course costs.) It closed some six years later. AllLearn was short for Alliance for Lifelong Learning. The pitch was that it would provide digital courseware from "the world's best universities" to those university alumni and to the public. The former would pay $200 a course; the latter $250. The Chair of AllLearn was also the head of Yale University at the time: Richard Levin. Despite the failure of AllLearn, in 2014, Levin was named the CEO of Coursera. (His Wikipedia entry also fails to mention AllLearn.)

AllLearn wasn't the only online education failure of the early 2000s, of course. Columbia University invested $30 million into its own online learning initiative, Fathom, which opened in 2000 and closed in 2003. Fathom, for its part, does have a Wikipedia entry. There, you can learn that this initiative was headed by one Michael M. Crow, who is now the President of Arizona State University and, according to plenty of education reformers, the visionary behind "the new American university" — one whose interests are not those of the public, I'd say, but rather those of industry. (Crow's Wikipedia entry, for what it's worth, does not mention Fathom either. It does mention that he's the chairman of the board of In-Q-Tel, the investment arm of the CIA.)

I talk a lot about the problems of industry when it comes to ed-tech — how venture capital and venture philanthropy have enormous influence on shaping the direction of education policy. But we should recognize too that the call, if you will, is also coming from inside the house. Terrible ed-tech isn't simply something that's imposed onto universities from the outside; it's something that certain folks on the inside and certain institutions in particular are readily promoting, designing, and adopting. The learning management system, for example, originated at universities. (We can debate which one. You can trace the LMS to PLATO at the University of Illinois Urbana-Champaign, for example, or you can trace it to CourseInfo at Cornell.) Plagiarism detection software originated at universities. (TurnItIn was founded at UC Berkeley.) And online test proctoring software has roots at universities as well. (ProctorU was founded at Andrew Jackson University. Proctorio was founded at Arizona State.)

Online test proctoring is pretty abhorrent. We're quite literally asking students to install spyware on their machines. This spyware extracts an incredible amount of information from students, including their biometric data, audio, and video, and then runs it through proprietary algorithms designed to identify suspicious behavior that might signal cheating. I don't think I need to detail to this audience why this is a bad idea technically and a bad idea politically and a bad idea pedagogically.
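
(For the technically inclined: the pattern at work is depressingly easy to caricature. Here's a hypothetical sketch — every name and threshold below is mine, invented for illustration; the vendors' actual algorithms are proprietary — of how crude heuristics over surveillance data get dressed up as a "suspicion" score.)

```python
from dataclasses import dataclass

@dataclass
class ProctoringSession:
    face_detected_ratio: float   # fraction of frames where a face was detected
    gaze_offscreen_ratio: float  # fraction of frames where gaze looks "away"
    audio_events: int            # count of detected voices or noises

def suspicion_score(s: ProctoringSession) -> float:
    """Hypothetical heuristic, not any vendor's real algorithm."""
    score = 0.0
    if s.face_detected_ratio < 0.9:   # face detection fails more often on darker skin
        score += 0.4
    if s.gaze_offscreen_ratio > 0.2:  # looking away from the screen = "suspicious"
        score += 0.4
    score += min(0.2, 0.05 * s.audio_events)  # any sound in the room counts against you
    return score  # the instructor sees a flag, not this logic

# A student whose webcam struggles to find their face gets flagged:
print(suspicion_score(ProctoringSession(0.75, 0.25, 2)))  # 0.9
```

Note what the sketch makes plain: the "intelligence" is a handful of thresholds, and every failure mode of the underlying sensors — bad lighting, dark skin, a noisy household — becomes the student's fault.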

It's been fascinating, I think, to see the media pick up on this story, because for far too long critiques of ed-tech have been overwhelmed by the sheer volume of hype, overpromising, and marketing fluff. But I want to call out Proctorio in particular in this talk because this company has demonstrated it has no business in schools; its products have no business in classrooms. Online test proctoring is, as PhD student Jeffrey Moro has called it, "cop shit" — that is, "any pedagogical technique or technology that presumes an adversarial relationship between students and teachers." Cop shit supposedly brings order to the classroom by demanding compliance. Cop shit, like "broken windows policing," takes the immense amount of data that schools and ed-tech collect about students and uses that to identify potential criminal behavior — cheating and otherwise. Cop shit relies on carceral pedagogy (and carceral ed-tech), which stands in complete opposition to any sort of liberatory practice of teaching and learning. It stands in complete opposition to education as a practice of care and justice.

But Proctorio has taken its cop shit one step further, invoking the law to threaten students and university staff who challenge them. Proctorio is currently suing Ian Linkletter, an instructional technologist at the University of British Columbia, for infringing on its intellectual property rights. A critic of the company, Linkletter posted links to unlisted YouTube videos — that is, publicly available information — on Twitter. The company has also insinuated it might take legal action against an academic journal that published an article critical of online test proctoring. Proctorio also filed a DMCA takedown notice against a Miami University student who'd posted snippets of Proctorio's Google Chrome extension to Twitter and who raised questions about some of the claims the company was making about its product. And Proctorio's CEO, Mike Olsen, posted a student's private chat logs with the company's customer support to Reddit after the student complained about the product. What kind of leader does that? What kind of company culture sanctions that?

Proctorio has demonstrated again and again and again and again and again and again that it holds students and staff in deep disdain. It has demonstrated that it will bully people to get its way — to maintain and expand its market share, to spread the adoption of "cop shit." Let's not forget that.

For a long time, arguably, Blackboard was one of the major ed-tech villains. I mean, nobody is particularly fond of the learning management system as a piece of ed-tech, but the LMS is not so much evil as it is insidiously unimaginative. Blackboard, however, really upset folks in 2006 when it filed a patent infringement lawsuit against its competitor Desire2Learn (D2L), one day after receiving the patent for "Internet-based education support system and methods." As I mentioned earlier, one can trace the origins of the learning management system and of "Internet-based education support system and methods" to much earlier technologies, including the PLATO system at the University of Illinois in the 1960s. But Blackboard filed the patent; and Blackboard decided to be the patent bully. What kind of leader does that? What kind of company culture sanctions that? Blackboard won its lawsuit against D2L, although, after several years of legal wrangling, the patent office eventually rescinded some 44 IP claims made by Blackboard, and the two LMS companies announced in 2009 that they'd settled all the litigation between them. Nevertheless, this left a bitter taste in a lot of folks' mouths. We'll never forget, some said.

But guess who's back? Michael Chasen, one of the co-founders of Blackboard and its CEO from 1999 to 2012. He's launched a startup that offers a layer on top of Zoom to make it work "better" for schools — offering things like attendance, proctoring, and eye-tracking. And guess who else is back? Coursera founder Daphne Koller. She and her husband have launched a startup that also offers a replacement to Zoom. Just like Richard Levin did when he was appointed the CEO of Coursera, these folks are going to claim that they have deep experience with online education, but we might want to balk at that because they've never demonstrated any willingness to learn from the mistakes they have made in the past.

It makes me rather depressed to say that I gave a talk six years ago called "Un-Fathom-able: The Hidden History of Ed-Tech" where I touched on some of these very same themes, these very same stories. I called it "Un-Fathom-able," thumbing my nose at the failures of Columbia University's dot-com-era disaster Fathom, sure, but also at what I suspected at the time — 2014! — we'd come to see as the failure of Coursera. There wasn't a sustainable business model for AllLearn, and there wasn't a sustainable business model (at the outset, at least) for Coursera. (I'm not sure that there is one quite yet, although the company has ditched any pretense of the "free and open" once heralded as the great innovation of the MOOC.)

Unfathomable. Impenetrable. Incomprehensible. Inexplicable. Unknowable. There's so often this hand-waving in the face of grave mistakes in ed-tech that no one could have possibly predicted, no one could have possibly known. But people did predict. People did know. That expertise, however, was dismissed; experiences were forgotten; reputations were rehabilitated without any reflection or humility.

This pandemic has given us a pretty pivotal moment for educational institutions, one in which we have to decide what we want school to do, what we want it to look like, whose values it should represent and carry forward. But I'd argue we won't be able to move forward with any sort of progressive politics or progressive pedagogy or progressive university mission until we reconcile where we've been before. We can't move forward towards any semblance of educational justice until there is reconciliation and repair for the harm that ed-tech and its proponents have caused. We won't move forward if we just forget. We'll just keep getting the LMS and expensive video lectures and "cop shit" repackaged and sold to us as innovation.

Behaviorism, Surveillance, and (School) Work


I was a speaker today at the #AgainstSurveillance teach-in, a fundraiser for Ian Linkletter who is being sued by the online test-proctoring software company Proctorio.

I am very pleased but also really outraged to be here today to help raise money for Ian Linkletter's defense and, more broadly, to help raise awareness about the dangers of ed-tech surveillance. It's nice to be part of an event where everyone is on the same page — politically, pedagogically — and I needn't be the sole person saying "hey wait, folks. This ed-tech stuff is, at best, snake oil and, at worst, fascist."

The challenge, on the other hand, is to not simply repeat the things that Sava, Maha, Benjamin, Chris, and Jesse have already said. I am lucky that these five are not just colleagues but dear friends, and the love and support they have shown me and the solidarity that all of you show today give me great hope that we can build better educational practices and that we aren't stuck with snake oil or fascism.

I will say this, even if it's been stated and restated a dozen or more times today: test proctoring is exploitative and extractive. It is harmful to all students, but particularly to those who are already disadvantaged by our institutions. To adopt test proctoring software is to maintain a pedagogical practice based on mistrust, surveillance, and punishment. To adopt test proctoring software is to enrich an unethical industry. To adopt Proctorio in particular is to align oneself with a company that has repeatedly demonstrated that it sees students, teachers, and staff as its enemy, a company that has no respect for academic freedom, for critical inquiry, or for the safety and well-being of the community it purports to serve.

When we talk about surveillance and ed-tech, much of the focus — and rightly so — is on how this affects students. But as we gather here today to raise funds for Ian, a learning technology specialist at UBC, we have to recognize how much surveillance and ed-tech affect teachers and staff as well.

This should not come as a surprise. A fair amount of education technology comes from what we call "enterprise technology" — that is, the kinds of software adopted by large corporations and government entities. Zoom, for example, was not designed for classroom use (although Stanford University was, interestingly, its first paying customer); it was designed to facilitate remote business meetings. (And I should add that it was designed rather poorly for even that. This spring, when schools and workplaces turned to videoconferencing tools en masse, the CEO of Zoom admitted that he'd never really thought about privacy and security issues on the platform. He couldn't fathom why someone would "zoom bomb" a class.)

Enterprise technology is utterly committed to monitoring and controlling workers, although I think many white-collar professionals imagine themselves laboring outside these sorts of constraints — much like professors seem to imagine that the plagiarism and proctoring software is only interested in their students' work, not their own. Microsoft and Google offer schools "productivity tools," for example, making it quite clear — I would hope — that whatever students or staff type or click on feeds into financial measurements of efficiency. (Quite literally so in the case of Microsoft, which now offers managers a way to measure and score workers based on how often they email, chat, or collaborate in SharePoint. In ed-tech circles, we call this "learning analytics," but it's less about learning than it is about productivity.)
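
(To make the logic concrete: here is a hypothetical sketch of what such a "productivity score" amounts to. Microsoft's actual formula isn't public, and every weight below is invented for illustration — but the shape of the thing is the point: counting activity and calling it a measure of the worker.)

```python
def productivity_score(emails_sent: int, chats_sent: int,
                       docs_shared: int) -> int:
    """Hypothetical activity-counting score, capped at 100.

    Not any vendor's real formula -- just the underlying logic:
    reward observable volume, whatever the quality of the work.
    """
    raw = 2 * emails_sent + 1 * chats_sent + 5 * docs_shared
    return min(100, raw)

# A worker who sends lots of messages "scores" well, regardless of
# whether anything was taught, learned, or cared for:
print(productivity_score(emails_sent=30, chats_sent=25, docs_shared=3))  # 100
```

Swap "emails sent" for "discussion posts" and "docs shared" for "assignments submitted," and you have the skeleton of a learning analytics dashboard.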

The learning management system, as the name suggests, is as much a piece of enterprise software as it is educational software. It too is committed to measuring and scoring based on how often its users email, chat, or collaborate. That word "management" is a dead giveaway that this software is designed neither for students nor for workers. The LMS is an administrative tool, designed at the outset to post course materials online in such a way that the institution, not the open web, could control access.

The LMS monitors "student performance" and productivity, to be sure. But it can readily be used to surveil instructors as well — to track which graduate students are on strike based on their log-ins, for example, or to track how many hours teachers are working during the pandemic. Real-world examples, not hypotheticals, here. The co-founders of Blackboard, incidentally, are back with a new startup: an administrative layer on top of Zoom to make it work "better" for school, they say, with features like attendance-taking, test-proctoring, and eye-tracking. Congratulations to education technology for taking a surveillance tool designed for the office and making it even more exploitative when turning it on students and staff.

I want to pause here to make an important point: too often, when it comes to education technology — all technology, to be honest — we get really caught up in the material (or digital) object itself. We talk about "the tool" as though it has agency and ideas all its own, rather than recognizing that technology is part of the broader practices, systems, and ideologies at play in the world around us. Technologies have histories. The LMS has a history — one bound up in the technological constraints of student information systems in the late 1990s and the ed-tech imagination about what sort of web access students should (or shouldn't) have. That history gets carried forward today — analog beliefs and practices get "hard-coded" into software, creating new pedagogical constraints that students and teachers must work with. Technology never arrives "all of a sudden," even though stories about tech and ed-tech are prone to ignore history to highlight some so-called innovation. Proctorio did not emerge, fully-formed, out of Mike Olsen's head — as the goddess of wisdom emerged out of Zeus's, though with less wisdom and more litigiousness. Online test proctoring has a history; it has an ideology. Indeed, test-taking itself is a cultural construct, embedded in longstanding beliefs and practices about schooling — what we test, how we test, "academic integrity," cheating, and so on. And the whole idea that there is rampant cheating is a nifty narrative, one that's been peddled for ages and is truly the philosophical cornerstone for how much of ed-tech gets designed.

Surveillance didn't just appear in educational institutions with the advent of the computer. So many of the deeply held beliefs and practices of education involve watching, monitoring, controlling. Watching, monitoring, controlling children and adults, students and teachers. And I'd argue that watching, monitoring, and controlling are fundamentally behaviorist practices.

We can think about the ways in which education technology has emerged from workplace technologies — as well as the ways in which schools adopt the language and culture of "productivity" — in order to control our bodies, our minds, our time, our behaviors. But much of the labor of education is reproductive labor — teaching and learning as the reproduction of knowledge and culture, teaching and learning as acts of care. What does it mean to bring surveillance and behaviorism to bear down on reproductive labor?

Wired ran one of those awful "future of work" stories last week. Something about how too many meetings make us unhappy and unproductive and therefore artificial intelligence will "optimize" all this by scheduling our appointments for us, by using facial recognition and body language-reading software to make sure we're paying attention and staying on track and maximizing our efficiency. The underlying message: more surveillance will make us better workers.

It's not hard to see how this gets repackaged for schools.

But I'm interested not just in this question of productive labor but in reproductive labor too. These are loosely formed ideas, so bear with me if they're not fully baked. What does it mean to automate and algorithmically program reproductive labor? What does it mean, furthermore, when the demands of the workplace and the demands of school enter the site more commonly associated with reproductive labor — that is, the home?

Of course, we have for almost one hundred years now been developing technologies for — and telling stories about the necessity of — automating the home. That's the other influence that joins enterprise tech: consumer tech.

Let me tell you a quick story that bridges my book Teaching Machines with the next project I'm working on — this one on the history of the baby monitor. (I mean, we have a thing called a "baby monitor," we have bought this product for almost a century now, and all of a sudden folks want to talk about how children are so heavily surveilled — good lord, read some history.) Anyway, let's talk about B. F. Skinner's efforts to develop a behaviorist technology for child-rearing: the "air crib."

Let's talk about Skinner because we should recognize that so much of surveillance in ed-tech comes from his behaviorist bent — his belief that we can observe learning by monitoring behavior and we can enhance learning by engineering behavior.

Skinner fabricated a climate-controlled environment for his second child in 1944. First called the "baby tender" and then – and I kid you not – the "heir conditioner," the device was meant to replace the crib, the bassinet, and the playpen. Writing in Ladies Home Journal one year later, Skinner said,

When we decided to have another child, my wife and I felt that it was time to apply a little labor-saving invention and design to the problems of the nursery. We began by going over the disheartening schedule of the young mother, step by step. We asked only one question: Is this practice important for the physical and psychological health of the baby? When it was not, we marked it for elimination. Then the "gadgeteering" began.

The crib Skinner "gadgeteered" for his daughter was made of metal, larger than a typical crib, and higher off the ground – labor-saving, in part, through less bending over, Skinner argued. It had three solid walls, a roof, and a safety-glass pane at the front which could be lowered to move the baby in and out. Canvas was stretched across the bottom to create a floor, and the bedding was stored on a spool outside the crib, to be rolled in to replace soiled linen. It was soundproof and "dirt proof," Skinner said, but its key feature was that the crib was temperature-controlled: save the diaper, the baby was kept unclothed and unbundled. Skinner argued that clothing created unnecessary laundry and inhibited the baby's movement and thus the baby's exploration of her world.

This was a labor-saving machine — see, we even talk about reproductive labor in terms of productivity. Skinner boasted that the air crib meant it would take only "about one and one-half hours each day to feed, change, and otherwise care for the baby." Skinner insisted that his daughter, who stayed in the crib for the first two years of her life, was not "socially starved and robbed of affection and mother love." He wrote in Ladies Home Journal that

The compartment does not ostracize the baby. The large window is no more of a social barrier than the bars of a crib. The baby follows what is going on in the room, smiles at passers-by, plays "peek-a-boo" games, and obviously delights in company. And she is handled, talked to, and played with whenever she is changed or fed, and each afternoon during a play period, which is becoming longer as she grows older.

This invention never caught on, in no small part because the title of that Ladies Home Journal article – "Baby in a Box" – connected the crib to the "Skinner Box," the operant conditioning chamber that he had designed for his experiments on rats and pigeons, thus associating the crib with the rewards and pellets that Skinner used to modify these animals' behavior in his laboratory. Indeed, Skinner described the crib's design and the practices he and his wife developed for their infant daughter as an "experiment" – a word that he probably didn't really mean in a strict scientific sense but that possibly suggested to readers that this was a piece of lab equipment, not a piece of furniture suited for a baby or for the home. The article also opened with the phrase "in that brave new world which science is preparing for the housewife of the future," and many readers would have likely been familiar with Aldous Huxley's 1932 novel Brave New World, thus making the connection between the air crib and Huxley's dystopia in which reproduction and child-rearing were engineered and controlled by a techno-scientific authoritarian government. But most damning, perhaps, was the photo that accompanied the article: little Deborah Skinner enclosed in the crib, with her face and hands pressed up against the glass.

Skinner's call to automate child-rearing also coincided with the publication of Dr. Benjamin Spock's The Common Sense Book of Baby and Child Care, which rejected the behaviorist methods promoted by psychologists like John B. Watson — strict, data-driven timetables for feeding and toilet training and so on — and argued, contrary to this sort of techno-scientific endeavor, that mothers — and it was mothers — should be more flexible, more "loving," more "natural."

Now, there are plenty of problems with this naturalized domesticity, to be sure — a naturalized domesticity that is certainly imagined as white, affluent, and American. And some of these are problems that the teaching profession, particularly at the K-12 level, still wrestles with today — the problems of white womanhood.

The air crib, psychologists Ludy Benjamin and Elizabeth Nielsen-Gammon argue, was viewed at the time as a "technology of displacement" – "a device that interferes with the usual modes of contact for human beings, in this case, parent and child; that is, it displaces the parent." It's a similar problem, those two scholars contend, to that faced by one of Skinner's other inventions, the teaching machine – a concept he came up with in 1953 after visiting Deborah's fourth-grade classroom. Skinner's inventions both failed to achieve widespread adoption, they argue, because they were seen as subverting valuable human relationships – relationships necessary to child development.

But I'm not sure that the ideas behind either the teaching machine or the baby monitor were truly rejected. If nothing else, they keep getting repackaged and reintroduced — at home, at work, at school.

The question before us now, I'd argue, is whether or not we want behaviorist technologies — and again, I'd argue all behaviorist technologies are surveillance technologies — to be central to human development. Remember, B. F. Skinner didn't believe in freedom. If we don't want that, then we have to reject not just the latest shiny gadgetry and anti-cheating bullshittery; we have to reject over a century of psychotechnologies and pedagogies of oppression. That's a lot of work ahead for us.

But if we just bite off one chunk, one tiny chunk, let's make sure Proctorio is wildly unsuccessful in all its legal and its business endeavors.

Remember This Year


I have had "write year-in-review" on my To Do list for about a month-and-a-half now. But every day I ignore the task, hoping that I'll feel more like writing tomorrow. Tomorrow is the last day of this year, and I don't anticipate anything will change so I am going to try to type a few, very generic thoughts about what happened in ed-tech in 2020.

This reflection is not going to be as lengthy as in previous years. I don't have the heart right now. I can't revisit all the suffering this year has brought about. But there's one thing that I've learned penning these year-end posts for the last decade or so: not much changes. The themes are the same year after year after year: more surveillance, more inequality, more dismantling of public education by tech companies and by tech narratives about "the future of work" and so on. Even during a global pandemic — a year when I'm sure plenty of ed-tech evangelists will try to tell you that "everything changed" — what I think we have witnessed is an acceleration of certain trends that were already in motion rather than any major shift towards something new and different. More surveillance, more inequality, more dismantling of public education. "Cop shit."

In 2008, Clayton Christensen and Michael Horn gleefully predicted that, by 2019, half of all high school classes would be on the Internet. They were wrong, of course, and wrong by a long shot. And that all classes moved to the Internet in 2020 does not prove them right. That all classes moved online was not a triumph of online education, but rather a reflection of the steps schools had to take to prevent further loss of life and, no doubt, the result of the utter failure of leadership at all levels to prepare for the pandemic, let alone respond with compassion and care.

I'm sure that there will be many pronouncements about the failures of ed-tech this year. We've known for a very long time what works and what doesn't (although that's never stopped schools from investing heavily in the latest gadgets and gizmos with little attention to research). Unsurprisingly, the move to online education, facilitated by video-conferencing software and digital worksheets, hasn't been great — for teachers or students. Of course, face-to-face education wasn't so great for many teachers and students either.

It's been a cruel and terrible year — one that has changed many lives irrevocably. But will it change institutions? Will it change educational practices? I don't know. These callous monsters still demanded a college football season, so we know they'd rather see students and teachers die than make adjustments to tradition. (To revenue.)

I am certain that many education reformers and technology companies are hoping that they've managed to sink their claws securely into a fragile system, that crisis education becomes the permanent mode of operation. "There's no going back," they'll tell you. Again, I don't know. I think people long for a return to the Before Times and crave a Zoom-free life.

Then again, no one should want to go back — not to how things were.

Going forward, we have to build something better — not for the sake of the digital prophets; I cannot stress this enough: "fuck those guys." We must build something better for the sake of an equitable and sustainable future, for the sake of democracy. And that future cannot be oriented around "cop shit." And folks, that means that future cannot be oriented around most ed-tech.

In my work, I write a lot about ed-tech amnesia — the ways in which the histories of education and technology are forgotten, dismissed. What will we remember from 2020, and what will we forget? I often worry that we forget too much. As a result, we hold none of the monsters accountable, and so they return with new ventures, and we have to battle the bad ideas all over again. In my own small effort to fight the amnesia, I have written lengthy essays at the end of each year, detailing all that's happened, so that at least somewhere we have catalogued their names and misdeeds. Each year, I write, desperately hoping people will learn from the past.

But this year, I can't do it. I am too wrapped up in my own memories and my own grief. That said, this year, more than ever, I wonder and worry about what we are going to try to erase, telling ourselves and others that we must forget to move on from all this trauma. I don't believe that forgetting is the path to healing. But I admit, I can't see that path clearly from where I stand.


Pre-Order Teaching Machines


Teaching Machines is available for pre-order via the MIT Press website (and anywhere books are sold — consider supporting your local bookseller).

I spent a few days trying to revamp the Teaching Machines website — before deciding that the template I had was just fine. Now I'm in the process of updating all my various social media accounts with a new profile pic based on the cover.

I absolutely love the cover, by the way. I wanted this photo to be used, but I wasn't sure permissions could be arranged. (That is, I wasn't sure the copyright holder could even be found.) I'll be writing more in the coming days and weeks — all before the publication date of August 3 — about various stories in and not in the book. Why this particular photo matters to me is definitely one of those stories.

I am not on social media these days, which is going to make book promotion interesting. But hopefully I can get back in the routine of blogging more often — most likely on the Teaching Machines site for the foreseeable future, although perhaps on my own personal blog too. And at least those who pay attention to the particular bat signal of RSS will know I'm still around. Hack Education, however, remains on hiatus. (So update your feed readers accordingly.)

Book Birthday!

What Happened & What's Next


Hack Education, as perhaps you've noticed, has been on hiatus for a while. What with the pandemic, the death of my son, and the publication of Teaching Machines, I really couldn't continue to pay attention to the day-to-day nonsense of ed-tech. (The book, in fairness, did have me focused on some of the mid-20th century nonsense.) And after taking a long break from "current events," I am not quite sure I'm ready to face any of it again.

But I suppose I must. There is no safety net for freelance writers and independent scholars; no bereavement leave; no institutional support to help me get through tough times. I need to get back to work — that is, I need to start earning some money again. I have toyed with the idea of leaving the field altogether, letting this website go dormant permanently. I thought about looking for a job here in the Bay Area where I could pretend to be an instructional technologist or project manager or something along those lines. But I'm not sure who'd hire me.

Thanks to an invitation to speak to the UN — part of a research project on ed-tech and the privatization of school — I was reminded that it'd be an uphill battle to launch a brand new career in a different or even adjacent field, particularly at my age, particularly if I want to be recognized as a global expert.

But then again, do I? Expertise is kinda weird these days.

I know that — whatever I do — it has to involve some political bent, some fight for a better future for everyone. And while my work for the past couple of decades has been on education and technology as the means/obstacle to justice, I'm not sure that's a fight I care to engage in. The past year-plus — "pandemic schooling" — has demonstrated how much of what I've said and done and written about has been pretty fucking pointless: the bullshit goes on. Indeed, in the chaos, folks have doubled down on the very worst aspects of ed-tech, peddling the horrors of surveillance and control as yet another educational salvation.

Even though I haven't really been on social media for the past year, even though I haven't seen a single headline or read a single RSS feed about ed-tech, I bet I can tell you exactly what happened in 2021. I bet all the issues are the same as I've covered in my Year in Review essays in the past: surveillance, behaviorism, white saviorism, exploitation, extraction, control.

I can't keep throwing myself at the machine to the detriment of my well-being — mentally, physically, financially. It's time to do something a little different.

Rather than focusing my attention on the day-to-day ridiculousness of ed-tech, I'm going to continue to write essays on the history of ed-tech. I believe that these can help illuminate why schools and ed-tech take the shape they do today. Despite my passionate indifference to ed-tech as an industry, I do remain fascinated by the stories that we tell about the history of the future of education and how these narratives are often invented and wielded by those peddling educational reforms. The first essay, coming later this month, will be about one of the key technologies invoked this way: the school bell.

I also plan to start writing about some of the (histories of) technologies of health and "wellness." There's an important overlap here with ed-tech — not just due to the heavy reliance on pseudoscience. Much as, in the last few years, education reformers and entrepreneurs have sought to promote "social emotional learning" as a new avenue for kids' well-being — or rather, data collection and compliance — technologists and investors promote "wellness" for workers, parents, and citizens alike.

These non-ed-tech essays will appear on my personal website. All of my writing will go out on the HEWN newsletter — no longer the Hack Education Weekly Newsletter, but rather a monthly one. HEWN will continue to be free, but you can support my work via Patreon (or PayPal or Venmo) — or, you can hire me to speak to your class, conference, etc. I'll even talk about ed-tech, if that's what you really, really, really want. And pigeons. There will, of course, still be pigeons.

The History of the School Bell


I'd wager it's the most frequently told story about ed-tech — one told with more gusto and more frequency even than "computers will revolutionize teaching" and "you can learn anything on YouTube." Indeed, someone invoked this story just the other day when chatting with me about the current shape and status of our education system: the school bell was implemented to acclimate students for life as factory workers, to train them to move and respond on command, their day broken into segments of time dictated by the machine rather than the rhythms of pre-industrial, rural life.

It's a story that seems plausible. The bell is a technology associated with behavioral conditioning, after all — Pavlov and his salivating dogs. It is a technology that organizes the school, controlling both space and time. The bell sounds out the logic of the day: it's time for math. It's time for recess. It's time for reading. Finally, thank god, around 3 o'clock or so, it's time to go home. And at the end of the school year, when "school's out completely," as Alice Cooper sings, the children cheer with joy as the final bell rings, the bell and their voices warping as the classic song fades out — freed from, as John Taylor Gatto put it, "the barren experiences of school bells in a prison of measured time." (1)

It should come as no surprise to close observers of invented histories of education that Gatto would have something to say (in almost all his books, in fact) about the tyranny of the bell. He was, after all, one of the most influential promoters of the "school-as-factory" narrative: that the origins of mass schooling are inextricably bound to the need to reshape a rebellious farming nation's sons and daughters into a docile, industrial workforce. It's a powerful, influential story, sure, but it's a pretty inaccurate history.

The bell also invokes another popular tale, often repeated by the same folks: the one in which schools haven't changed in hundreds of years. Some metal contraption still bangs in the hallways while the rest of the world has moved on to — gesturing widely — the digital. Need proof? Why, one can point to the fact that Alice Cooper's 1972 hit remains a popular end-of-the-school-year anthem (as does "Another Brick in the Wall," which was also produced by Bob Ezrin, who urged Pink Floyd to add a children's chorus because it had worked so well on the Alice Cooper track. But I digress.) Surely this demonstrates how despicably moribund schools are, right?

Or at least, it shows how much we like stories about education that feel true — or maybe songs about education that make us feel like anti-establishment rebels.

Many institutions — not only schools and not only factories — have long used bells to mark beginnings and endings and important events. One can hardly point to the development of the mechanical clock — bound up as it was with the strict observance of prayer times at monasteries — and view the bell as a technology of liberation, no doubt. But one can perhaps reconsider citing John Taylor Gatto as one's sole source of education history. (The guy called the people enslaved by Thomas Jefferson his "employees," for crying out loud.)

Bells, primarily handbells, have been a technology of school from the outset, well before "the factory" that schools were purportedly modeled on. They were used, as were the bells in churches, to summon students to ye olde one-room schoolhouse for the beginning of the day.

Architecture and Ed-Tech

The Common School movement that nineteenth-century education reformer Horace Mann spearheaded (from roughly 1840 to 1880), advocating for the foundation of a public school system, did not just promote a common curriculum — an overt curriculum, that is, of reading, writing, and arithmetic, and a covert curriculum of punctuality and obedience. It also advocated for the construction of standardized school buildings, replacing the one-room schoolhouses in urban areas. (It's worth noting that, even into the 1910s, half of the students in the US remained enrolled in the country's 212,000 one-room schools.) (2) Mann recommended that communities invest in a bell for these buildings: "Where the expense can be afforded, every schoolhouse should be provided with a bell. If not the only mode, it is probably the best one for insuring punctuality; and the importance of punctuality can hardly be overstated." (3)

The architecture of the school building informs the pedagogy that takes place therein — the same goes for the technologies that are implemented inside them. And that includes the school bell.

But bells weren't simply — or even primarily — a technology of pedagogy as much as one for announcements and alarms. Although companies like the Standard Electric Time Company (founded in Massachusetts in 1884) sold synchronized clock and bell systems to schools (and yes, factories), an early function of the latter was not to mimic the rhythm of the workplace but rather to warn occupants about fire.

(Insurance Engineering issued a widely-cited report in 1913, decrying the condition of some 250,000 schools in the US as "built to burn." "In 1911," the Moline, Illinois Dispatch worriedly detailed, "the value of school and college buildings destroyed by fire approximated $3,000,000. Estimates of the frequency of fires are as high as ten a week."(4) The story, incidentally, blames the introduction of a new piece of ed-tech for many of the blazes: the film projector.)

Bells and Platoons

The ringing of the bell to signal the beginning and end of a class period, rather than just the beginning and end of the school day, is often traced to William Wirt, who became superintendent of schools in Gary, Indiana in 1908. Wirt, a student of progressive educator John Dewey, devised a system in which, when the bell rang, students would move from room to room for instruction not only in reading, writing, and arithmetic but also in music and shop, with time outdoors on a playground as well.

Generally, children had two ninety-minute periods or three hours a day in the basic subjects, and six thirty-minute periods in special subjects the other three hours of the school day. Obviously to function effectively this scheme required a high degree of administrative planning and precision timing in the moving of children. This was particularly true if the schools were large, as they were at Gary, where some of them included all twelve grades and eventually had as many as 3,000 students. (5)

Wirt called this the "work-study-play school," and Dewey praised the model in his 1915 book (co-written with daughter Evelyn) Schools of Tomorrow:

The question [Wirt] tried to answer was this: What did the Gary children need to make them good citizens and happy and prosperous human beings, and how could the money available for educational purposes supply all these needs? The industrial features of his schools will be taken up later, but it may be well to point out in passing that they were not instituted to turn out good workers for the steel company, nor to save the factories the expense of training their own workers, but for the educational value of the work they involved. In the same way it would be a mistake to consider the Gary schools simply as an attempt to take the unpromising immigrant child and turn him into a self-supporting immigrant, or as an attempt to meet the demand of an industrial class for a certain sort of training. (6)

That John Dewey insisted what became known as the Gary Plan wasn't designed to condition students to become factory workers should maybe count for something. Maybe? It doesn't mean, of course, that the system didn't have incredible appeal to those reformers in the early twentieth century who were determined to reshape public education into a more efficient endeavor. Indeed, as Callahan argues in his classic Education and the Cult of Efficiency, the Gary Plan was often showcased as an example of scientific management applied to schooling. But note: this was not because it trained children as workers but because it enabled a more efficient use of the school building. That schools sat empty at nights and on weekends, and certain classrooms sat unused during the day, was such a waste of money to those reformers, who argued that schools needed to be run more like businesses, indeed more like factories. ("Keep the students in the buildings year round, dammit!")

While this push for reform was largely administrative — a financial endeavor — there were concerns among parents and educators at the time that this system would have pedagogical, if not broader cultural, implications. In a 1924 article in the New Republic titled "The Factory System," Chicago teacher Margaret Haley decried the Gary Plan, also known as the "platoon school." (School bells as a technology training students for the military — that's a Douglas Noble argument right there.) Clearly, she argued, the Gary Plan was simply an effort to lower the cost of education by enrolling more students than classrooms could hold, "dumping" the excess onto the playground or into auditorium or cafeteria spaces, and rotating them rapidly through classrooms so that teachers would have hundreds of pupils per day. The platoon school was "the factory system carried into the public school, which needs only the closing-time whistle to make complete its identification with the great industrial plants!" (7)

Although the "platoon school" fell out of favor after 1930, with teachers and even some administrators decrying the Taylorization of education, the influence of efficiency-based reforms remained. Moreover, the adoption of the Carnegie Unit and the standardization of curricular requirements and teachers' workloads in the early twentieth century had led to a school schedule that appears, at least in some ways, platoon-school-like: the day divided into 45-minute class periods.

This is a Public Service Announcement

But by and large, through much of the twentieth century, schools did not ring bells to move students from class to class, from room to room. Automated school bells, along with public address systems, were available but were not widely adopted until after World War II. Indeed, it was not until well into the 1960s that many schools finally wired every classroom to an automated PA system so that the bell, rather than the teacher with an eye on the clock, dismissed class. (And in many communities, it was the PTA that led the fundraising for this bell equipment. You know the PTA, that bastion of bourgeois values so very committed to their children being trained by bells to become factory workers.)

[Archie comic strip.] (8)

That the ringing of the school bell was not part of some original and sinister strategy to habituate students for a life of labor doesn't mean the bell — like all technologies in or out of schools — didn't come with, and wasn't born from, certain ideologies. But the school bell has a different, more complicated history than the one the "factory model schools" story tells. It's worth understanding that history because doing so helps us understand the present and design the future. Schools haven't always or everywhere been modeled on factories, despite more than a century of efforts by business-minded reformers (still at it) to reshape them to that end. The bell hasn't always symbolized drudgery, and when it did signal compliance — and to be sure, it did — we need to think about what that expectation meant historically, not just rhetorically, as we describe or decry education today. And don't even get me started on the phrase I've heard in some ed-tech circles, "cells and bells."

The history of education technology — and my rationale for writing this series of essays on the topic — should help us see the possibility for alternatives. Those who want us to forget (or mis-remember) the past are very much committed to our giving up hope. Things weren't always this way; resistance is possible. That's all there's ever been, in fact — change — even with something as seemingly old and unchanging as the school bell.

(1) John Taylor Gatto, Weapons of Mass Instruction, 2009. p. 130.

(2) Jeffrey Lackney, "New Approaches for School Design." The Sage Handbook of Educational Leadership, 2011. p. 356.

(3) Horace Mann, "Supplementary Report on the Subject of Schoolhouses." (1838). Life and Works of Horace Mann, 1891. p. 486.

(4) "Local Schools in List of Dangerous." The Dispatch, Moline, Illinois, 16 April 1913.

(5) Raymond Callahan, Education and the Cult of Efficiency, 1962. p. 129.

(6) John and Evelyn Dewey, Schools of Tomorrow, 1915. p. 176.

(7) Callahan, p. 146.

(8) Archie, February 17, 1960.

Hope for the Future


This is the transcript of the keynote I gave today at Digifest. (Well, I recorded it a couple of weeks ago, but it was broadcast today, and I popped in for some "live" Q&A afterwards, where I was asked the obligatory "do you hate all ed-tech" question. And here I was, trying to be all sweetness and light...)

Thank you very much for inviting me to speak to you today. I'm happy to be here — and when I say "here," I do mean in my apartment, in sunny California. Personally, I'm not ready to travel yet, not remotely interested in getting back on an airplane — because let's be honest, air travel was already terrible. And I am pleased that, if nothing else good comes out of the pandemic, I can now do keynote talks in my bare feet and yoga pants.

Those of you who are familiar with my work are likely a bit taken aback to see that the title of my talk today contains the word "Hope." After all, I have been called "Ed-Tech's Cassandra," and like the Trojan priestess, I am known for my proclamations and predictions of doom. As such, I am not viewed as a messenger with good news. My talks, I'm often told, are "depressing" and make folks want to leave the field. (From time to time, when I do hear about some high-profile tech entrepreneur who has decided to turn away from "disrupting education" towards building "flying taxis" or whatever, I do like to think I played some small part in their frustrated disillusionment.)

The thing is, though, like Cassandra, I'm not wrong. I know that my work makes people feel uncomfortable, particularly when so much of the tech and ed-tech industry mantra rests on narratives promising a happy, shiny, better future. No one appreciates it when someone comes along and says "actually, if you wheel this giant horse — impressive as it looks — inside the gates, it'll destroy everything you love, everyone you care about. Don't do it."

And like Cassandra, it's exhausting to keep repeating "don't do it," and to have folks go right on ahead and do it anyway. I've been writing about ed-tech for over a decade now, cautioning people about the repercussions of handing over data, infrastructure, ideology, investment to Silicon Valley types. And for what?

And for what?

Trust me, I've spent a lot of the last few years stewing on that question — particularly as I have watched the awfulness of pandemic education (and its digital components) unfold: what has my work done? What change, what difference have I made?

For so many reasons, it's been hard not to sink into utter despair. I know many of us feel that way right now.

I don't think I'm exaggerating or even doomsaying to state that things don't look good. The pandemic. Still. Global climate change. (I say I'm glad to be in sunny California but it's actually the rainy season, and it hasn't rained in over a month.) Poverty. War and impending war. Inflation. Rising economic inequality and precarity. Drug overdoses. Genocide. Nationalism. White supremacy. Neoliberalism. Violence. Mass incarceration. Fanaticism. Surveillance technologies. Disinvestment from the public sphere. Anti-science rhetoric. A rejection of education. A rejection of knowledge. "Things are," as Malcolm Harris wrote on a sign during the Occupy Wall Street protests in Zuccotti Park back in 2011 (and as he titled his recent book), "fucked up and bullshit." We aren't in a good place right now — structurally or socially or individually or psychologically or physically. And it's hard to see how we're headed towards a good place either.

Many of us are grieving; many of us are traumatized. Many of our students are grieving; many of our students are traumatized. Most, dare I say. All of us, even. Deeply and profoundly so. To ignore this, to minimize this, to pretend as though "back to normal" can or even should happen is injurious. To prescribe a piece of technology as some sort of solution or fix to any of this is insulting. To give a keynote full of sanitized sunshine is just gross.

And yet, we cannot not respond to trauma. We cannot not address what we have been through with the pandemic specifically and with — sweeping hand gesture — everything else.

Denial won't work. But the worst response, perhaps, is to forget (arguably a form of denial, but one that happens at a social, not just a psychological, level).

And that's what many powerful forces want — it's what they always want. Because forgetting isn't just about sliding into complacency. Forgetting also leads to despair — and this is crucial to my argument today — and broadly speaking, this is why (I tell myself at least) my work matters. Contrary to those who dismiss my work as a critic as negative or destructive, criticism is generative; it can be one of many (small) acts of hope — criticism grounded in historical analysis is the antithesis of a numbing forgetfulness or an invented nostalgia.

One of the things I have written about quite a bit is the idea of an "ed-tech amnesia" — that is, there is a certain inattention to and erasure of the history of the field of education technology. And I don’t just mean forgetting or erasing what happened in the 1950s or 1980s (although I wrote a book on that). I mean forgetting what happened five, ten years ago — it's been ten years, incidentally, since Occupy Wall Street, and I fear we've forgotten that great push to hold the rich and the banks and the venture capitalists accountable. I specifically mean forgetting what happened during the last few years, during the pandemic.

Ed-tech amnesia.

Some of this is a result of an influx of Silicon Valley types in recent years — people with no ties to education or education technology who think that their ignorance and lack of expertise is a strength. (I use that phrase "Silicon Valley" less as a geographic marker than an ideological one.) And it doesn't help, of course, that there is, in general, a repudiation of history within the Silicon Valley framework itself. The tech industry's historical amnesia — the inability to learn about, to recognize, to remember what has come before — is deeply intertwined with the idea of "disruption" and its firm belief that new technologies are necessarily innovative and are always "progress." I like to cite, as an example, a New Yorker article from a few years ago, an interview with an Uber engineer who'd pleaded guilty to stealing Google's self-driving car technology. "The only thing that matters is the future," he told the magazine. "I don't even know why we study history. It's entertaining, I guess — the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn't really matter. You don't need to know that history to build on what they made. In technology, all that matters is tomorrow." (I could tie this attitude to the Italian Futurists and to fascism, but that’s a presentation for another day.)

There are other examples of this historical amnesia in ed-tech specifically, no doubt. Narratives about the "factory model of education." Stories that education hasn't changed in hundreds of years. Tall tales about the invention of the MOOC.

I want us to be vigilant about this amnesia because it has political implications. In the coming months and years, many people will want us to forget their mistakes; they will try to rehabilitate not just their bad ideas but their very reputations. By "many people," of course, I mean Ivanka Trump. Maybe Prince Charles. But I also mean any number of people in education and education technology, who've not only screwed up the tools and practices of teaching and learning over the past year or so, but who have a rather long history of bad if not dangerous ideas and decisions. These are people who have done real, substantive damage to students, to teachers, to public education. Again and again. We cannot forget this.

I worry we already have, of course.

If we forget, we cannot hold the perpetrators accountable for this damage. If we forget, we cannot see the ways in which we have been strong, resilient, even defiant in the face of it all.

"Memory produces hope in the same way that amnesia produces despair," the theologian Walter Brueggeman wrote. Rebecca Solnit cites him in her book Hope in the Dark, originally published in April 2004, at the start of the second term of President George W. Bush (speaking of people whose reputations have been rehabilitated), right as many of us despaired that his incompetence had led us to war, as well as to an expansion of governmental power and surveillance.

"Memory produces hope in the same way that amnesia produces despair." "It’s an extraordinary statement," Solnit writes, "one that reminds us that though hope is about the future, grounds for hope lie in the records and recollections of the past. We can tell of a past that was nothing but defeats and cruelties and injustices, or of a past that was some lovely golden age now irretrievably lost, or we can tell a more complicated and accurate story, one that has room for the best and worst, for atrocities and liberations, for grief and jubilation. A memory commensurate to the complexity of the past and the whole cast of participants, a memory that includes our power, produces that forward-directed energy called hope."

"Amnesia leads to despair in many ways," she continues. "The status quo would like you to believe it is immutable, inevitable, and invulnerable, and lack of memory of a dynamically changing world reinforces this view. In other words, when you don’t know how much things have changed, you don’t see that they are changing or that they can change."

The future is not pre-ordained. Yes, these are terrible times. Yes, the path forward seems incredibly challenging for those of us who believe in education, particularly public education, and who believe that education can be re-oriented away from exploitation and domination and towards justice.

Contrary to the popular story, there is no inevitability of a technological future of education. There's no inevitability to "online." There's not, despite how loudly ed-tech evangelists insist that "There's no going back now," so pleased that disaster capitalism has helped unlock the possibility they've longed for: one in which all teaching and learning is mediated through their digital platforms, in which labor unions are busted, in which public funding is eviscerated to make way for privatized profiteering.

Of course, there is no "back." Time doesn't work that way. And no one wants to go "back." That's a red herring, akin to thinking "Luddite" is an insult. The Luddites didn't want to go back; they wanted the future to be better. From where we are, there is always only forward. But the future is unwritten. Forward is open, and it is incumbent upon us to shape it.

"The best way to predict the future is to build it," computer scientist Alan Kay famously said. But computer scientists should not be alone in that building. (God forbid.)

I've long argued that the best way to predict the future is to issue a press release. Or the best way to predict the future is to complain about something in your weekly op-ed in The New York Times. By this, I mean that it's not always so much the inventing or the building as it is the storytelling that sways the direction the future flows.

Again, that's why stories of the past matter as much as do stories of the future — even when (and especially when) predicting the future.

One of my favorite science fiction authors, Octavia Butler, was once asked about this:

'So do you really believe that in the future we’re going to have the kind of trouble you write about in your books?' a student asked me as I was signing books after a talk. The young man was referring to the troubles I’d described in Parable of the Sower and Parable of the Talents, novels that take place in a near future of increasing drug addiction and illiteracy, marked by the popularity of prisons and the unpopularity of public schools, the vast and growing gap between the rich and everyone else, and the whole nasty family of problems brought on by global warming.

'I didn’t make up the problems,' I pointed out. 'All I did was look around at the problems we’re neglecting now and give them about 30 years to grow into full-fledged disasters.'

'Okay,' the young man challenged. 'So what’s the answer?'

'There isn’t one,' I told him.

'No answer? You mean we’re just doomed?' He smiled as though he thought this might be a joke.

'No,' I said. 'I mean there’s no single answer that will solve all of our future problems. There’s no magic bullet. Instead there are thousands of answers — at least. You can be one of them if you choose to be.'

You can be one of the answers if you choose to be.

"Writing novels about the future doesn’t give me any special ability to foretell the future," Butler said. "But it does encourage me to use our past and present behaviors as guides to the kind of world we seem to be creating. The past, for example, is filled with repeating cycles of strength and weakness, wisdom and stupidity, empire and ashes. To study history is to study humanity. And to try to foretell the future without studying history is like trying to learn to read without bothering to learn the alphabet."

Too many people who try to predict the future of education and education technology have not bothered to learn the alphabet — the grammar of schooling, to borrow a phrase from education historian Larry Cuban. That grammar includes the beliefs and practices and memory of schooling — our collective memory, not just our own personal experiences of school. That collective memory — that's history.

When I wrote my book Teaching Machines, I wanted to chronicle a longer history of ed-tech than the story that often gets told — a history that, strangely enough when you think about it, often begins and ends with the computer. Through this framework, computers — "the digital" — are teleological. And those who question technology are therefore aberrant because technology is all there ever was, is, or will be.

I wanted to show in my book that many of the ideas that get bandied about today as innovative or (god forbid) revolutionary have a long history. Educational psychologists, for example, have been building technologies to "personalize education" for over a century. To recognize this is to see the legacy of their work in our objects and in our practices today; it is to understand that if these objects were constructed, they can be challenged and dismantled. They are not natural. They are not inevitable. And it is to know too that there has always been resistance and refusal — successful resistance and refusal — to the vision of an automated education.

There has always been change — and that change has come from the popular power of students and teachers, not just the financial and political power of businessmen, always so desperate to center themselves in our stories of "transformation." We have always been strong. We have had to be resilient.

"Who controls the past controls the future. Who controls the present controls the past," as Rage Against the Machine sang in their 1999 song "Testify." OK, actually it's a quote from George Orwell's 1984, but hey. To control the past, we have to know our history. "The stories we tell about who we were and what we did shape what we can and will do," Rebecca Solnit argues.

And we can change the future. We have before. Never forget that. As my other favorite science fiction author Ursula Le Guin said, "any human power can be resisted and changed by human beings." And we must resist and we must change. We must believe we can. We must have hope.

A few years ago, I listened to a speaker who was quite critical — in passing, mind you — of hope. And she wasn't alone. There was so much energy channeled into that word when it became the slogan of a young Senator from Illinois and his starry-eyed Presidential campaign. After 8 years of Barack Obama in office and so many unfulfilled promises, so much disappointment (and of course the rise of a violent white nationalism in response), many people have lost hope. They've lost faith in hope. They're utterly disillusioned by it.

That speaker said that "you cannot counter structural inequality with good will." And I immediately thought of Antonio Gramsci, as one does. One of the greatest Marxist thinkers of the 20th century, Gramsci is best known for writing some 3000 pages of history during his imprisonment by the Fascist Italian government from 1927 until his death ten years later at age 46. Gramsci famously said, "I am a pessimist because of intelligence but an optimist because of will." Will, according to Gramsci, is part of a revolutionary praxis. It recognizes the social structure, and it helps us to move deliberately from thinking about to acting for radical change. Will is, for Gramsci, political and intellectual. Will is a strategy, or part of a strategy of struggle. Will is bound up in the politics of hope. Hope is bound up in the politics of will.

You all came to this event because, I'd wager, of will. Good will. Willpower. A will to change your own pedagogical practices. A will to change your institutions. Will is necessary, politically. I hope that you will consider how to tie that will to action, to collective action. You are not alone. I believe that you came to education too because you believe in the future. You must, to work in this field. Educators take part in the profound process and practice of engaging minds in change — intellectual transformation. Education straddles the past — "the curriculum" — and the future — individually and societally. Education is about what we learn today so we can be better tomorrow. Education is a practice of hope. You cannot be indifferent about the future and be an educator.

Another passage from Gramsci (who I put alongside Paulo Freire, Frantz Fanon, and bell hooks as one of the most important thinkers in education who's rarely recognized as such), from a 1917 letter, "Indifferenti":

I hate the indifferent. I believe that life means taking sides. One who is really alive, can be nothing if not citizen and partisan. Indifference is lethargy: it is parasitism, not life. Therefore, I hate the indifferent.

Indifference is the dead weight of history. Indifference plays an important role in history. It plays a passive role, but it does play a role. It is fatality; it is something that cannot be counted on; it is something that disrupts programmes, overturns the best made plans; it is that awful something that chokes intelligence. What happens, the evil that touches everyone, happens because the majority relinquish their will to it, allowing the enactment of laws that only a revolution can revoke, letting men rise to power who, later, only a mutiny can remove.

... I am "partigiano", alive, and already I hear, in the consciences of those on my same side, the throbbing bustle of the city of the future that we are building. And in it, the social chain does not weigh on the shoulders of only a few .... There is no-one watching from the sidelines while others are sacrificed, bled dry. I am alive, partisan. And, therefore, I hate those who do not take sides; I hate the indifferent.

Memory counteracts indifference. Memory counteracts despair. Memory creates the space for hope. Memory reminds us: change is possible. It urges us: change is necessary.

It will not be easy. It never is. Even having hope can be hard, let alone making change.

"Hope is a discipline," prison abolitionist Mariame Kaba reminds us. It is not an emotion. It is not even as Gramsci put it simply "optimism" — a belief that things will get better. Hope is work. You have to put energy and time into it. You have to practice, repeatedly. You have to keep at it, keep moving, keep pushing. No one else will free you or fix you — except us, collectively through our power to imagine and build a better future.

Hope is not in technology. Hope is in our humanity.

A better future for all of us, for all living creatures on this planet does not look like an app or a platform or a gadget. It does not look like an institution founded hundreds of years ago, desperate to cling to old hierarchies. It does not look like an institution founded more recently, desperate to re-inscribe new hierarchies. Join me in refusing the old world, and in refusing the future envisioned by the techno-elite. Our refusals can be small. Our actions might seem insignificant. But do not despair. We aren't alone in this — resistance is part of our legacy. We can make it our future. We can hope.

The End


A couple of weeks ago, I received an email from my friend Eli Luberoff, the founder and CEO of Desmos. It was news I'd been anticipating — dreading, really — for some time: the startup had been acquired. Amplify was buying its curriculum division; the calculator part would become a free-standing public benefit corporation. The subject line of Eli's email read "good news," and I don't mean to imply that it isn't a good deal for him, for his employees, for his investors, or for Desmos users. But for me, well, it was a sign of something else altogether. (That said, let's check back in a few years and see how this all has panned out, okay?)

For a long, long time, if anyone asked me if there was any ed-tech I liked — and I would get this question a lot, often asked as though it were some sort of "gotcha" — I'd reply in a heartbeat, "Desmos." I adore Eli; and Desmos has always had a great team, including, of course, the incredible Dan Meyer (who I also adore, even though I blame him whenever I choose the slowest check-out lane in the grocery store).

I loved that Desmos' free online graphing calculator subverted the $100+ graphing calculator racket — a racket controlled by a couple of manufacturers and a handful of standardized test companies.

But even more than that, I loved the spirit and culture of the company, which, despite providing an instrument for math, was not strictly instrumentalist. This is absolutely a rarity in ed-tech, where almost everything is touted for its supposed productivity, efficiency, time- and cost-savings, student or learning or behavior management. Better, cheaper, faster, smarter — those are the values that most folks in ed-tech like to tout. And yes, I'm sure plenty of teachers used Desmos that way. But that wasn't the intent of Eli or Dan, or even necessarily the design of the instrument, the graphing calculator. Kids made art with Desmos; kids made art with math; and with the Desmos curriculum, kids deliberated with and about math, a learning practice that runs counter to this firmly-held belief we have that math, unlike other fields of knowledge, is merely about getting a right or wrong answer and that the best way to develop and wield mathematical knowledge in school is to fill out worksheets as quickly as possible.

Desmos never bent its design or its trajectory, even in response to the most mundane usage, towards these common practices and pedagogies of ed-tech: "we can help students do their homework faster" or "we can help teachers automate their grading" or "check out our features that showcase some bullshit metrics that our investors like to see."

Now that the company has been acquired, I don't have an answer when someone asks me that "gotcha" question. You got me: "Nope. There's not a goddamn thing." And that certainly means it's time for me to step away from ed-tech for good.

I’ve already taken time away from this site to grieve the loss of my son. I’ve taken time away to write and promote my book. I’ve repeatedly told myself that I’m just tired from all of it — death, the pandemic, [gestures widely] etcetera — and that eventually my passion will return. But I don't think it's going to. It's time to move on to something else. I cannot, I will not be your Cassandra any more.

This site won't go away — I'll still pay for the domain for a while longer, at least — but the HEWN newsletter, the Patreon, and all Hack Education-related social media will. You'll be able to find my latest writing on my personal website. Remember blogging? Yeah. I'll do that for a while until I can figure something else out. I have to put this decade-long project to rest so that I can move on to something that doesn't consume me in its awfulness and make me dwell in doom.

The Return


Week in and week out, there is a litany of stories that, if I were paying attention to education technology, would prompt me to say “I told you so.” Why look, just in the last few days: coding bootcamps, pretty shady; Udemy, very shady. Google NotebookLM in the classroom, totally and completely fucked up. But the industry's cheerleaders keep cheering.

I’ve spent the last few years trying my best to ignore all this stuff. After Isaiah’s death, I simply could not handle more grief. I’m not sure how else to describe the emotions that I feel about education -- this deep, deep sadness about what feels like the surrender of any shared cultural belief in, any political commitment to, any financial investment in public education; a real despair about our handing teaching and learning over to the neoliberal elites, to the values of productivity, efficiency, individualism. I’d lost my son -- the future already felt dimmer for it. I didn’t have the energy to fight. I stepped away.

Since I’ve been gone, not much has changed -- other than that I’ve been proven right about just about every startup and gadget I warned folks about. Not that that’s stopped anyone from hyping the latest snake oil -- it’s AI this time around (again), I guess. In the quest for data and “optimization,” we continue to drain the humanity from institutions that, frankly, were never that humane in the first place (despite all the teachers and students who populate these institutions and who do, I firmly believe, care deeply about one another).

I’ve flailed around since leaving Hack Education dormant, thinking I’d just do something else instead of being “ed-tech’s Cassandra.” I’ve become an athlete for the first time in my life, and I thought perhaps I could write a similar sort of tech criticism about fitness and health as I did about education. As hard as it is to become an athlete at 50+, let me just say, it’s just as hard to develop expertise in a new field of knowledge.

So for this and a million other reasons (and a couple of really specific reasons -- some dangerous dumb shit I've seen folks blog about recently, truth be told), I’m returning to education technology criticism -- it’s what I know. It’s what I’m good at. (Or at least some people tell me that.) I lost my son, but I’ll be damned if I’ll go quietly and let the techno-libertarian fantasies strip the future from the rest of us.

I have just started working on a book on artificial intelligence and education. A little bit of history -- we have been building “intelligent tutors” for decades now. Indeed what we consider “intelligence” is wrapped up in the history of education and testing (and eugenics -- probably worth pointing out). A little bit of science -- what do we know about how humans learn, and is this anything akin to “machine learning”? (Or has computer science shaped how we conceive of the mind?) A lot of technology criticism and literary analysis -- why are these stories about robot teachers so compelling? My plan right now is a collection of essays akin to the sorts of provocations that I’d once give as a speaker. Monsters of Ed-Tech: Back from the Dead. Or something.

I don’t plan to update Hack Education regularly. And since we’ve fucked up the Web, I don’t even know if folks will see this update or notice that anything has changed here. But if you stumble across this blog post, know that I do send out a newsletter, Second Breakfast, and there I’ll share some of my thoughts on AI and bodies and brains. (No, it's not free or "open" -- because that whole thing, as AI is showing us, is an absolute joke.)

