
The Business of 'Ed-Tech Trends'


This is the second part of my much-abbreviated look at the stories that were told about education technology in 2018 – and, in this case, at the people who funded the storytellers.

When I first started working as a tech reporter, I assumed – naively – that venture capitalists were smart people who did thorough research before funding a company. They are, after all, typically investing someone else’s money. One should be conscientious, as such. Right? I assumed that they looked to see if the company could do what it promised – financially, technologically. I assumed they checked to see if the idea was sound, the founders trustworthy.

Ha.

One of the best books I read this year was Bad Blood by WSJ journalist John Carreyrou. It chronicles the rise and fall of Theranos, a company that promised to revolutionize the medical industry by running a complete slate of tests using only a drop of blood (rather than the more voluminous quantity of blood that can be frightening to have drawn). Its founder, Elizabeth Holmes, dropped out of Stanford to start the company when she was just 19. With a media blitz that included a popular TED Talk, Holmes steered the company to a valuation of nearly $9 billion, raising over $700 million in funding. There was just one problem: it was all a deception. The technology did not work. And the phony test results put people’s lives at risk.

In June, Holmes and her former boyfriend, Theranos president Sunny Balwani, were indicted on a number of charges, including conspiracy and wire fraud, and they could face time in prison. (I bet they won’t.)

John Warner made the connection between Theranos and ed-tech in an op-ed published on Inside Higher Ed, noting the parallels between the claims about companies poised to revolutionize education – claims often made by those without much experience or expertise in teaching and learning – and the claims that Holmes and her company made, claims that those with experience and expertise (biologists and chemists and medical engineers, for example) knew simply weren’t true.

Nevertheless, Holmes was able to tap into a network of powerful, wealthy conservatives, including George Shultz and Henry Kissinger and others connected to the Hoover Institution at Stanford. Networks matter. They matter far more to who gets funded than any of those naive ideas I had about good ideas or good businesses or good people being the key. Reporting in May, John Carreyrou noted that among those who’d lost the most money by investing in Theranos were the Walton family, Betsy DeVos, Rupert Murdoch, and Carlos Slim – all also major (conservative) funders of charter schools and ed-tech.

Bad blood indeed. It runs pretty thick.

Follow the Money


But hey, it was another record-setting year for ed-tech investment. Here’s the year by the numbers:

  • Investment dollars: $4.46 billion
  • Number of investment deals: 187
  • Average investment size: $26 million
  • Median investment size: $5.2 million
  • Number of acquisitions: 109
  • Number of spinoffs: 1
  • Number of mergers: 9
  • Number of IPOs: 4
  • Number added to the “ed-tech dead pool”: 10

All that money. All that activity.

A closer look reveals a lot of illness too, a lot of exploitation, a number of criminal convictions, and a lot of uncertainty about the shape of the future of education.

The Biggest Investments of 2018


The companies that raised the most money this year:

  • BYJU’s (tutoring): $540 million
  • VIPKID (tutoring): $500 million
  • Zuoyebang (tutoring): $350 million
  • Yuanfudao (tutoring): $250 million
  • 17zuoye (tutoring): $200 million
  • Peilian.com (music education): $150 million
  • DreamBox Learning (adaptive learning): $130 million
  • Zhangmen (tutoring): $120 million
  • Connexeo (school administration software): $110 million
  • DadaABC (English language learning): $100 million
  • Knowbox (tutoring): $100 million

(You can see the complete list of investments here.)

As far as I can tell, these are now the most well-funded ed-tech startups (that is, those education companies that have not gone public):

  • SoFi (student loans): $2.16 billion
  • VIPKID (tutoring): $825 million
  • CommonBond (student loans): $803.6 million
  • BYJU’s (tutoring): $784 million
  • DadaABC (English language learning): $608.9 million
  • Zuoyebang (tutoring): $585 million
  • 17zuoye (tutoring): $585 million
  • EverFi (“critical skills” training): $251 million
  • Yuanfudao (tutoring): $244.2 million
  • Coursera (online education): $210.1 million
  • Knewton (adaptive learning): $182.3 million
  • Age of Learning (educational apps): $181.5 million
  • DreamBox Learning (adaptive learning): $175.6 million
  • Udemy (skills training): $173 million
  • AltSchool (private school; learning management system): $172.9 million
  • D2L (learning management system): $165 million
  • Udacity (skills training): $160 million

All great investments, I'm sure.

Notable Investment Trends


The most well-funded types of education company this year were those that offered tutoring. Tutoring, to be clear, here mostly means test prep.

It’s worth noting, I think, that tutoring as the preferential method of instruction is deeply intertwined with the long history of computer-assisted instruction and “intelligent tutoring systems.” Tutoring is the cornerstone of technological fantasies about “personalized learning.” You’ll often hear its advocates invoke Benjamin Bloom’s 1984 study on “the 2 Sigma Problem” and claim that one-on-one tutoring is radically effective at improving student outcomes. The results of Bloom’s investigation are a little suspect; or at least, they’ve never been replicated. But both before the 1980s and since, many education technologists have been convinced that tutoring is the best form of instruction – far, far surpassing “traditional” classroom instruction – and that the only way we can reach the goal of one tutor per child is to use the computer as the tutor.

Investors in tutoring companies this year included Warburg Pincus, Goldman Sachs, Learn Capital, Y Combinator, the Omidyar Network, Sequoia Capital, TAL Education, Tencent, Google, and of course, the Chan Zuckerberg Initiative.

Not all of these tutoring companies rely on AI or adaptive teaching, although that is a big selling point for some of them. Many of them take advantage of the “gig economy,” using low-wage freelance workers (many of whom are teachers working a second job) as tutors. It’s “flexible, interactive, and fun,” sponsored content on Edsurge wants its readers to know.

Of the twenty-some-odd tutoring companies that raised funding this year, ten were Chinese. As you can see from the list above, these accounted for some of the largest funding rounds of 2018, and Chinese tutoring companies are now some of the most well-funded education startups.

Chinese tutoring company TAL Education also invested in Edsurge this year – so watch its message about “personalized learning” and tutoring – a message backed by the Chan Zuckerberg Initiative too – grow much louder, with little recognition of how tutoring exacerbates educational inequalities.

Bad Investments and Unhealthy Markets


The record level of investment dollars belies the health of the education sector. 2018 saw the lowest number of deals in recent history, as money concentrated in larger companies raising ever larger rounds. Although the number of acquisitions appeared strong, many of those were deals by private equity, which is often a sign that a disastrous restructuring is poised to take place. (One of the most active acquirers this year was the private equity firm Francisco Partners, which bought Discovery Education, Renaissance Learning, and MyON.)

Neither Pearson nor Blackboard, companies that were quite active in gobbling up startups a year or so ago, bought a single company this year. Microsoft bought three companies in 2018. ACT and TurnItIn both bought two. (You can see the complete list of acquisitions here.)

Perhaps the biggest acquisition of the year was Edmodo. Well, it wasn’t big financially. It was big as in bad. China’s NetDragon paid $137.5 million for Edmodo – only about $15 million of which was cash. Not a great outcome for a company that had raised over $77 million in venture funding and had been lauded as the future of social learning. “EdTech fails to pay, again,” The Financial Times chuckled.

Why, it was just a few years ago that Pearson sold off The Financial Times, wasn’t it, in the hopes that a restructured company could make ed-tech pay. How quickly we forget…

The industry’s fixation on “the future of learning” certainly seems to prevent many people from taking a good look at the past. And my, isn’t it always fun to revisit some of ed-tech’s darlings of yore.

In 2017, for example, Edsurge informed its readers that “MissionU Says It Can Replace Traditional College With a One-Year Program.” One year later, in May of this year, Edsurge informed its readers that MissionU would cease operations. (You can see a list of all the companies that joined the ed-tech “deadpool” here.) In 2012, Wired Magazine proclaimed that Udacity “could change higher learning forever.” TechCrunch asserted that Udacity would “end college as we know it.” Udacity’s founder Sebastian Thrun predicted that “in 50 years, there will be only 10 institutions in the world delivering higher education and Udacity has a shot at being one of them.” This year, Udacity ended its money-back guarantee. It upped the price of its “nanodegrees.” (Its MOOC competitor edX also announced this year that many of its courses would no longer be free.) Udacity laid off about a quarter of its staff mid-year. And its CEO stepped down.

Vive la MOOC révolution.

The Privatization of Education


Speaking of MOOCs, Phil Hill astutely observed earlier this year that many MOOCs and for-profit companies were altering their products and services so as to become online program management providers: “If At First You Don’t Succeed, Try To Be An OPM.”

The growing reliance on OPMs is part of a larger trend of outsourcing and privatization – and certainly not all of this occurs online, as the “coding bootcamp” trend underscores. (Among the coding bootcamps raising money this year: Trilogy Education, which raised $50 million, and Galvanize, which raised $25 million. Among the startups that closed their doors this year: coding bootcamp Learners Guild.)

The original OPM is probably the learning management system. Watch everyone (AltSchool, Summit Learning) try to become LMSes.

Another recommendation for a 2018 book on how corporations are reshaping public education to suit their needs: The University of Nike by Joshua Hunt.

Ed-Tech and IT Authoritarianism


If I were writing my usual, lengthy series on the year’s “trends,” I certainly would have devoted one article to the connection between ed-tech and “IT authoritarianism.” I did make a brief nod to this when I wrote an article analyzing Edsurge’s latest round of funding – a round that included two Chinese investors, test prep company TAL Education and JMD.edu.

China is hardly the only country we should keep an eye on here. It’s hardly the only country willing to surveil, track, and punish students. The US excels at this too, no doubt. We have children in cages, and ICE monitoring schools.

The connections between tech and authoritarianism became a lot more obvious this year, I’d hope.

Perhaps the murder of Washington Post contributor Jamal Khashoggi doesn’t seem like a “business of education technology” story. But it matters.

The murder of Khashoggi is a story that is deeply connected to many of the people in the tech and VC industries. It shows how little those in Silicon Valley care for democracy. Silicon Valley has overtly courted Saudi wealth – it has a “Saudi Arabia Problem,” as NYT op-ed writer Anand Giridharadas put it. “Technology companies can no longer turn a blind eye to the human rights abuses of one of their largest investors,” he argued. (MIT, on the other hand, says it’s too lucrative to sever ties.)

At least one major investment vehicle, SoftBank’s Vision Fund – worth about $100 billion – is funded in part by Saudi money. The fund has backed high profile startups like Uber and Doordash and Slack, among others. Among its ed-tech investments: WeWork, SoFi, and perhaps soon Zuoyebang.

When the Saudi Crown Prince Mohammed bin Salman toured the US this year, he hung out with Mark Zuckerberg, Jeff Bezos, Sergey Brin, and others. Among those involved in bin Salman’s $500 billion “smart city” project, Neom, were venture capitalist Marc Andreessen and former Uber CEO Travis Kalanick.

“Smart cities” are authoritarian cities. “Smart cities” are public spaces and public institutions, privatized. The data analysis and surveillance that are at the core of “smart cities” will surely include education data. “Smart cities” will be facilitated by ed-tech, through the ed-tech industry. This is the future of learning that plenty of folks are hawking.

They’ll tell us they’re doing it for our own good.

The Business of Education Philanthropy


Charity is no substitute for justice withheld, as St. Augustine famously stated. And philanthropy is no substitute for paying your taxes. But billionaires – tech billionaires and otherwise – all seem convinced that through their philanthropic efforts they can reshape education, reshape how education is funded and what is taught.

Two of the world’s richest men – Mark Zuckerberg and Bill Gates – were joined by more of the world’s richest men to that end this year. Jack Ma, China’s richest man, announced this fall he was retiring from Alibaba to focus on educational philanthropy. (Alibaba is already active politically in the US, this spring joining ALEC, the right-wing organization that “ghost-writes” state legislation to benefit its corporate members.) Jeff Bezos, the world’s richest man, announced this fall that he was creating a $2 billion fund to address homelessness and to start a chain of “Montessori-inspired” preschools. Like Amazon, but for preschool – “the child will be the customer,” Bezos wrote in his statement.

Bezos was hardly the only person interested in investing in early childhood education. As Rachel Cohen argued in response to his announcement, “preschool is a particularly appealing area for those who like conceptualizing problems in terms of market potential.” The child will be the customer; the child will be the investment, if you will.

Startups providing preschool-related ed-tech services were also popular investments, raising money this year from the likes of the Omidyar Network, Andreessen Horowitz, and the Chan Zuckerberg Initiative. Wonderschool, for example, took in $22 million in funding; Brightwheel took in $21 million.

Other areas of investment for the Chan Zuckerberg Initiative in 2018 – that is, in addition to tutoring and preschool management software: career placement software, English language learning software, and financial aid management software. (The one ed-tech spinoff this year was the Summit Learning management system, which spun out of the Summit Public Schools charter chain – but not too far out of CZI.) CZI also made a number of grants, which Chalkbeat, a CZI grant recipient itself, helped to identify – the venture philanthropy company remains quite opaque about where its dollars go.

The lovely thing about these philanthropists is how they fund education journalism to tell the stories they want folks to hear. It’s only later that those journalists say “oh damn, looks like that reform didn’t work out, eh?” But the great thing about being a billionaire tech philanthropist, I guess, is never having to say you’re sorry. Or rather, you never have to actually mean it.

I mean, who’s gonna hold you accountable?!


What's Next for Hack Education


I’m making some major changes to Hack Education this year – and no, not because I’ve received venture capital and my investors have told me I have to rewrite the publication’s mission statement again. That’s a different site you’re thinking of…

For a good portion of the year, Hack Education will go dormant. (The archives, I would note, are rich here. Read up.) I’m no longer going to write the Friday “Hack Education Weekly News,” my chronicle of all the education, technology, and ed-tech news of the week. I’m ending that series because I’m no longer going to write my lengthy series on the year-in-review either. Not in 2019 at least, as I have to focus on Teaching Machines (the draft is due in April and the final manuscript due in September).

Once the book is done, I plan to return to writing on this site and to publishing essays analyzing what’s happened and happening in education and technology. Many of these will be historical, since it seems that so many people who work in ed-tech are utterly ignorant (even purposefully ignorant) of the field’s rich (but troubling) history. Some of these will respond to the latest happenings, but I want the work I do here on Hack Education to be less reactive to the bad stuff that gets trotted out daily as “innovation.” I don’t want to have to read the ed-tech publications that make me so deeply sad about what’s being peddled and promoted and so deeply disappointed about people’s willingness to read and write and buy that crap. The way we get better and smarter will not be through ed-tech’s marketers and hucksters.

I am hoping that the shift away from compiling all the goings-on for that weekly news round-up will give me more time to think deeply and critically about education and technology, instead of perpetually being enraged by how many terrible and silly things are marketed as “solutions” by folks who just want to sell a product or service – some aware, some unaware that their very well-funded load of futurist bullshit is pretty damn dystopian.

This site will remain vulture free. No investors, philanthropists, corporate sponsors, or advertisers. Just pigeons.

What's Next for Hack Education, Part 2


Last week marked the ninth anniversary of Hack Education. It seems a little strange to even make note of the occasion as it’s been almost six months since I’ve written anything here.

I finally completed the draft of Teaching Machines on 31 May. It still needs some work – some editing, some peer review, some image permissions, and such – but I’m pretty pleased with how it’s come together. It’s not at all the book that I initially imagined I’d write when I first thought, so many years ago now, that I’d look at the long history of attempts to automate education.

In order to write the book, I put a lot of the work I’d regularly done on this website “on pause” – the weekly round-up of education news, the newsletter, the end-of-year “trends” articles. And all along the way, I have been thinking a lot about what exactly I plan to do here when Teaching Machines is done. I don’t really have a good answer yet.

As it currently stands, I do not plan on resuming the weekly round-up I’d write every Friday; and as such, I don’t think the year-end articles will continue either – at least not in the (long) form they’ve taken. I will, however, start writing HEWN again. (Soon.)

And I will write articles here on Hack Education – I have a lot of ideas and stories to tell. But what I refuse to do is pay quite such close attention to the day-to-day bullshit of the ed-tech industry – or, more broadly, to the day-to-day bullshit of social media. I really can’t tell you how much happier I am – despite all the stress of researching and writing a book – by being offline and tuned out.

I can promise you, ed-tech is not changing faster than it’s ever changed before. The people who want you to think that are hoping that they can wind you up and spin you ’round and knock you off center so that you’ll be less able to stand firm and resist their “disruption.”

Unsupported


I am making some changes to how this site earns money – or rather, Hack Education will not solicit any donations from readers for the foreseeable future. I am closing my Patreon account, and while I cannot cancel any recurring payments that are coming via PayPal – you have to do that – starting today, I will refund these as they come in.

I cannot in good conscience continue to take people’s cash as there is really nothing happening on this site. I am not quitting Hack Education or “ed-tech,” although god knows I’ve thought about it.

I still need to earn a living, of course. For now, I plan to do that through speaking and writing. I just don’t know how much of those words will appear here on this website.

The Teaching Machine Imaginary (And an Update on the Book)


Cross-posted to the Teaching Machines website

I submitted the draft of Teaching Machines to my editor at MIT Press at the end of May. I haven’t looked at the manuscript since, although I’ve been mulling over various parts and passages in my head almost nonstop. Over the course of the past few weeks, I’ve read a handful of books on writing and editing – Benjamin Dreyer’s Dreyer’s English, Stephen King’s On Writing, and Susan Bell’s The Artful Edit – but this week I turn my attention to revising the book. I’ve printed out all the pages and I’ll start making edits by hand – my preferred method. (It’s my preferred method of writing too, but I do recognize that it is very, very slow. And so I type.)

One of the realizations I’ve had recently, having chatted casually with various non-ed-tech professionals about the book – my eye doctor, for example, and my hair stylist – is that I need to have much richer descriptions of what the teaching machines actually look like and what they actually do. (Yes, yes, yes. This is obvious.)

My elevator pitch for the book typically goes something like this: “It’s the story of the psychologists who, in the mid-twentieth century, built machines – not computers, before computers – that they claimed would automate and personalize education.”

That verb “claimed” makes it sound like this was a promise unfulfilled. And to a certain extent, that’s true. Teaching machines never sat on every desk in every classroom in America. But their proponents would insist that, when and where these devices were used – and they were used at home, at school, by the military, by manufacturers – they did indeed enable people to learn at their own pace without the need for human instructors.

There is a gulf between what folks say education technology can do, of course, and what it actually does – as much today as in the 1950s and 1960s. And there’s a gulf too between what folks hear education technology can do, based on all the slogans and marketing, and what they imagine that actually means.

“When you say ‘teaching machines from the 1960s’,” one person recently told me, “I picture the robot teacher in The Jetsons.” (I’m never sure if The Jetsons is meant to reflect “the future” as much as a nostalgia for an invented future past – I’m inclined to think it’s always been the latter, even when the show first aired. It is, in many ways, incredibly reactionary.)

The Jetsons did appear on primetime television during the peak of the teaching machine craze, but Mrs. Brainmocker, Elroy’s robot teacher (who is, I must presume by her title, a married robot teacher), appeared in just one episode – the last one of its 1960s incarnation – and only quite briefly at that.

In “Elroy’s Mob,” a scene at the Little Dipper School opens with Elroy talking through a math problem written on the blackboard. His answer is gibberish: “…And eight trillion to the third power times the nuclear hypotenuse equals the total sum of the trigonomic syndrome divided by the supersonic equation.”

“Now one second while I check over your answer,” Mrs. Brainmocker responds, rapidly clicking on the panel of buttons on her chest. “Boink!” A slip of paper emerges from the top of her (its?) head. “Absolutely correct, Elroy,” she reads. “You really know your elementary arithmetic.” As she begins to gush about what a pleasure it is to teach students like him, she starts to stutter. “I’ve got a short in one of my transistors,” she apologizes to the class.

Mrs. Brainmocker was, of course, more sophisticated than the teaching machines that were peddled to schools and to families at the time. These machines couldn’t talk. They couldn’t roll around the classroom and hand out report cards. Nevertheless, Mrs. Brainmocker’s teaching – her functionality as a teaching machine, that is – is strikingly similar to the devices that were available to the public. Mrs. Brainmocker even looks a bit like Norman Crowder’s Autotutor, a machine released by U.S. Industries in 1960, which had a series of buttons on its front that the student would click on to input her answers and which dispensed a read-out from its top with her score.

Teaching machines and robot teachers were part of the Sixties’ cultural imaginary. But that imaginary – certainly in the case of The Jetsons – was, upon close inspection, not always particularly radical or transformative. The students at Little Dipper Elementary still sit at desks in rows. The teacher still stands at the front of the class, disciplining students who aren’t paying attention. (In this episode, that’s school bully Kenny Countdown, who’s watching the one-millionth episode of The Flintstones on his TV watch.) There were other, more sweeping visions of the future of teaching machines in the late 1950s and early 1960s – Simon Ramo’s “A New Technique of Education,” for example, that featured pervasive surveillance, television-based instruction, and “push-button classes.” But it’s worth underscoring that what gets touted excitedly as “the future of education” and what gets absorbed into the cultural imaginary about that future are very rarely all that different from the present. And once you look closely at the technologies in question and their associated practices in these futures, you’ll find that they’re very rarely that exciting. Digital worksheets and the like. Animated drill-and-kill.

Proponents of teaching machines in the 1950s and 1960s were quite aware that some of the excitement for their inventions was bound up in the novelty. Students responded enthusiastically to the new devices – but would that last? (A familiar concern to this day.)

In order to help write some prose about teaching machines, I recently bought one off eBay – a Min/Max II manufactured by Teaching Machines Inc., along with a “programed” course on electricity.

The Min/Max was larger than I’d imagined – even though I’d read pages and pages of descriptions of the machine and thought I knew what to expect: about 18 inches long and 10 inches across. It was bulky, but as it was made from plastic, not particularly heavy. Even so, I had a hard time using it on my lap and it took up a lot of space on my desk. The only mechanical part of the machine: dials on each side used to advance the paper-based programming materials with a roller mechanism similar to a typewriter’s. The lid lifts to insert those papers – a max of 100 sheets, the Min/Max cautions.

The electricity course, for its part, contains 150 pieces of paper, printed on both sides, each side with about 5 “frames” of instructional content. I slid about half the papers into the machine, and spun the knob until the first few questions appeared in the clear plastic window at the top. The first few frames introduce the student to programmed instruction, explaining how to read the question, write the answer in the blank space given, then push the paper up until the answer can be checked. “The steps in this program are fixed so that you should be right most of the time,” the instructions explain. The course then gives a couple of sample questions that demonstrate the format. “If the answer to a question is a word to be filled in the blank, it is shown with a line like this ______. George Washington was the first ______ of the U.S.”

The first electricity question: “All matter is made of molecules (say MAHL-e-kules). Wood is made of molecules. Water is made of ______.” (The answer: molecules.) The course moves painfully slowly from there. It isn’t until question 9 that we get to atoms, question 17 that we get to electrons. I was bored well before then. Even with the promise that I could “move at my own pace,” that pace was necessarily slowed by the drudgery of the small steps in each frame. And I still had 1436 frames to go.

I want to be clear here: this drudgery isn’t simply a result of the teaching machine. It is tied to “programmed instruction,” to be sure. (That method of designing educational materials, and not the hardware, is one of the most important legacies of the teaching machine movement.) But it’s also simply (and sadly) part of the drudgery of schoolwork – drudgery that, as the children’s book Danny Dunn and the Homework Machine (1958) observed, was unlikely to be eliminated by computers. In fact, as Danny and his friends found out, students would just be expected to do more dull work more quickly.

The challenge, I think, is to express to readers this drudgery not only in contrast to the fantasies of shiny and efficient teaching machines – stories about robot teachers or otherwise – but also as the same sort of drudgery that today’s ed-tech dictates. Calling it “personalized learning” doesn’t make today’s computer-based instruction any more exciting. I promise you.

The History of the Future of the 'Learning Engineer'


A couple of years ago, MIT released a report titled “Online Education: A Catalyst for Higher Education Reforms.” Among its recommendations was to “support the expanding profession of the ‘learning engineer’” – a person who possesses a “knowledge base in the learning sciences, familiarity with modern education technology, and an understanding of and practice with design principles,” but who also has a deep background in a specific academic discipline. The learning engineer would be a “new breed of professional.”

The report prompted a flurry of articles about the current breed, if you will, of professional – those whose job titles include “instructional designer” and “instructional technologist” and whose degrees are in “education technology,” “curriculum and instruction,” and the like. Was “learning engineer” simply a rebranding? A way to confer power and prestige on the profession by associating it with the broader field – the lucrative field – of engineering? Or by associating it with elite, engineering-oriented institutions like MIT, Stanford, and Carnegie Mellon?

The MIT report credited Herbert Simon with coining the phrase “learning engineer,” citing an excerpt from his 1967 article “The Job of a College President” which appeared in the American Council on Education’s The Educational Record.

The learning engineers would have several responsibilities. The most important is that, working in collaboration with members of the faculty whose interest they can excite, they design and redesign learning experiences in particular disciplines. […] In particular, concrete demonstrations of increased learning effectiveness, on however small a scale initially, will be the most powerful means of persuading a faculty that a professional approach to their students’ learning can be an exciting and challenging part of their lives.

Simon was not, as the title of his article might suggest, a college president; he was a professor at Carnegie Mellon. But “like most faculty members,” he wrote in its opening sentence, “I have a vast experience of offering advice to college presidents – advice usually unsolicited, and often unheeded.” The advice in the article centered on making the operation of colleges more “professional,” as Simon accused them of being run by amateurs – by administrators who knew nothing about management and by professors who knew nothing about teaching or learning. “There is no simple path that will take us immediately from the contemporary amateurism of the college to the professional design of learning environments and learning experiences,” Simon wrote. “There are some obvious first steps along the way. The most important step is to find a place on the campus for a team of individuals who are professionals in the design of learning environments – learning engineers, if you will.” Learning engineers would, he hoped, transform higher education. “Perhaps the discipline-oriented professor will prove as obsolete as the horse and buggy,” he suggested, giving way to faculty members versed in learning processes and “learning machines.”

Simon would not have seen himself as a “discipline-oriented professor.” He worked across (even founding) a number of departments at Carnegie Mellon, conducting research in economics, public administration, computer science, artificial intelligence, and cognitive psychology. The latter areas certainly informed his conception of the learning engineer. Simon rejected constructivism and behaviorism as simplistic and scientifically inaccurate. For a glimpse, perhaps, of what Simon saw the learning engineer as building, one might look at the “intelligent” drill-and-kill math software he helped develop, later spun out of the university and into the company Carnegie Learning.

I’ve never been convinced that cognitive-informed ed-tech was as big a break from behaviorism as its proponents would have you believe. But that’s another story. Nevertheless, it should be no surprise that, despite being hailed as “first,” Herbert Simon was not the only person to have argued that education needed to be better engineered. If nothing else, “behavioral engineering” was precisely how B. F. Skinner described his whole damn project, teaching machines and otherwise.

What does it mean for the future of education technology to tell a story about its past that centers on Herbert Simon (and by extension, Carnegie Mellon, artificial intelligence, and cognitive psychology)?

The use of the word “engineering” with some sort of educational adjective has a long history, one that certainly predates the “cognitive turn” – which is about as much history as folks in ed-tech seem to want to learn. John Dewey wrote about “education as engineering” in 1922, for crying out loud, although in fairness, in his short essay on the subject, Dewey admitted that “there is at present no art of educational engineering.” “There will not be any such art until considerable progress has been made in creating new modes of education in the home and school,” Dewey argued, because one can’t actually have an art of engineering until after one has sufficiently experimented and built something. As such, demanding that new educational practices adhere to an orthodoxy of science – “premature science” – “does endless harm.”

Others throughout the twentieth century have been much more jubilant about engineering’s potential to reshape education.

In 1945, W. W. Charters, the head of the Bureau of Educational Research at Ohio State University, asked “Is There a Field of Educational Engineering?” Charters contended that curriculum design was a form of engineering, and he argued that the phrase “educational engineers” could describe those knowledgeable about statistics, psychology, and administration who were engaged in scientific exploration and problem-solving. (Statistics, psychology, and human capital management are the disciplinary features of educational engineering – to Herbert Simon as well.)

In 1957, Simon Ramo (“the father of the intercontinental ballistic missile”) called for the transformation of school into “a center run by administrators and clerks,” with only a very small number of highly skilled and intelligent teachers. Technological advancements would bring about a new profession, Ramo argued, the “teaching engineer,” “concerned with the educational process and with the design of the machines.”

Henry A. Bern, a psychologist for the Office of Naval Research, wrote about educational engineers in 1967, the same year as Simon’s article was published, observing the emergence of “powerful combines of industrial and publishing giants” – General Electric, IBM, Xerox, RCA, Raytheon, Random House, and Litton Industries – working to design teaching materials and teaching machines. “Whatever questions may exist about the consequences of these transactions,” Bern wrote, “there is little question about one consequence – the infusion of an engineering orientation and of engineers themselves into vital operations of education.”

Engineering is a social production, not merely a scientific or technological one. And educational engineering is not just a profession; it is an explicitly commercial endeavor. For engineers, as David Noble has pointed out, are not only “the foremost agents of modern technology,” but also “the agents of corporate capital.”

“Learning engineers,” largely untethered from history and purposefully severed from the kind of commitment to democratic practices urged by Dewey, are poised to be the agents of surveillance capital.

Education Technology and The Age of Surveillance Capitalism


The future of education is technological. Necessarily so.

Or that’s what the proponents of ed-tech would want you to believe. In order to prepare students for the future, the practices of teaching and learning – indeed the whole notion of “school” – must embrace tech-centered courseware and curriculum. Education must adopt not only the products but the values of the high tech industry. It must conform to the demands for efficiency, speed, scale.

To resist technology, therefore, is to undermine students’ opportunities. To resist technology is to deny students their future.

Or so the story goes.

Shoshana Zuboff weaves a very different tale in her book The Age of Surveillance Capitalism. Its subtitle, The Fight for a Human Future at the New Frontier of Power, underscores her argument that the acquiescence to new digital technologies is detrimental to our futures. These technologies foreclose rather than foster future possibilities.

And that sure seems plausible, what with our social media profiles being scrutinized to adjudicate our immigration status, our fitness trackers being monitored to determine our insurance rates, our reading and viewing habits being manipulated by black-box algorithms, our devices listening in and nudging us as the world seems to totter towards totalitarianism.

We have known for some time now that tech companies extract massive amounts of data from us in order to run (and ostensibly improve) their services. But increasingly, Zuboff contends, these companies are now using our data for much more than that: to shape and modify and predict our behavior – “‘treatments’ or ‘data pellets’ that select good behaviors,” as one ed-tech executive described it to Zuboff. She calls this “behavioral surplus,” a concept that is fundamental to surveillance capitalism, which she argues is a new form of political, economic, and social power that has emerged from the “internet of everything.”

Zuboff draws in part on the work of B. F. Skinner to make her case – his work on behavioral modification of animals, obviously, but also his larger theories about behavioral and social engineering, best articulated perhaps in his novel Walden Two and in his most controversial book Beyond Freedom and Dignity. By shaping our behaviors – through nudges and rewards, “data pellets,” and the like – technologies circumscribe our ability to make decisions. They impede our “right to the future tense,” Zuboff contends.

Google and Facebook are paradigmatic here, and Zuboff argues that the former was instrumental in discovering the value of behavioral surplus when it began, circa 2003, using user data to fine-tune ad targeting and to make predictions about which ads users would click on. More clicks, of course, led to more revenue, and behavioral surplus became a new and dominant business model, at first for digital advertisers like Google and Facebook but shortly thereafter for all sorts of companies in all sorts of industries.

And that includes ed-tech, of course – most obviously in predictive analytics software that promises to identify struggling students (such as Civitas Learning) and in behavior management software that’s aimed at fostering “a positive school culture” (like ClassDojo).

Google and Facebook, whose executives are clearly the villains of Zuboff’s book, have keen interests in the education market too. The former is much more overt, no doubt, with its Google Suite product offerings and its ubiquitous corporate evangelism. But the latter shouldn’t be ignored, even if it’s seen as simply a consumer-facing product. Mark Zuckerberg is an active education technology investor; Facebook has “learning communities” called Facebook Education; and the company’s engineers helped to build the personalized learning platform for the charter school chain Summit Schools. The kinds of data extraction and behavioral modification that Zuboff identifies as central to surveillance capitalism are part of Google and Facebook’s education efforts, even if laws like COPPA prevent these firms from monetizing the products directly through advertising.

Despite these companies’ influence in education, despite Zuboff’s reliance on B. F. Skinner’s behaviorist theories, and despite her insistence that surveillance capitalists are poised to dominate the future of work – not as a division of labor but as a division of learning – Zuboff has nothing much to say about how education technologies specifically might operate as a key lever in this new form of social and political power that she has identified. (The quotation above from the “data pellet” fellow notwithstanding.)

Of course, I never expect people to write about ed-tech, despite the importance of the field historically to the development of computing and Internet technologies or the theories underpinning them. (B. F. Skinner is certainly a case in point.) Intertwined with the notion that “the future of education is necessarily technological” is the idea that the past and present of education are utterly pre-industrial, and that digital technologies must be used to reshape education (and education technologies) – this rather than recognizing the long, long history of education technologies and the ways in which these have shaped what today’s digital technologies generally have become.

As Zuboff relates the history of surveillance capitalism, she contends that it constitutes a break from previous forms of capitalism (forms that Zuboff seems to suggest were actually quite benign). I don’t buy it. She claims she can pinpoint this break to a specific moment and a particular set of actors, positing that the origin of this new system was Google’s development of AdSense. She does describe a number of other factors at play in the early 2000s that led to the rise of surveillance capitalism: notably, a post–9/11 climate in which the US government was willing to overlook growing privacy concerns about digital technologies and to use them instead to surveil the population in order to predict and prevent terrorism. And there are other threads she traces as well: neoliberalism and the pressures to privatize public institutions and deregulate private ones; individualization and the demands (socially and economically) of consumerism; and behaviorism and Skinner’s theories of operant conditioning and social engineering. While Zuboff does talk at length about how we got here, the “here” of surveillance capitalism, she argues, is a radically new place with new markets and new socioeconomic arrangements:

…[T]he competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioral surplus: our voices, personalities, and emotions. Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. In this phase of surveillance capitalism’s evolution, the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’ In this way, surveillance capitalism births a new species of power that I call instrumentarianism. Instrumentarian power knows and shapes human behavior toward others’ ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces.

As this passage indicates, Zuboff believes (but never states outright) that a Marxist analysis of capitalism is no longer sufficient. And this is incredibly important as it means, for example, that her framework does not address how labor has changed under surveillance capitalism. Because even with the centrality of data extraction and analysis to this new system, there is still work. There are still workers. There is still class and plenty of room for an analysis of class, digital work, and high tech consumerism. Labor – digital or otherwise – remains in conflict with capital. The Age of Surveillance Capitalism, as Evgeny Morozov’s lengthy review in The Baffler puts it, might succeed as “a warning against ‘surveillance dataism,’” but largely fails as a theory of capitalism.

Yet the book, while ignoring education technology, might be at its most useful in helping further a criticism of education technology in just those terms: as surveillance technologies, relying on data extraction and behavior modification. (That’s not to say that education technology criticism shouldn’t develop a much more rigorous analysis of labor. Good grief.)

As Zuboff points out, B. F. Skinner “imagined a pervasive ‘technology of behavior’” that would transform all of society but that, at the very least he hoped, would transform education. Today’s corporations might be better equipped to deliver technologies of behavior at scale, but this was already a big business in the 1950s and 1960s. Skinner’s ideas did not only exist in the fantasy of Walden Two. Nor did they operate solely in the psych lab. Behavioral engineering was central to the development of teaching machines; and despite the story that somehow, after Chomsky denounced Skinner in the pages of The New York Review of Books, no one “did behaviorism” any longer, it remained integral to much of educational computing on into the 1970s and 1980s.

And on and on and on – a more solid through line than the all-of-a-suddenness that Zuboff narrates for the birth of surveillance capitalism. Personalized learning – the kind hyped these days by Mark Zuckerberg and many others in Silicon Valley – is just the latest version of Skinner’s behavioral technology. Personalized learning relies on data extraction and analysis; it urges and rewards students and promises everyone will reach “mastery.” It gives the illusion of freedom and autonomy perhaps – at least in its name; but personalized learning is fundamentally about conditioning and control.

“I suggest that we now face the moment in history,” Zuboff writes, “when the elemental right to the future tense is endangered by a panvasive digital architecture of behavior modification owned and operated by surveillance capital, necessitated by its economic imperatives, and driven by its laws of motion, all for the sake of its guaranteed outcomes.” I’m not so sure that surveillance capitalists are assured of guaranteed outcomes. The manipulation of platforms like Google and Facebook by white supremacists demonstrates that it’s not just the tech companies who are wielding this architecture to their own ends.

Nevertheless, those who work in and work with education technology need to confront and resist this architecture – the “surveillance dataism,” to borrow Morozov’s phrase – even if (especially if) the outcomes promised are purportedly “for the good of the student.”

Ed-Tech Agitprop


This talk was delivered at OEB 2019 in Berlin. Or part of it was. I only had 20 minutes to speak, and what I wrote here is a bit more than what I could fit in that time-slot. You can find the complete slide-deck (with references) here.

I am going to kick the hornet's nest this morning.

Before I do, let me say thank you very much for inviting me to OEB. I have never been able to make it to this event -- it's always coincided with the end-of-the-year series I've traditionally written for Hack Education. But I've put that series to rest while I focus on book-writing, and I'm thrilled to be able to join you here this year. (I'm also thrilled that the book is mostly done.)

I did speak, a few years ago, at the OEB Midsummit event in Iceland. It was, I will confess, one of the strangest experiences I've ever had on stage. A couple of men in the audience were obviously not pleased with my talk. They didn't like when I admonished other speakers for using the "Uber for education" catchphrase, or something. So they jeered. I can't say I've ever seen other keynote speakers at ed-tech events get heckled. But if nothing else, it helped underscore for me how the vast majority of ed-tech speakers give their talks with a righteous fury about today's education system that echoes the school-as-factory scene in Pink Floyd's The Wall -- a movie that is 40 years old, mind you -- while remaining starry-eyed about a future of education that will be (ironically) more automated, more algorithmic. And it underscored how too many people in the audience at ed-tech events want to chant "hey teacher! leave those kids alone" and then be reassured that, in the future, magically, technology will make everything happier.

I shared the stage that day in Iceland with a person who gave their talk on how, in the future, robots will love us -- how they will take care of us in our old age; how they will teach our classes; how they will raise our children. And while my criticism of exploitative "personalized learning" technology was booed, their predictions were cheered.

As this speaker listed the marvels of new digital technologies and a future of artificial intelligence, they claimed in passing that we can already "literally 3D print cats." And folks, let me assure you. We literally cannot.

But no one questioned them. No one batted an eye. No one seemed to stop and think, "hey wait, if you've made up this claim about 3D printing felines, perhaps you're also exaggerating the capabilities of AI, exaggerating our desires for robot love. Why should we trust what you tell us about the future?"

Why should an audience trust any of us up here? Why do you? And then again, why might you distrust me?

I've been thinking a lot lately about this storytelling that we speakers do -- it's part of what I call the "ed-tech imaginary." This includes the stories we invent to explain the necessity of technology, the promises of technology; the stories we use to describe how we got here and where we are headed. And despite all the talk about our being "data-driven," about the rigors of "learning sciences" and the like, much of the ed-tech imaginary is quite fanciful. Wizard of Oz pay-no-attention-to-the-man-behind-the-curtain kinds of stuff.

This storytelling seems to be quite powerful rhetorically, emotionally. It's influential internally, within the field of education and education technology. And it's influential externally -- that is, in convincing the general public about what the future of teaching and learning might look like, should look like, and making them fear that teaching and learning today are failing in particular ways. This storytelling hopes to set the agenda. Hence the title of my talk today: "Ed-Tech Agitprop" -- ed-tech agitation propaganda.

Arguably, the most powerful, most well-known story about the future of teaching and learning looks like this: [point to slide]. You can talk about Sugata Mitra or Ken Robinson's TED Talks all you want, but millions more people have watched this tale.

This science fiction creeps into presentations that claim to offer science fact. It creeps into promises about instantaneous learning, facilitated by alleged breakthroughs in brain science. Take Nicholas Negroponte, for example, the co-founder of the MIT Media Lab, who in his 2014 TED Talk predicted that in 30 years' time (which is, I guess, 25 years from now), you will swallow a pill and "know English," swallow a pill and "know Shakespeare."

What makes these stories appealing or even believable to some people? It's not science. It's "special effects." And The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Because of its speed? Its lack of teachers?

What does it mean in these stories -- in both the Wachowskis' and Negroponte's -- to "know"? To know Kung Fu or English or Shakespeare? It seems to me, at least, that knowing and knowledge here are decontextualized, cheapened. This is a hollowed-out epistemology, an epistemic poverty in which human experience and human culture are not valued.

I'm finishing up my book, as some of you know, on teaching machines. It's a history of the devices built in the mid-twentieth century (before computers) that psychologists like B. F. Skinner believed could be used to train people (much as he trained pigeons) and that would in the language of the time "individualize" education.

I am going to mostly sidestep a discussion about Teaching Machines for now -- I'll save that for the book tour. But I do want to quickly note the other meanings of the phrase -- the ones that show up in the "Google Alert" I have set for the title. In this formulation, it is the machines that are, ostensibly, being taught. Supposedly computer scientists are now teaching machines -- conditioning and training algorithms and machines, and as Skinner also believed, eliciting from them optimum learning behaviors. From time to time, my "teaching machines" Google Alert brings up other references to our fears that the students we are teaching are being reduced to robots, a suspicion about the increasing mechanization of students' lives -- of all of our lives, really. (The alert is never to the long history of instructional technology -- although my book will change that, I hope -- because we act like history doesn't matter.) Epistemic poverty once again.

This is my great concern with much of technology, particularly education technology: not that "artificial intelligence" will in fact surpass what humans can think or do; not that it will enhance what humans can know; but rather that humans -- intellectually, emotionally, occupationally -- will be reduced to machines. We already see this when we talk on the phone with customer support; we see this in Amazon warehouses; and we see this in adaptive learning software. Humans being bent towards the machine.

Why the rush to mechanize? Why the glee? Why the excitement?

I think the answer in part lies in the stories that we tell about technology -- "the ed-tech imaginary." "Ed-tech agitprop."

And when I say "we," I do mean we -- those of us in this room, those of us behind the microphones at events like this, those of us whose livelihoods demand we tell or repeat stories about the future of education.

Agitprop is a portmanteau -- a combination of "agitation" and "propaganda," the shortened name of the Soviet Department for Agitation and Propaganda which was responsible for explaining communist ideology and convincing the people to support the party. This agitprop took a number of forms -- posters, press, radio, film, social networks -- all in the service of spreading the message of the revolution, in the service of shaping public beliefs, in the service of directing the country towards a particular future.

To suggest that storytelling in ed-tech is agitprop is not to suggest that it's part of some communist plot. But it is, I hope, to make clear that there is an agenda -- a political agenda and a powerful one at that -- and an ideology to our technologies, that come intertwined with an incredible sense of urgency. "We must adopt this or that new technology" -- or so the story goes -- "or else we will fall behind, or else we'll lose our jobs, or else our children won't be able to compete." "We must adopt this or that new technology" -- or so the story goes -- "in order to bring about the revolution."

Although agitprop is often associated with the Soviet control and dissemination of information, there emerged in the 1920s a strong tradition of agitprop art and theatre -- not just in the USSR. One of its best known proponents was my favorite playwright, Bertolt Brecht. Once upon a time, before I turned my attention to education technology, I was working on a PhD in Comparative Literature that drew on Brecht's Verfremdungseffekt, on the Russian Formalists' concept of ostranenie -- "defamiliarization." Take the familiar and make it unfamiliar. A radical act -- or so these artists and activists believed -- one that would destabilize what has become naturalized, normalized, taken for some deep "truth." Something to shake us out of our complacency.

Perhaps nothing has become quite as naturalized in education technology circles as stories about the inevitability of technology, about technology as salvation.

One of my goals with this talk is then to "defamiliarize" these stories, to turn ed-tech agitprop back on itself. This is my theatre for you this morning. Politically, I want to make you rethink your stories. Practically, if nothing else, I want to make you rethink your slide decks. And that sucks, I guess. That'll be what gets me booed this time. But we must think more carefully about the stories that we are helping powerful corporations and powerful political forces to push. We need to stop and realize what's propaganda, what's mis- and disinformation, what's marketing, what's bullshit, what's potentially dangerous and damaging storytelling, what's dystopian world-building.

I want to run through a series of stories -- story snippets, really. It's propaganda that I'm guessing you have heard before and will hear again -- although hopefully not here at this event. I contend that these stories are based on no more science than is Neo's Kung-Fu lesson or Negroponte's Shakespeare pill.

Now, none of these stories is indisputably true. At best -- at best -- they are unverifiable. We do not know what the future holds; we can build predictive models, sure, but that's not what these are. Rather, these stories get told to steer the future in a certain direction, to steer dollars in a certain direction. (Alan Kay once said "the best way to predict the future is to build it," but I think, more accurately, "the best way to predict the future is to issue a press release," "the best way to predict the future is to invent statistics in your keynote.") These stories might "work" for some people. They can be dropped into a narrative to heighten the urgency that institutions simply must adapt to a changing world -- agitation propaganda.

Many of these stories contain numbers, and that makes them appear as though they're based on research, on data. But these numbers are often cited without any sources. There's often no indication of where the data might have come from. These are numerical fantasies about the future.

I don't have a lot of time up here, but I do want to quickly refute these stories -- you can certainly, on your own time, try to verify them. Look for the sources. Look at the footnotes. And then, of course, ask why this is the story that gets told, that gets repeated -- how does the story function? Benjamin Doxtdator has provided a great example of how to do this in an article titled "A Field Guide to 'jobs that don't exist yet'" in which he investigates the origins of the claim that "65% of children entering primary school today will end up in jobs that don't exist yet." He traces it through its appearances in OECD and World Economic Forum materials; through its invocation in Cathy Davidson's 2011 book Now You See It and in an Australian jobs report that no one can now find; and he traces it all the way back to a quip made by President Bill Clinton in 1996. It's not a fact. It's a slogan. It's an invention. It's a lie.

The "half life of skills" claim seems to have a similarly convoluted and dubious origin. If you search for this claim, you'll find yourself tracing it through a long list of references -- EAB cites the World Economic Forum (an organization which seems to specialize in this sort of storytelling and it's worth asking why the fine folks who gather in Davos might craft these sorts of narratives). For its part, the World Economic Forum cites a marketing blog. That marketing blog cites John Seely Brown's book The New Culture of Learning. But if you search the text of that book, the phrase "half life of skills" doesn't appear anywhere. It's fabricated. And if you stop and think about it, it's nonsense. And yet the story fits so neatly into the broader narrative that we must all be engaged in constant "lifelong" learning, perpetually retraining ourselves, always in danger of being replaced by another worker or replaced by a machine, never questioning why there is no more social safety net.

We have to keep retraining ourselves (often on our own dime and our own time), so the story goes, because people no longer remain in the same job or the same career. People are changing jobs more frequently than they ever have before, and they're doing more different kinds of work than they ever have before. Except they're not. Not in the US, at least. Job transitioning has actually slowed in the past two decades -- yes, even for millennials. While the occupational structure of the economy has changed substantially in the last hundred years, the pace at which new types of jobs emerge has also slowed since the 1970s. Despite all the talk of the "gig economy," about 90% of the workforce is still employed in a standard job arrangement -- about the same percentage as in 1995. More people aren't becoming freelancers; more people aren't starting startups.

Certainly there are plenty of people who insist that this occupational stagnation is poised to be disrupted in the coming years because "robots are coming for your jobs." A word, please. Robots aren't coming for your jobs, but management may well be. Employers will be the ones to make the decision to replace human labor with automated labor. But even this talk about the impending AI revolution is speculative. It's agitation propaganda. The AI revolution has been impending for about 65 years now. But this time, this time I'm sure it's for real.

Another word: "robots are coming for your jobs" is one side of the coin; "immigrants are coming for your jobs" is the other. That is, it is the same coin. It's a coin often used to marshal fear and hatred, to make us feel insecure and threatened. It's the coin used in a sleight of hand to distract us from the profit-driven practices of capitalism. It's a coin used to divide us so we cannot solve our pressing global problems for all of us, together.

Is technology changing faster than it's ever changed before? It might feel like it is. Futurists might tell you it is. But many historians would disagree. Robert Gordon, for example, has argued that rapid economic growth began in the late 19th century and took off in the early 20th century with the invention of "electricity, the internal combustion engine, the telephone, chemicals and plastics, and the diffusion to every urban household of clear running water and waste removal." Rapid technological change -- faster than ever before. But he argues that the growth from new technologies slowed by the 1970s. New technologies -- even new digital technologies -- are, he contends, incremental changes rather than the wholesale alterations to society we saw a century ago. Many new digital technologies, Gordon argues, are consumer technologies, and these will not -- despite all the stories we hear -- necessarily restructure our world. Perhaps we're compelled to buy a new iPhone every year, but that doesn't mean that technology is changing faster than it's ever changed before. That just means we're trapped by Apple's planned obsolescence.

As historian Jill Lepore writes, "Futurists foretell inevitable outcomes by conjuring up inevitable pasts. People who are in the business of selling predictions need to present the past as predictable -- the ground truth, the test case. Machines are more predictable than people, and in histories written by futurists the machines just keep coming; depicting their march as unstoppable certifies the futurists' predictions. But machines don't just keep coming. They are funded, invented, built, sold, bought, and used by people who could just as easily not fund, invent, build, sell, buy, and use them. Machines don't drive history; people do. History is not a smart car."

We should want a future of human dignity and thriving and justice and security and care -- for everyone. Education is a core part of that. But dignity, thriving, justice, and care are rarely the focus of how we frame "the future of learning" or "the future of work." Robots will never care for us. Unbridled techno-solutionism will never offer justice. Lifelong learning isn't thriving when it is a symptom of economic precarity, of instability, of a disinvestment in the public good.

When the futures we hear predicted on stages like this turn so casually towards the dystopian, towards an embrace of the machine, towards an embrace of efficiency and inequality and fear -- and certainly that's the trajectory I feel that we are on with the narratives underpinning so much of ed-tech agitprop -- then we have failed. This is a massive failure of our politics, for sure, but it is also a massive failure of imagination. Do better.


The Stories We Were Told about Education Technology (2019)


Several months ago, I started to jot down ideas about what I'd cover in my annual review of what's happened over the course of the past 12 months in the field / industry / promotion of education technology. For the past decade, I've churned out a multi-part series on the dominant trends and narratives. Typically this project has taken all of November and all of December to write, based on all the Friday "week-in-reviews" that I’d written over the course of the year. But for the past year or so, I've turned my attention away from the day-to-day news as I've researched and written a book on teaching machines and the history of personalization and behaviorism.

So, I'm not sure I have a lot to say about what happened, what stories were told in 2019. I mean, I always have something to say, but this year, it’s not the well-researched, well-linked series that I've published in the past.

What I have noticed in 2019 from my perch of not-paying-full-attention would probably include these broad trends and narratives: the tangled business prospects of the ed-tech acronym market (the LMS, the OPM, the MOOC); the heightened (and inequitable) surveillance of students (and staff), increasingly justified as preventing school shootings; the fomentation of fears about the surveillance of Chinese tech companies and the Chinese government, rather than a recognition that American companies -- and surely the US education system itself -- have long perpetuated their own surveillance practices; and the last gasp of the (white, male) ed-tech/ed-reform evangelism, whose adherents seem quite angry that their bland hype machine is no longer uncritically lauded by a world that is becoming increasingly concerned about the biases and dangers of digital technologies.

Of course, the latter is just wishful thinking. And even if the wish comes true, the damage of ed-tech evangelism -- particularly the evangelism of trade publications that have spent the past decade repeating nothing but industry spin -- is already done. Even if these publications fade away, the breathless stories about the possibilities of brainwave-reading mindfulness headbands and "mind-reading robot tutors in the sky" continue to be told. Dangerous practices and products are normalized. Stories about the inevitable ed-tech revolution continue to be influential and persuasive, particularly among administrators and politicians and those who can spout the key points of the latest airport-bookstore business-section best-seller at their cocktail parties and in their wine caves. The ed-tech happy hour.

(Deliberate) Misinformation -- about what ed-tech can do, about the problems it will solve, about what sort of circumstances students and schools and society are now facing, about what sort of future new technologies will necessarily give us -- is picked up and wielded by far too many education leaders and decision-makers. That's what a decade of ed-tech social media and PR have wrought: hashtag gurus and fake news.

My hope for the new decade is that we can keep the lies out of their mouths. Because it is from their mouths that these lies become the basis for policies and expenditures that have proven to be deeply damaging to education -- public education in particular. But we can't do so if we keep cultivating an ed-tech (information) ecosystem that fosters inaccuracies and inequalities.

(Coming soon: the Hack Education Ed-Tech Decade in Review)

Writing the Hack Education end-of-year series has always reminded me of how very short our memories seem to be. By December, we’ve forgotten what happened in January or June. And we’ve certainly forgotten what happened a year ago, two years ago, a decade ago. Or at least, that’s one way I can rationalize how someone like Chris Whittle can get such a glowing profile in The Washington Post this year for his latest entrepreneurial endeavor.

There are rarely consequences for the people — men mostly — in ed-tech who’ve demonstrated that they are dangerous or dishonest. Their reputations get laundered; their transgressions erased. Joi Ito, Nicholas Negroponte, the MIT Media Lab — who will remember their connections to convicted sex trafficker Jeffrey Epstein? Who will hold them accountable? How soon ’til they’re back in the game with a new story, a new startup?

One of the problems with a lot of ed-tech journalism, I’d argue, is that it has not been particularly interested in accountability. Some of that, thankfully, is changing. Trade publications have been far less committed to explaining what ed-tech is or does or was and far more committed to proselytizing what it might be or might do — all good, all positive, of course. Far too many articles — and this is surely what their venture capitalist and philanthropist backers hope — have not reflected the landscape but have tried instead to shape it. That’s why stories about the golly-gee-whiz prospects of learning to code, game-based learning, social emotional learning, artificial intelligence, blockchain transcripts, and tutoring — by chatbots or by gig workers — still fill the pages of these publications. It’s not that these things are necessarily trends; it’s that certain folks very much hope they will be.

And so that we don’t forget and so that we can hold some of these people and companies accountable, here is a list of some of what did actually happen this year, jotted down for my own memory mostly and so I can see the full arc of the decade’s storytelling:

  • Silicon Valley darling AltSchool “called it quits” after raising almost $180 million and promising to personalize education through surveillance
  • WeWork closed its K-12 school this year. (It also had a failed IPO and ousted its co-founder.) People with knowledge of the school described it as “run like the company of WeWork, subject to constant changes or ‘disruption,’ sometimes without full consideration for the children these changes impact.” Good work, Adam and Rebekah
  • Pearson sold its US K-12 curriculum business for $250 million
  • Pearson said it would shift its strategy and make all its textbooks “digital first” (which you just know is going to cost students more)
  • Cengage and McGraw-Hill announced a merger that would make the new company the second largest textbook publisher in the US. (The deal still faces opposition)
  • David Wiley announced that this year’s OpenEd Conference would be the last
  • “Operation Varsity Blues.” News of the college admissions scandal, in which 50+ people, including celebrities like Felicity Huffman and Lori Loughlin, were involved in a criminal conspiracy to influence undergraduate admissions, broke in March. And the saga — sentencing, jail time, probation, and so on — has played out all year. It’s not an ed-tech story per se. But of course these sorts of things always prompt some bright-eyed entrepreneur to argue that “technology will fix it!” — in this case, the blockchain
  • The College Board planned to add an “adversity score” to the SAT, ostensibly to help college admissions officials recognize students’ socio-economic privilege, but had to go back to the drawing board after pushback on the idea
  • Instructure announced it would be acquired by private equity firm Thoma Bravo — although it’s not clear that the deal will actually go through. (This year, Instructure also acquired Portfolium and MasteryConnect)
  • PowerSchool acquired the K-12 LMS Schoology
  • Amazon got into the lesson-plans-for-sale business, launching a program called Ignite (not to be confused with its controversial lesson-plans-for-free business, Inspire)
  • Carnegie Mellon announced it would open source its digital learning software. Something about "learning engineers"
  • 2U did not have such a great year. The OPM company acquired the coding bootcamp Trilogy Education in April, but its stock fell sharply when it had to adjust its growth projections for 2019
  • Udacity got a new CEO
  • Coursera raised $109 million in venture capital because investors are cray
  • ASU ended its MOOC experiment, Global Freshman Academy
  • “Mind reading robo tutor in the sky” company Knewton was acquired by Wiley for $17 million — LOL — having raised over $180 million. LOL LOL LOL
  • Emily Tate wrote in July about what happens when an online tutor for VIPKid witnesses child abuse. Spoiler alert: nothing
  • There were far too many data breaches for me to list here — check out the K-12 Cyber Incident Map — but I will note the one at Pearson that exposed the data of some 13,000 school and university accounts (each of those potentially exposing thousands of students’ information)
  • Amazon was sued for Alexa’s recording of children’s voices without parents’ consent
  • Google was fined $170 million for violating children’s privacy on YouTube
  • Google got into the anti-plagiarism business
  • Plagiarism detector TurnItIn was acquired for $1.75 billion (which certainly explains why Google got into the anti-plagiarism business, eh?)
  • “Flawed Algorithms Are Grading Millions of Students’ Essays,” Motherboard reported in August
  • The University of Alaska system braced for campus closures and layoffs after its operating budget was slashed by 41%. (Vultures, offering online alternatives, swooped in.)
  • Maker Media abruptly shut down operations and laid off its staff, but Edsurge — whose investors include Maker Media founder Dale Dougherty — insists that this will have no effect on the “maker movement” or on the Maker Education Initiative
  • ISTE announced it would acquire the ed-tech trade publication Edsurge

So much innovation and “edsurgency.” And if we’re not careful, if we do not hold these entrepreneurs and barkers and politicians accountable, if we do not remember their failures and falsehoods, then we will find that all this will just repeat itself on into the next decade.

The 100 Worst Ed-Tech Debacles of the Decade


For the past ten years, I have written a lengthy year-end series, documenting some of the dominant narratives and trends in education technology. I think it is worthwhile, as the decade draws to a close, to review those stories and to see how much (or how little) things have changed. You can read the series here: 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019.

I thought for a good long while about how best to summarize this decade, and inspired by the folks at The Verge, who published a list of “The 84 biggest flops, fails, and dead dreams of the decade in tech,” I decided to do something similar: chronicle for you a decade of ed-tech failures and fuck-ups and flawed ideas.

Oh yes, I’m sure you can come up with some rousing successes and some triumphant moments that made you thrilled about the 2010s and that give you hope for “the future of education.” Good for you. But that’s not my job. (And honestly, it’s probably not your job either.)


The 100 Worst Ed-Tech Debacles of the Decade… with Pigeons



100. The Horizon Report

In 2017, just a week before Christmas, the New Media Consortium abruptly announced its immediate closure “because of apparent errors and omissions by its former Controller and Chief Financial Officer.” The organization, which was founded in 1994, was best known for its annual Horizon Report, its list of predictions about the near-future of education technology. (Predictions that were consistently wrong.) But as the ed-tech sector is never willing to let a bad idea die, the report will live on. EDUCAUSE purchased many of NMC’s assets, and it says it will continue to publish the higher ed version of the Horizon Report. The Consortium for School Networking, CoSN, has taken up the mantle for the K-12 version.

99. The Promise of “Free”

The phrase “if you’re not paying for the product, you are the product” gets bandied about a lot — despite being, according to Slate’s Will Oremus, a rather inaccurate, if not dangerous, slogan. That being said, if you’re using a piece of technology that’s free, it’s likely that your personal data is being sold to advertisers or at the very least hoarded as a potential asset (and used, for example, to develop some sort of feature or algorithm).

Certainly “free” works well for cash-strapped schools. It works well too for teachers wanting to bypass the procurement bureaucracy. It works well, that is, if you disregard student data privacy and security.

And “free” doesn’t last. Without revenue the company will go away. Or the company will have to start charging for the software. Or it will raise a bunch of venture capital to support its “free” offering for a while, and then the company will get acquired and the product will go away.

(It’s not that paying for a piece of technology will treat you any better, mind you.)

98. "Rich People’s Kids Don’t Use Tech" (and Other Stories about the Silicon Valley Elite)

Steve Jobs wouldn’t let his kids have iPads. Bill Gates wouldn’t let his kids have cellphones. Again and again, the media told stories — wildly popular stories, apparently — about how technology industry executives refuse to allow their own children to use the very products they were selling to the rest of us. The implication, according to one NYT article: “the digital gap between rich and poor kids is not what we expected.” The real digital divide, this article contends, is not that affluent children have access to better and faster technologies. (Um, they do.) It’s that their parents are opting them out of exposure to these technologies. (Despite a few anecdotes, they’re really not.)

There are, of course, vast inequalities in access to technology — in school and at home and otherwise — and in how these technologies get used. Affluent students get to use digital tools for creative exploration; poor students get to use theirs for test prep. But the media narratives about wealthy, white families eschewing technology serve to reduce these sorts of inequalities to consumer decision-making and, in doing so, overlook important structural issues around schooling and parenting and so on. Writing in the Columbia Journalism Review, Anya Kamenetz described articles about tech-free rich kids as “howling missed opportunities. They were lacking relevant research, they drew misleading conclusions, and some of the anecdotal evidence they cited contradicted the central hooks of the stories.”

That’s not to say that parents from all walks of life aren’t concerned about their children’s technology usage. They are. But listening to the hand-wringing of the “tech regrets” industry — those Silicon Valley elite who are now remorseful about what their products and services have done to democracy and who tell us we should simply make better choices — isn’t going to get us anywhere.

97. Textbook Publishers vs. Boundless

In 2012, Pearson, Cengage Learning, and Macmillan Higher Education sued Boundless Learning, claiming that the open education textbook startup had “stolen the creative expression of their authors and editors, violating their intellectual-property rights.” At issue, the publishers claimed, was primarily the way in which Boundless created its materials, which the startup marketed as direct, sometimes page-for-page alternatives to titles by those three. Boundless, the publishers charged, hired people specifically to copy and paraphrase their textbooks, copying (with slight modification) not only the organization of a particular textbook but some of the illustrations as well. This “reverse engineering,” the publishers claimed, violated copyright.

At the time, David Wiley expressed his concern that the lawsuit could jeopardize the larger OER movement, if nothing else, by associating open educational materials with piracy. His fears appeared to be unfounded, as Boundless settled the lawsuit with damage perhaps only to its own reputation. The startup was later sold to Valore Education in 2015, which was in turn acquired by Follett in 2016, which in turn shut down the Boundless site in 2017. Boundless’s materials have been archived by David Wiley’s company Lumen Learning.

96. Ning

In an era before Facebook or Edmodo, the social networking site Ning was, for a time, quite popular with educators. But in 2010, the company, co-founded by Gina Bianchini and Marc Andreessen, announced that it would no longer offer a free version. Following an outcry from teachers and schools who’d come to rely on the networks they’d created on the site, Ning announced that it was partnering with Pearson, who would pick up the tab for education sites (as long as users accepted a little Pearson branding on their Nings, of course).

In 2011, Ning was acquired by “lifestyle” site Glam Media for around $150 million. By 2016, Glam Media, rebranded as Mode Media, shut its doors, transferring ownership of Ning to Cyndx. So many lessons here about controlling your own data and not relying on free ed-tech products.

[Image: Rock Dove]

95. "Roaming Autodidacts"

“[The] literature [on online education] was preoccupied with what I call ‘roaming autodidacts’. A roaming autodidact is a self-motivated, able learner that is simultaneously embedded in technocratic futures and disembedded from place, culture, history, and markets. The roaming autodidact is almost always conceived as western, white, educated and male. As a result of designing for the roaming autodidact, we end up with a platform that understands learners as white and male, measuring learners’ task efficiencies against an unarticulated norm of western male whiteness. It is not an affirmative exclusion of poor students or bilingual learners or black students or older students, but it need not be affirmative to be effective. Looking across this literature, our imagined educational futures are a lot like science fiction movies: there’s a conspicuous absence of brown people and women” — Tressie McMillan Cottom, “Intersectionality and Critical Engagement With The Internet” (2015)

94. Online Grade Portals

A host of new products appeared this decade that purported to make it easier for parents to stay up-to-date with what their children were doing at school. Indeed, according to one story in The Atlantic, a school district in Colorado opted to do away with parent-teacher conferences entirely, encouraging parents instead to simply check online to see what their children were up to. Perhaps the district didn’t know what New York City learned when it audited its old data portal: it found that less than 3% of parents had ever logged in. (There are a variety of reasons for this: language barriers, lack of Internet access, incompatible devices, lack of training. But let’s be honest: these sites are often quite poorly designed.)

At the other extreme, online grade portals have prompted some parents to log in all the time and to obsess over their children’s grades. At least, that’s the concern of columnists in The Wall Street Journal and The New York Times. And certainly the expectation of many ed-tech products (and increasingly school policy) is that parents will do just this — participate in the incessant monitoring of student data. Bonus if you can charge parents money for this sort of peek into their kids’ worlds.

93. 3D Printing

3D printing, The Economist pronounced in 2012, was poised to bring about the third industrial revolution. (I know, I know. It’s hard to tell if we’re on the third, the fourth, or the eighteenth industrial revolution at this stage.) And like so many products on this list, 3D printing was hailed as a revolution in education, and schools were encouraged to reorient libraries and shop classes towards “maker spaces” which would give students opportunities to print their plastic designs. In 2013, 3D printer manufacturer MakerBot launched its MakerBot Academy with a goal “to put a MakerBot Desktop 3D Printer in every school in America.” But, as Wired noted just a few years later, 3D printing was already another revolution that wasn’t. Despite all sorts of wild promises, plastic gizmos failed to revolutionize either education or manufacturing (and they’re not necessarily so great for the environment either). Go figure.

92. "The Flipped Classroom"

It was probably Sal Khan’s 2011 TED Talk “Let’s Use Video to Reinvent Education” and the flurry of media he received over the course of the following year or so that introduced the idea of the “flipped classroom” to most people. He didn’t invent the idea of video-taping instruction to watch at home and doing “homework” in the classroom instead; but history doesn’t matter in Silicon Valley. In a column in The Telegraph in 2010, Daniel Pink used the phrase “flip thinking” to describe the work of math teacher Karl Fisch, who’d upload his lectures to YouTube for his students to watch the night before class so that everyone could work together to solve problems in class. (Fisch, to his credit, made it clear he hadn’t invented the “flipped” practice either, pointing to the earlier work of Jonathan Bergmann and Aaron Sams.)

As a literature person, I can’t help but remark that the teaching practices in my field have typically involved this sort of “flip,” in which you assign the readings as homework and then ask students to come to class prepared to discuss them, not to listen to a lecture.

The problem with the “flipped classroom” is less the pedagogy per se than the touting, once again, of this as “the real revolution,” as Techcrunch did. (The revolution is predicted almost weekly at Techcrunch, apparently.) But the practice does raise a lot of questions: what expectations and assumptions are we making about students’ technology access at home when we assign them online videos to watch? Why are video-taped lectures so “revolutionary” if lectures themselves are supposedly not? (As Karim Ani, founder of Mathalicious, pointed out in a Washington Post op-ed, “Experienced educators are concerned that when bad teaching happens in the classroom, it’s a crisis; but that when it happens on YouTube, it’s a ‘revolution.’”) And how much of the whole “flipped classroom” model is based on the practice of homework — a practice that is dubious at best and onerous at worst? As education author Alfie Kohn has long argued, homework represents a “second shift” for students, and there’s mixed evidence they get much out of it.

91. Unizin

WTF is Unizin?!

[Image: Collared Dove]

90. "Ban Laptops" Op-Eds

For the past ten years, every ten months or so, someone would pen an op-ed claiming it was time to ban laptops in the classroom. Students don’t pay attention to lectures, they would argue. They don’t take good notes on their computers. They’re distracted. They’re distracting others. They perform poorly on end-of-semester standardized tests. And on and on and on. And on and on and on. And on. For their part, critics of laptop bans claimed the studies the op-eds frequently cite were flawed, reductive, and out-of-date. These critics argued that laptop bans are ableist and that they disadvantage students who opt to purchase digital textbooks rather than printed ones.

A “ban laptops” op-ed may be the greatest piece of ed-tech clickbait ever devised. It is the instructional designer and tenured professor’s signal — “to the barricades!” — and everyone snipes at the other side from the Twitter trenches for a week, until there’s an unspoken truce that lasts until the next “ban laptops” op-ed gets published. And everyone clicks and rages and snipes all over again.

89. Clickers

“Clickers” are definitely not new — indeed, in my research for Teaching Machines, I found examples of classroom response systems dating back to the 1950s. But the industry likes to pretend the idea is somehow innovative and that the practice of asking questions to a large audience and having them race to answer via their phones is somehow fun. (The greatest trick the ed-tech devil ever played was convincing people that clicking was “active learning.”) Startups making classroom response apps have raised tens of millions of dollars this decade. And students, for their part, have now created hacks and bots to crash these quizzes.

88. Bundling Textbooks with Tuition

“To Save Students Money, Colleges May Force a Switch to E-Textbooks,” The Chronicle of Higher Education reported in 2010. The key word in that headline isn’t “e-textbooks”; it’s “force.” The story examined a proposed practice: “Colleges require students to pay a course-materials fee, which would be used to buy e-books for all of them (whatever text the professor recommends, just as in the old model).” That is, rather than students buying textbooks separately (or buying used textbooks, or checking them out from the library, or sharing them with a roommate, or not buying textbooks at all), the cost would be bundled with tuition and fees. Students would be required to pay. (Students, no surprise, weren’t thrilled by the idea.)

Rafter, a course-material provider that was an early advocate and provider of this bundling of textbooks and tuition, closed its doors in 2016, having raised more than $86 million.

87. Amazon Inspire

Amazon unveiled its new lesson sharing site for openly licensed materials in June 2016. And one might think that Amazon would be able to pull something like this off to great success. After all, Amazon knows how to run online marketplaces; Amazon knows how to sell texts. But just one day later, the company had to remove several pieces of content from the site as educators complained that the materials were copyrighted. And what’s worse: two of those were items included in the marketing materials Amazon had circulated to the press covering the launch. (These materials were apparently lifted from rival teacherspayteachers.com.) “Although the complaints about the Amazon site have been few,” The New York Times reported, “the copyright issue that emerged on Tuesday suggested that Amazon Inspire did not have an effective procedure to independently vet the copyright status of materials that teachers uploaded and shared.”

More than three years later, Amazon Inspire is still in “beta.” But Amazon has since launched a new marketplace, Amazon Ignite, where teachers can sell, not share, their class materials.

86. Badges

In 2011, the Mozilla Foundation unveiled its “Open Badges Project,” “an effort to make it easy to issue and share digital learning badges across the web.” Along with the MacArthur Foundation and HASTAC, Mozilla announced the launch of a $2 million competition to design and test new badges. US Secretary of Education Arne Duncan called badges a “game-changing strategy.”

Reader, they were not.

Although sometimes talked about as a “movement,” Mozilla’s Open Badges Project was really more of a technical specification, one that was transferred from Mozilla to IMS Global Learning Consortium in 2017. In 2018, Mozilla said it would retire Backpack, its platform for sharing and displaying badges, and would help users move their badges to Badgr, software developed by the tech company Concentric Sky.

Despite predictions that badges would be the “new credential” and that we were looking at a “Future Full of Badges,” it’s not clear that digital badges have provided us with a really meaningful way to assess skills or expertise.

[Image: Wedge Tailed Green Pigeon]

85. The Teacher Influencer Hustle

“Teachers Are Moonlighting As Instagram Influencers To Make Ends Meet,” Buzzfeed reported in 2018. Of course, teachers have utilized social media sites for years to launch various side-hustles — speaking gigs and “brand ambassadorships”, for example — as well as to facilitate their main hustle — you know, teaching. Indeed, DonorsChoose.org expects teachers to leverage their social media presence in order to fundraise for supplies for their classrooms. But the rise of the “teacher influencer” is about more than asking for donations for books or musical instruments; it’s about peddling certain products and certain stories about what classrooms should look like. (“Why the 21st Century Classroom May Remind You of Starbucks,” for example.)

“I am in this profession for the kids,” these celebrity educators insist, not for money or fame. But altruism is not the same as justice.

Teacher influencers’ classrooms are rarely representative of what K-12 education looks like in the US: they are overwhelmingly white and affluent. And their high-profile presence on Instagram, Twitter, and the like serves to exacerbate inequality and re-inscribe a conspicuous consumption of gadgetry as a sign of “innovation.”

84. Ed-Tech Startup Accelerators

For a time, it seemed like some of the most popular new startups were emerging from one incubator or accelerator program or another — education-focused or otherwise: Remind and ClassDojo from Imagine K12, Clever and Lambda School from Y Combinator, for example. Accelerator programs offered early-stage startups mentorship, office space, and funding, often aiding them with product development, introductions to investors, and customer acquisition (in the case of ed-tech startups, this typically meant adoption of the product by a charter school chain). The number of these programs skyrocketed over the course of the decade — helping to flood the industry with a bunch of strangely named companies. Accelerators that specialized solely in education included StartL, Imagine K12, Socratic Labs, and co.lab, to name a few.

Almost all of these have since shuttered — Imagine K12 was acquired by Y Combinator in 2016 — as they struggled to actually help entrepreneurs, many with absolutely no experience in education, build products that teachers or students wanted or needed. (Indeed, some clearly struggled mightily to run a program that was worth anyone’s time and effort.)

83. Shark Tank for Schools

In the reality TV show Shark Tank, budding entrepreneurs pitch a panel of investors — “sharks” — on their ideas. Sometimes they strike a deal. More often than not, the entrepreneurs walk away empty handed. A few education technology companies have landed investment on the show, including Packback (which was initially a textbook rental company but has since pivoted to the ol’ “AI-powered” Q&A site.)

But Shark Tank isn’t just some awful Mark Burnett / Mark Cuban vision for the future of reality TV. It’s now become a way in which educational events, organizations, and institutions dole out funding for projects. Because what education needs is more high-style, low-ethics cutthroat theatrics as the basis for decision-making. Clearly. What education needs is to model itself on a TV show where people of color and white women are grossly underrepresented, where they strike much smaller deals (when they strike deals at all) than white men.

That’s gonna be a “no” for me, dawg.

82. "The End of Library" Stories (and the Software that Seems to Support That)

If there was one hate-read that stuck with me through the entire decade, it was this one by Techcrunch’s M. G. Siegler: “The End of the Library.” It was clickbait for sure, and perhaps it came too early in the decade — this was 2013 — for me to be wise enough to avoid this sort of trollish nonsense. You can learn anything online, Siegler argued. (You can’t.) The Internet has “replaced the importance of libraries as a repository for knowledge. And digital distribution has replaced the role of a library as a central hub for obtaining the containers of such knowledge: books. And digital bits have replaced the need to cut down trees to make paper and waste ink to create those books.” (They haven’t.)

Libraries haven’t gone away — they’re still frequently visited, despite dramatic drops in public funding. More and more public libraries have started eliminating fines too because libraries, unlike Techcrunch writers, do care to alleviate inequality.

But new technology hasn’t made it easy. Publishers have sought to restrict libraries’ access to e-book lending, for example, blaming libraries for declining sales. And libraries have also struggled to maintain their long commitment to patron privacy in the face of new software — e-books and otherwise — that has no such respect for users’ rights.

81. Interactive Whiteboards

According to some team of industry analyst hustlers, the interactive whiteboard market is poised to hit $2.36 billion by 2025. They’ll sell you the report for $6000, and you can read all the details. A cheaper report — just $5000 — dangles the promise that the market will hit $5.13 billion by 2023. All this is to say that you can still find plenty of salespeople for interactive whiteboards; and you can still find plenty of schools shelling out the money for them. Mock them all you want, interactive whiteboards remain big business.

This decade did find some of the best-known IWB-makers getting acquired: Taiwanese hardware maker Foxconn bought Smart Technologies; Chinese gaming company NetDragon bought Promethean. So perhaps the death of the interactive whiteboard has been wildly exaggerated by Silicon Valley technologists touting their own vision of “personalization” and individualized usage of devices.

[Image: Green Imperial Pigeon]

80. Viral School Videos

Viral videos weren’t new this decade, but the popularity of cellphones, along with the introduction of social media and its associated incentives to “go viral,” meant that viral videos became more common. Students recorded fellow students. Students recorded their teachers. They recorded school resource officers. Schools recorded elaborate lip-sync videos. Schools installed surveillance cameras and recorded everyone. While some of this might seem fun and harmless, the ubiquity of these videos signifies how much surveillance (and sousveillance) has been accepted in schools. No doubt, some of these videos helped highlight the violence that students — particularly students of color — face at the hands of disciplinarians. But other times, the scenes that were captured were taken out of context or these videos were purposefully weaponized (by conservative groups, for example, who encourage students to film their professors’ lectures and submit their names to “watch lists”).

79. Altius Education

In 2013, an accreditor ordered Tiffin University to stop enrolling students in its Ivy Bridge College, an online program run by the venture-backed firm Altius Education. The accreditor expressed concern about the oversight Tiffin actually had over the program, about the quality of the academics, and about the outsourcing of instruction. Implied: that Altius had in effect “bought” accreditation for its for-profit program by partnering with Tiffin. The Department of Justice launched an investigation into the company, which according to The Chronicle of Higher Education, was “part of a False Claims Act investigation that could cover Ivy Bridge's student-recruiting practices, academic-integrity issues, and student-loan policies, as well as the corporate relationship between Altius and Tiffin.”

Reporting on the dust-up was incredibly revealing about what publications believed Silicon Valley’s role should be in “disrupting” higher education. Buzzfeed, for example, described the accreditor as an “118-year old” body — “How the Old Guard Shut Down an Experiment in Education” — headed by, of all things, a “Victorian literature scholar.” And this was a story that Altius Education founder Paul Freedman worked very hard to make sure was repeated in the media: “They've boxed the innovators out of higher education.”

Of course, “they” haven’t, and much of the work to crack down on the fraudulent behavior of for-profit universities that occurred during the Obama Administration has come to a halt under Trump. And for his part, Freedman is still active in for-profit education reform efforts, now as the founder of Entangled Ventures, which The Chronicle of Higher Education describes as “Part incubator, part investment fund, part consultant, and part reseller of services.”

78. The Fake Online University Sting

The list of high profile “fake online universities” is fairly long. Indeed, the POTUS once ran one. (He paid a $25 million fine in 2018 to settle fraud claims relating to the now defunct Trump University, but please, go on with how much he’s the anti-corruption President.) And the federal government itself ran fake online universities too.

In April 2016, the Department of Homeland Security arrested 21 people, charging them with conspiracy to commit visa fraud. These individuals were alleged to have helped some one thousand foreign nationals maintain their student visas through a “pay to stay” college in New Jersey.

The college — the University of Northern New Jersey — was a scam, one created by Homeland Security itself. The school employed no professors and held no classes. Its sole purpose was to lure recruiters, who would in turn convince foreign nationals to enroll — all this in exchange for a Form I–20, which allows full-time students to apply for an F–1 student visa.

It was an elaborate scam, dating back to 2012, but one that gave out many online signals that the school was “real.” The University of Northern New Jersey had a website — one with a .edu domain, to boot — as well as several active social media profiles. There was a regularly updated Facebook page, a Twitter account, and a LinkedIn profile for its supposed president.

Students who’d obtained their visas through the University of Northern New Jersey claimed they were victims of the government’s sting; the government said they were complicit. According to one student, he had asked why he wasn’t required to take any classes, and he’d been told by the recruiter that he could still earn credits through working. “Thinking back, it’s suspicious in hindsight, but I’m not really familiar with immigration law,” the student told Buzzfeed. “And I’d never gotten my Ph.D. before. So I thought maybe this is the way it works.”

“I thought maybe this is the way it works.”

With all the charges of fraud and deceptive marketing levied against post-secondary institutions this decade — from ITT to coding bootcamps, from Trump University to the Draper University of Heroes — we might ask if, indeed, this is the way it works now.

77. Course Signals

Course Signals, a software product developed by Purdue University, was designed to boost “student success” by using learning analytics to alert teachers, students, and staff to potential problems, labeling students with a red/yellow/green scheme to indicate their risk of failing a course. Purdue boasted that Course Signals increased retention by 21%, making it an incredibly effective tool. Students who took Course Signals courses were more likely to succeed, the university said, and taking two Course Signals courses was “the magic number.”

There was pushback against these claims, with Mike Caulfield suggesting that perhaps “students are taking more Course Signals courses because they persist, rather than persisting because they are taking more Signals courses.” Alfred Essa, McGraw-Hill Education’s vice president of R&D, ran a simulation that confirmed Caulfield’s suspicions: what the Purdue team was pointing to as the effectiveness of Course Signals was actually a “reverse causality.” (Students who completed courses were more likely to complete courses.) Michael Feldstein argued that the “meaninglessness” of Purdue’s claims was important for the ed-tech community to grapple with, in no small part because many other learning analytics systems had been modeled on the Course Signals work. And as Essa told Inside Higher Ed, the problems with the data and analysis also raised questions about education technology research more broadly: “Maybe one of the conclusions that could be derived from this is that we really don’t have a strong community to test and validate these claims? Maybe that’s really the starting point of discussion in the academic community. As we move forward with new technologies in learning analytics, how and who will be evaluating the claims that people put forward?”
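The selection effect Caulfield described is easy to see in a toy model. Below is a minimal sketch in Python (my own invented numbers, not Essa’s actual simulation), in which taking a Signals course has zero causal effect on persistence; students who persist longer simply take more courses, and therefore accumulate more Signals courses:

```python
# Toy model of the Course Signals "reverse causality" critique.
# Hypothetical assumption baked in: Signals courses have ZERO causal
# effect on whether a student stays enrolled.
import random

random.seed(42)

students = []
for _ in range(100_000):
    # How long a student persists is decided independently of Signals.
    semesters = random.randint(1, 8)
    retained = semesters == 8  # "retained" = stayed all eight semesters
    # Four courses per semester; each has a 25% chance of being a Signals
    # course. More persistence means more courses, hence more Signals courses.
    signals = sum(1 for _ in range(semesters * 4) if random.random() < 0.25)
    students.append((signals, retained))

def retention(group):
    return sum(r for _, r in group) / len(group)

two_plus = [s for s in students if s[0] >= 2]
under_two = [s for s in students if s[0] < 2]
print(f"retention with 2+ Signals courses: {retention(two_plus):.1%}")
print(f"retention with <2 Signals courses: {retention(under_two):.1%}")
```

Even though the model contains no treatment effect at all, the “2+ Signals courses” group shows far higher retention than the “<2” group: the reverse causality that Caulfield and Essa described.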

76. Channel One (and the Unsinkable Chris Whittle)

In May 2018, Channel One aired its last broadcast. The commercial-filled daily TV news programming service for schools began back in 1989, one of the earliest indications of the creep of brands into schools. It was also one of the earliest education endeavors of serial entrepreneur Chris Whittle. Whittle went on to have a string of business failures in the 1990s — well-documented in Samuel Abrams’s book The Commercial Mindset — most notably the for-profit flop Edison Schools.

I wrote about Whittle and his investment pal Michael Moe in an article in The Baffler in January of this year. But a gushing profile of Whittle in The Washington Post this fall — “The Master Salesman of For-Profit Education” — helps to underscore that there are no repercussions in the business of education for men with bad ideas and bad business sense.

[Image: Sulawesi Green Imperial Pigeon]

75. Kno

In September 2010, Techcrunch founder Mike Arrington announced that Kno had raised $46 million from Andreessen Horowitz and other investors. Kno was headed by Osman Rashid, the co-founder of the textbook rental company Chegg, and the tablet was aimed at the college market. According to Arrington, venture capitalist Marc Andreessen called the Kno tablet “the most powerful tablet anyone has ever made.” So powerful, in fact, that by the following year, Kno was in talks to sell off its hardware business and focus instead on student-facing software. Yet it continued to raise money — almost $100 million in total. In 2013, Intel acquired Kno for $15 million — for “pennies on the dollar,” said one analyst — and rebranded the software as the Intel Education Study App. That app, too, has since been discontinued.

74. "Deliverology"

In 2010, Sir Michael Barber published Deliverology 101: A Field Guide for Educational Leaders. The book packaged the ideas he’d developed during his time in the Blair Administration and at McKinsey & Company on how to successfully manage policy reform efforts. “Three critical components of the approach,” he wrote, “are the formation of a delivery unit, data collection for setting targets and trajectories, and the establishment of routines.” In 2011, Barber went to work for Pearson as its Chief Education Advisor, continuing his advocacy for competition, data collection, measurements, and standards-based reforms. (See David Kernohan’s excellent keynote at OpenEd13 for more.) In 2013, on the heels of “the Year of the MOOC,” Barber released a report titled “An Avalanche is Coming,” calling for the “unbundling” of higher education.

The work of Michael Barber underscores the importance of highly paid consultants in shaping what politicians and administrators believe the future should look like and in giving them a set of metrics-obsessed management practices to get there.

73. Pearson PARCC "Spies" on Students

In 2015, former Star-Ledger education reporter Bob Braun posted a screenshot to his blog of an email by a New Jersey superintendent. The email, he said, detailed a “Priority 1 Alert” that had been issued by Pearson and the New Jersey Department of Education, regarding a student who had tweeted about a test question on the PARCC — a standardized test based on the Common Core State Standards. The tweet had prompted a security breach, and the superintendent’s email expressed several concerns, not simply because a closely guarded test question had been made public. There was the potential, she feared, for parental outcry about student data and testing, particularly as the DOE had requested the student be disciplined. The incident, she added, had also made her aware that “Pearson is monitoring all social media during PARCC testing.” And Braun’s blog post made everyone aware of that.

(Also, just a sidenote: the superintendent’s email footer read “The best way to predict your future is to create it” — Abraham Lincoln. Sigh.)

The incident caused a great deal of furor, perhaps because it came at a moment of heightened political outcry about the Common Core State Standards and standardized testing. But it also occurred right as fears were growing about the social media monitoring of students — by schools and by corporations alike. It was easy — at the time, perhaps — to simply blame Pearson for the “spying” in order to protect the validity of its test. As the decade went on, the arguments for surveilling students’ social media have changed several times. But the “spying” has continued.

72. Chatbot Instructors

“Imagine Discovering That Your Teaching Assistant Really Is a Robot,” The Wall Street Journal gushed in 2016, documenting an experiment undertaken at Georgia Tech in which a chatbot called “Jill Watson” answered questions in a course’s online forum. “Students didn’t know their TA was a computer,” the university’s press release stated, seemingly unbothered about any ethical issues that might raise.

(Consent? It matters.)

Watson, as the name might suggest, was built using the IBM Watson “AI” technology. College of Computing Professor Ashok Goel and his team used the 40,000-some-odd questions that students had asked in previous versions of the course to build out Jill’s knowledge-base. “One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn’t really go up,” Goel said. “Students tend to ask the same questions over and over again.” But instead of redesigning his course or materials so that students didn’t have to ask these questions, he built a chatbot and called it a TA.
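It’s worth seeing how little magic this sort of system requires. Below is a deliberately crude sketch of the FAQ-retrieval pattern in Python (the archive, threshold, and names are all hypothetical; the actual Jill Watson was built on IBM Watson’s tooling, not on anything this simple): match an incoming question against an archive of past questions, return the stored answer for the closest match, and hand off to a human otherwise.

```python
# A crude sketch of FAQ retrieval: answer a new question by finding the
# most similar previously asked question. All data here is hypothetical.
import math
from collections import Counter

# Stand-in for the archive of Q&A pairs from earlier runs of a course.
FAQ_ARCHIVE = [
    ("When is the project due?", "Project 1 is due Sunday at 11:59pm."),
    ("Where do I submit my assignment?", "Submit through the course portal."),
    ("Is the final exam open book?", "Yes, the final is open book."),
]

def vectorize(text):
    # Bag-of-words vector: lowercased term counts. (Real systems do far more.)
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[term] * b[term] for term in a)
    norms = math.sqrt(sum(v * v for v in a.values())) * \
            math.sqrt(sum(v * v for v in b.values()))
    return dot / norms if norms else 0.0

def answer(question, threshold=0.4):
    # Return the archived answer for the most similar past question,
    # handing off to a human when no match clears the threshold.
    q = vectorize(question)
    score, reply = max(
        (cosine(q, vectorize(past_q)), past_a) for past_q, past_a in FAQ_ARCHIVE
    )
    return reply if score >= threshold else "Escalating to a human TA."

print(answer("when is project 1 due?"))
```

The threshold is where the labor question hides: set it low and the bot confidently answers questions it shouldn’t; set it high and the human TAs still field most of the queue.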

It’s not just the teaching assistant labor that chatbots are seeking to replace. As Inside Higher Ed recently reported, they “have started to infiltrate every corner of higher ed -- from admissions to student affairs, career services and even test prep,” arguably making institutions that are struggling to treat people humanely even less human.

71. "Uber for Education"

“We want to be the Uber for Education,” Udacity founder Sebastian Thrun told the Financial Times in 2015. MOOCs are, no surprise, their own entry on this long list of awfulness. But the phrase “Uber for Education” deserves its own spot here, as the analogy was frequently invoked by education reformers, including Secretary of Education Betsy DeVos. She told the Brookings Institution in 2017 that “Just as the traditional taxi system revolted against ridesharing, so too does the education establishment feel threatened by the rise of school choice. In both cases, the entrenched status quo has resisted models that empower individuals.” School choice, for DeVos, is the Uber for education. An end to regulations would make for the Uber for education.

But for Thrun, the reference to Uber was not about “choice” but about labor — specifically about building a platform to be used by a precarious workforce, lured into piecework with the promise of a big payout. “At Udacity, we built an Uber-like platform,” he told the MIT Technology Review. “With Uber any normal person with a car can become a driver, and with Udacity now every person with a computer can become a global code reviewer.” The promise — whether working for Udacity or for Uber — included better flexibility, more pay. For the customer, the promise is for service on demand. But as a decade of the gig economy has demonstrated, all this is “being fueled by exploitation, not innovation.”

And, of course, many teachers find themselves working as Uber drivers as they struggle to make ends meet on their salaries. Indeed, Uber markets to teachers directly, encouraging them to make the ride-sharing company their second gig and offering, in some cities, to donate a small percentage of each trip fee to their classrooms.

[Image: Mourning Dove]

70. Galvanic Skin Response Bracelets

In 2012, education historian Diane Ravitch sounded the alarm that the Gates Foundation had given Clemson University a grant “to work with members of the Measuring Effective Teachers (MET) team to measure engagement physiologically with Galvanic Skin Response (GSR) bracelets which will determine the feasibility and utility of using such devices regularly in schools with students and teachers.” (The grant award now reads “to conduct a pilot study of new ways to measure student engagement for use in research.”) “Shades of Brave New World,” Ravitch cautioned. Her readers pored through the Gates Foundation website and uncovered more examples of funding for experiments with the GSR bracelets. “I’m sorry,” Ravitch wrote on her blog. “I think this is madness. Is there a mad scientist or psychologist advising the Gates Foundation? Does Dr. Moreau work in a Gates laboratory in Seattle?”

Ravitch, once a staunch advocate for standardized testing-based reform efforts when she worked in the Bush Administration as Under Secretary of Education, had had a recent change of heart, publishing in 2010 a bestseller titled The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education. Coming under particular scrutiny were what Ravitch called the “billionaire boys’ club,” wealthy philanthropists and investors like Gates whom she charged with molding the public school system to serve their own interests. The bracelets were, for Ravitch and her readers, emblematic of a reform movement hell-bent on using any technology at its disposal to monitor (and punish) teachers. (For folks who mocked Ravitch and her readers for “hysteria” about these bracelets? You’re sexist. Also, let me introduce you to other items on this list: brainwave monitoring, brain training, social emotional learning.)

69. "Unbundling"

In 2012, investor Michael Staton first published his framework for “unbundling” education, reworking it several times over the course of the next year or so. His framework outlined the services that he believed schools provide and the order in which technology would “disrupt” them. (Content delivery, he argued, would be the first to go.) Staton also published a paper for the American Enterprise Institute — a paper that, no surprise, drew heavily on Clayton Christensen’s theories of disruptive innovation and that predicted the “private sector is coming for education.” The Internet, Staton contended, would “unbundle” education just as it had “unbundled” other sectors — music, for example — and this would create a huge opportunity for entrepreneurs.

(Michael Staton, incidentally, has a starring role in Kevin Carey’s 2015 book The End of College, serving as a tour guide through the startup-filled world of Silicon Valley, giving Carey his first ride in an Uber, introducing him to the founder of the now-defunct coding school Dev Bootcamp, partying with him on an expensive yacht — all the key elements somehow of “the Future of Learning and the University of Everywhere.”)

Derek Newton argued in The Atlantic that Staton’s (and by extension Carey’s) analysis misconstrued what most parents and students want from schools. They want the bundle. They don’t want “content loops.” They aren’t shopping for “content pathways.” They want to choose a school. They want a degree.

And increasingly, college students need more bundling, not less. They need schools that can provide courses and certifications, sure. They need degrees that will help them find gainful, meaningful work. But they also need mental health services, food banks, child care. Unbundling, to borrow from Tressie McMillan Cottom, is a fantasy for “roaming autodidacts.”

68. Instructure

Instructure first announced its launch in 2011 in the pages of Techcrunch. Ah, the halcyon days of tech blogging, when Michael Arrington pronounced “Death to the Embargo!” — Techcrunch would not honor agreements with PR firms to hold back on stories because Techcrunch wanted to be f!rst!!111

And so the night before any of the education publications could publish their take on the new “upstart” learning management system, Arrington was the first to gush about… wait, let me check my notes here… the collection of vintage tanks and weapons owned by founder Josh Coates. Truly some stellar reporting, Mike. Truly.

Of course, the readers of Techcrunch care(d) little about education technology per se. Or rather, their interest wasn’t in the features of the new LMS. They wanted to hear about investment opportunities, about disruptive innovation, about the potential threat to Blackboard’s market dominance. What mattered to this crowd was the business of ed-tech, not the pedagogies.

And for a time, the business of Instructure was good. People were relieved that there was a new alternative to Blackboard. The company IPO’d in 2015 and finally surpassed Blackboard’s US market share in 2018. But it now ends the decade back where it started: trying to appease the money people. A move to sell Instructure to the private equity firm Thoma Bravo is facing resistance from three large investors.

67. UC Berkeley Deletes Its Online Lectures

In March 2017, UC Berkeley announced that it would remove public access to more than 20,000 audio- and video-recordings of lectures, in response to a Department of Justice order that the school make its educational materials accessible to people with disabilities. Cathy Koshland, the vice chancellor for undergraduate education, said Berkeley would continue to work with the MOOC provider edX to post open courses online. “Moving our content behind authentication allows us to better protect instructor intellectual property from ‘pirates’ who have reused content for personal profit without consent.” A weird claim, considering the content was all openly licensed.

UC Berkeley had been posting video and audio versions of lectures online for decades. And while much of this content pre-dated any interpretation of the ADA that could force schools to provide equitable access to these materials, even some of its more recent BerkeleyX material was still not ADA compliant.

To remove public access to content rather than make it accessible for people with disabilities was, to borrow a phrase from Wonkhe editor David Kernohan, “a very bad look.”

66. Slave Tetris

Yes. Slave Tetris. Someone did that. Someone made an educational video game called “Slave Trade,” thinking that having players stack Black bodies into the hold of a slave ship would be a great way to help children understand history. And what’s worse, it was hardly the only slavery “game” out there. The decade was littered with them — games built by publishers like Serious Games Interactive and games designed by teachers themselves. Fun.

Indian Fantail Pigeon

65. Apple’s iTextbooks

In October 2011, Walter Isaacson published his biography of Steve Jobs — just 19 days after the Apple co-founder’s death. Among its revelations: Jobs’s plans to transform the textbook industry.

He believed it was an $8 billion a year industry ripe for digital destruction. He was also struck by the fact that many schools, for security reasons, don’t have lockers, so kids have to lug a heavy backpack around. “The iPad would solve that,” he said. His idea was to hire great textbook writers to create digital versions, and make them a feature of the iPad. In addition, he held meetings with the major publishers, such as Pearson Education, about partnering with Apple. “The process by which states certify textbooks is corrupt,” he said. “But if we can make the textbooks free, and they come with the iPad, then they don’t have to be certified. The crappy economy at the state level will last for a decade, and we can give them an opportunity to circumvent that whole process and save money.” (p. 509)

In January 2012, Apple unveiled its plans for “digital destruction” — plans that looked nothing like what Isaacson had described. Apple would partner with the Big Three publishers — Pearson, McGraw Hill, and Houghton Mifflin Harcourt — to make digital versions of some of their titles available for the iPad at a cost of no more than $14.99 apiece. And while that price tag might have seemed much lower than the hundreds of dollars for a print copy, those print versions — at the K-12 level at least — are often shared across classes and used for many years. Sure, the iTextbooks promised they would always be up-to-date and that each student would have their own, but $14.99 per student per year would be — according to one calculation at least — five times the cost of print. Furthermore, the textbook authoring tools built books in a proprietary format that would only work in iBooks and that gave Apple sole distribution rights. So despite all the talk of dismantling the textbook industry and its bureaucracy, Apple opted instead to partner with it — something that would come back to bite Apple hard when it struck a deal to sell iPads and digital textbooks to the Los Angeles Unified School District.
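
The arithmetic behind that calculation is easy to reconstruct. With back-of-the-envelope numbers of my own (the print price, sharing, and lifespan figures below are illustrative assumptions, not the original analyst’s), the per-student, per-year comparison looks like this:

```python
# Back-of-the-envelope arithmetic for print vs. iTextbook costs.
# All the print-side numbers are illustrative assumptions.
print_price = 75.00      # one K-12 print textbook
students_sharing = 5     # students sharing a single copy each year
years_in_service = 5     # years a print copy stays in use

print_cost = print_price / (students_sharing * years_in_service)  # per student, per year
itextbook_cost = 14.99   # Apple's price cap, per student, per year

print(print_cost)                   # 3.0
print(itextbook_cost / print_cost)  # ~5x, the ratio cited above
```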

64. Alexa at School

“Do Voice Assistant Devices Have a Place in the Classroom?” Edsurge asked in 2018, opening its story with a tweet from privacy advocate Bill Fitzgerald that made it clear the answer was “No.” “In the classroom,” Fitzgerald reported an Amazon rep had told him, “Alexa and Dot posed compliance and privacy issues.” But that didn’t stop the publication from then adding some 1,300 more words making the case, despite potential privacy and security issues, for educators to say “Yes.”

Since then, the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy, and a dozen other privacy groups have lodged a complaint with the FTC, claiming that Amazon’s voice assistants violate the Children’s Online Privacy Protection Act. Amazon is also being sued for recording children without their consent.

But rarely willing to refuse the latest shiny gadget, even when it’s so obviously creepy, some schools are moving forward with installing Alexa in classrooms and dorm rooms and offering students money to develop new products using the technology.

63. ConnectEDU

In 2014, the college and career planning startup ConnectEDU filed for bankruptcy, as it struggled to make good on its debts despite having raised some $61 million in venture funding. During the bankruptcy hearings, the FTC requested that the company not be allowed to sell the millions of student records in its possession — “students’ names, email addresses, birth dates and other intimate information” — as assets. The sale, the FTC contended, would violate ConnectEDU’s own privacy policy, and it requested that users be notified so they could request their data be destroyed.

But that never happened because, as Education Week reported, “ConnectEDU had no employees as of the filing date” to carry out the task.

(To make matters worse, Buzzfeed noted, “One of the successful bidders on that data was a software company with an imperfect track record when it comes to privacy. The now-resigned CEO of Symplicity Corp., which powers universities’ disciplinary record systems, pleaded guilty last month to conspiring to hack into the computer systems of competitors and steal their information. Along with the company’s CTO, who will also plead guilty, he decrypted protected passwords and other private information of universities that used Symplicity’s system. The company has not been charged in the matter.”)

62. Edsurge

The publication Edsurge was founded shortly after I started Hack Education. I have raised zero dollars in venture capital; I’ve taken zero dollars in grants from the Gates Foundation or the Chan Zuckerberg Initiative. Edsurge raised over $8 million in venture capital and received over $7 million in Gates Foundation grants. Edsurge told its version of events for the past decade’s worth of ed-tech, and I told mine.

Edsurge struggled to find a business model over the years — it tried making money from sponsored content, a product catalog, conferences, a “concierge” service helping schools decide what product to buy, a research service helping investors decide what startups to fund, and so on. I struggled too, no doubt — although I didn’t have $15 million to help fund my fight.

This fall, Edsurge announced it would be acquired by ISTE. Terms of the sale were not disclosed, but Edsurge said its investors would see nothing from the acquisition. But that doesn’t feel quite accurate. Over the past decade, its investors have gained quite a bit as the publication helped spread precisely the stories that the ed-tech industry wanted told.

61. Edmodo

For a time, Edmodo was quite the ed-tech industry darling. The social network for schools raised over $87 million from high profile tech investors like LinkedIn founder Reid Hoffman. But the company could never find a good way to make money, experimenting with a “freemium” model, attempting to run a marketplace where teachers could buy and sell content, and even running ads (including ads for e-cigarettes). “Can Edmodo Turn Virality into Profitability?” Edsurge’s Tony Wan asked in 2016. The answer was a resounding “no.”

Founded by two educators in 2008, Edmodo experienced a number of leadership changes this decade. Crystal Hutter, the wife of one of the company’s investors, was CEO for a time. Then Vibhu Mittal took the reins. In 2018, Edmodo was sold to NetDragon for just $137.5 million (just $15 million of which was in cash) — an astounding loss.

The ability to make money was hardly the only problem that Edmodo faced. In 2017, news broke that some 77 million accounts had been compromised, and usernames, email addresses, and hashed passwords stolen.

Diamond Dove

60. The Death of Aaron Swartz

On January 6, 2011, Aaron Swartz was arrested by MIT police and charged with violating federal hacking laws for using the MIT network to download millions of academic articles from the subscription service JSTOR.

Although neither JSTOR nor MIT sought punishment, the federal government prosecuted Swartz anyway, charging him with violating JSTOR’s terms of service by downloading documents to distribute. He was initially charged with four felony counts, but prosecutors increased this to 13 counts, turning each day he’d downloaded documents into a separate count. As such, Swartz faced a maximum sentence of 50 years in prison and fines that could potentially reach $1 million. He was offered a plea deal in which he’d serve six months in prison, but he rejected it — he didn’t want to serve time and he didn’t think he’d committed a felony. Three months before his trial was set to begin, Aaron Swartz committed suicide, which friends and family blamed in part on the overzealous prosecution.

Information is power. But like all power, there are those who want to keep it for themselves. The world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly being digitized and locked up by a handful of private corporations. …There is no justice in following unjust laws. It’s time to come into the light and, in the grand tradition of civil disobedience, declare our opposition to this private theft of public culture. — Aaron Swartz, 2008

59. Clayton Christensen’s Predictions

In 2008, Clayton Christensen and Michael Horn published Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns and predicted that the growth in computer-based instruction would accelerate rapidly until, by 2019, half of all high school classes would be taught over the Internet. Nope. Wrong.

In 2013, Christensen told investor Mark Suster that, in 15 years’ time, half of US universities would be bankrupt. As with K-12 education, he believed (believes) that online education would provide the “disruptive innovation” to force traditional schooling out of business. In 2017, he doubled down on his prediction — half of colleges and universities would close in a decade. I have set a calendar reminder for 2028. We can check back in then for the final calculation. Meanwhile, Phil Hill has run the numbers on what higher ed closures actually look like, with visualizations that help underscore that the vast majority of these closures were of for-profit institutions — and there just aren’t enough of those left to make up the “half of all colleges” claim.

As Jill Lepore reminded us in her scathing critique of Christensen’s “gospel of innovation,” “Disruptive innovation is a theory about why businesses fail. It’s not more than that. It doesn’t explain change. It’s not a law of nature. It’s an artifact of history, an idea, forged in time; it’s the manufacture of a moment of upsetting and edgy uncertainty. Transfixed by change, it’s blind to continuity. It makes a very poor prophet.” But it’s the sort of propheteering that hopes that, if you repeat a story enough times, everyone — taxpayers, administrators, politicians, pundits — will start to believe it’s the truth.

58. Deborah Quazzo and the Chicago Board of Education

In 2014, an investigation by the Chicago Sun-Times found that companies in which Chicago Board of Education member Deborah Quazzo had a financial stake had seen their business with the city’s schools triple (to almost $4 million) since Mayor Rahm Emanuel had appointed her to the board the previous year. The newspaper also found that she had voted to support charter school networks that had strong financial ties to some of her investment portfolio. The Chicago Teachers Union called on Quazzo, the founder of GSV Advisors and a very active ed-tech investor, to immediately resign. However, she insisted there was no conflict of interest in serving on the CPS Board and investing in ed-tech companies, and she held on to her seat until her term expired, supported by Mayor Emanuel, who said that the city was “lucky to have her.”

57. TurnItIn (and the Cheating Detection Racket)

iParadigms, the company whose product is most closely associated with cheating detection — TurnItIn — was bought and sold twice this decade: in 2014 to Insight Venture Partners for $752 million and earlier this year to Advance Publications for $1.75 billion. Thankfully, because education technology is such an ethical and mission-driven industry, every student who has ever been forced to submit an essay or assignment to TurnItIn received a nice cut of the deal, as befitting their contribution of data and intellectual property to the value of the product.

I jest. Students got nothing, of course. Students write, and then their work gets extracted by TurnItIn, which in turn sells access to a database of student work back to schools. Or as Jesse Stommel and Sean Michael Morris put it, “A funny thing happened on the way to academic integrity. Plagiarism detection software (PDS), like Turnitin, has seized control of student intellectual property. While students who use Turnitin are discouraged from copying other work, the company itself can strip mine and sell student work for profit.” (Another funny thing: essay mills are now touting that they use TurnItIn too, and they can assure their customers that their essays will pass the plagiarism detector.)
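
Turnitin’s matching algorithm is proprietary, but the standard technique behind most plagiarism detectors is n-gram “fingerprinting”: hash overlapping word sequences (“shingles”) from each document and measure the overlap. A toy version, with invented sample text:

```python
# Turnitin's matching is proprietary; the textbook technique behind most
# plagiarism detectors is n-gram "fingerprinting": hash overlapping word
# shingles from each document and measure the overlap. Sample text is
# invented for illustration.
def shingles(text, n=5):
    words = text.lower().split()
    return {hash(" ".join(words[i:i + n])) for i in range(len(words) - n + 1)}

def similarity(doc_a, doc_b, n=5):
    """Jaccard overlap of the two documents' shingle sets."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    return len(a & b) / len(a | b) if a | b else 0.0

submitted = "the quick brown fox jumps over the lazy dog near the river bank"
database  = "a quick brown fox jumps over the lazy dog near the river today"
print(f"{similarity(submitted, database):.2f}")  # high overlap flags a match
```

Which also hints at the loophole the essay mills tout: reword enough of the text and the shingles stop matching.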

Rather than trusting students, rather than re-evaluating what assignments and assessments look like, schools have invested heavily in any number of technology “solutions” to cheating — keystroke locking, facial recognition, video monitoring, and the like, all designed to identify students with “low integrity.”

56. Brain Training

In 2016, Lumosity agreed to pay the FTC $2 million to settle charges of deceptive advertising surrounding its “brain training” games. Lumosity had claimed that its products could improve memory and focus and even reverse the symptoms of Alzheimer’s Disease — claims that were unsubstantiated at best. Criticism of Lumosity and other brain training companies had been circulating for a while. In 2014, more than 70 scientists penned an open letter stating that there was “little evidence” to support the industry’s claims. The industry, these scientists contended, was simply capitalizing on people’s fears about aging. A 2017 study in the Journal of Neuroscience found no evidence that playing Lumosity games helped improve cognitive functioning. And another study published that same year in Neuropsychology Review found that most brain training programs had no peer-reviewed evidence demonstrating their efficacy.

Why, it’s almost as if, when it comes to education technologies, you can just say whatever you want in the marketing copy.

Lumosity was hardly the only brain training company to get in trouble this past decade. Neurocore, a “brain performance company,” agreed to “stop advertising success rates for children and adults suffering from maladies such as attention deficit disorder, depression and autism after a review found the company could not support the outcomes it was promoting.” Among Neurocore’s investors: Secretary of Education Betsy DeVos.

Crested Pigeon

55. Montessori 2.0

I’d wager that if you asked most Americans to describe “progressive education,” they’d cite one of two names in doing so: John Dewey and Maria Montessori. They’ve likely not read any Dewey — they’ve just seen the phrases attributed to him on PowerPoint presentations and on edu-celebrity Twitter. And they know little about Montessori either, other than that it’s a kind of preschool where kids play with wooden blocks. So not surprisingly, as tech executives sought to open their own, private schools, they turned to a largely imagined legacy of progressive education, often referring to their experiments as “Montessori 2.0” — Sal Khan’s Lab School and Max Ventilla’s AltSchool, for example. They contend that their schools expand on Montessori’s vision by adding new digital technologies to “personalize learning,” as well as to surveil students. “It’s in this meticulous logging and data tracking that teacher-leaders see the most promise for technology,” as one Edsurge article described the Wildflower charter school chain. That story documented the wide array of cameras and sensors (the latter tucked inside the students’ slippers) utilized to monitor what activities children undertake and what objects they interact with.

Many in Silicon Valley like to point to the Montessori experiences of famous founders like Larry Page, Sergey Brin, Jimmy Wales, and Jeff Bezos and suggest that their early childhood education helped make them creative thinkers — although it’s probably not fair to blame Montessori for the destructiveness to our democracy from Google and Amazon. It is fair, however, to blame “Montessori 2.0” for this new affiliation between so-called progressive education and surveillance technologies.

54. Google Reader

Google shut down Reader in 2013, citing declining use of RSS. Indeed, Internet users were already much more likely to get their news from social media feeds than from any feed reader (although let’s be honest, it probably wouldn’t have cost Google a dime to keep the server up and running). Arguably, the closure of Google Reader was not just a blow to news consumption; it was a blow to blogging as well. It became harder and harder for small, personal sites (like this one) to compete with the giant, venture-backed ones (like that one).

So anytime someone tries to tell you Google’s mission is “to organize the world's information and make it universally accessible and useful,” remind them that Google killed Google Reader and that in doing so, it’s helped make the world more susceptible to the mis- and dis-information that now thrives online.

53. The TED Talk

One can trace far too many bad ideas to the event whose focus purports to be on “ideas worth spreading”: TED. It’s not just that the talks are terrible, trite, and full of historically inaccurate information. Say, for example, Sal Khan’s 2011 TED Talk “Let’s Use Video to Reinvent Education.” But people hear those TED Talks and then think they’ve stumbled upon brilliance. Say, for example, Sebastian Thrun, listening to Sal Khan’s 2011 TED Talk and deciding he would redesign higher education — an idea that prompted an experiment at Stanford in which he offered his graduate-level AI class online, for free. (You’ll never believe what happened next.)

Unfortunately, education-related TED Talks are some of the most popular ones out there. Some of these are laughably silly, such as Nicholas Negroponte’s prediction of a pill you will be able to swallow to “know Shakespeare.” And some of the ones with the greatest appeal, such as Sugata Mitra’s “School in the Cloud,” may just re-inscribe the very exploitation and inequality that the TED Talks promise, with their 18-minute-long sleight-of-hand, to disrupt.

TED Talks are designed to be unassailable — ideas to spread but never challenge. As I noted back in 2013, “You don’t get to ask questions of a TED Talk. Even the $10,000 ticket to watch it live only gives you the privilege of a seat in the theater.”

52. Virtual Reality

Virtual reality was, once again, heralded as a technology poised to transform education. I say “once again” because virtual reality has long been associated with such promises. VR appeared in some of the earliest Horizon Reports, for example, with the 2007 report positing that virtual worlds would be adopted by higher ed institutions within two to three years’ time; funnily enough, the 2016 report offered the same outlook: we’re still two to three years out from widespread adoption of VR.

Facebook acquired Oculus, maker of the Rift VR headset, in 2014, and it tried to push the equipment on schools, donating one headset to every school in Arkansas, for example.

Google also sought to market its version of VR to schools, offering “virtual field trips” with its Expeditions product. It’s not clear that the positive benefits of going on actual field trips extend to the experience of watching a 360-degree video via a device strapped to your face. We don’t believe that field trips are the equivalent of watching educational films in class, do we? So why is VR any different?

But according to the marketing hype, VR is different as it is a new and unique “empathy machine.” (Good grief. It is not.)

51. The Math Emporium(s)

The Math Emporium at Virginia Tech was founded in the late 1990s, but it saw renewed interest this decade as hundreds of other schools adopted the model: use computer-assisted instruction to teach math in lieu of paying professors or graduate students to do so. According to The Washington Post, “The Emporium is the Wal-Mart of higher education, a triumph in economy of scale and a glimpse at a possible future of computer-led learning. Eight thousand students a year take introductory math in a space that once housed a discount department store.”

Following the creation of the Math Emporium, pass rates for introductory math went up, Virginia Tech boasted. And the per-student expenditure went down. Wheee. But research also found that “the emporium appeared to best serve students with higher math achievement, who enjoyed mathematics, and who spent more time taking their exams.” The software appeared successful in helping students to recall formulas they’d been taught, but did little to develop broader mathematical or analytical reasoning. And students’ reactions to the emporium were mixed, at best.

“Our software will never replace teachers,” they always say. Until someone decides, “To hell with it. Let’s save some money” and the software does.

Rock Pigeon

50. One Laptop Per Child

Nicholas Negroponte first started talking up his vision for a $100 laptop at the World Economic Forum in 2005. Inspired by the work of Seymour Papert, he believed that a cheap computer could unlock educational opportunities for children around the world. But as The New York Times’ John Markoff observed, Negroponte’s reception at Davos was chilly: “His mission has been complicated at Davos 2005 because the digital divide and the information technology industry are no longer the center of attention at this annual intimate gathering of the world's most powerful and wealthy.” There was slightly more buzz when Negroponte pitched the idea again at the World Summit on the Information Society in Tunis in 2005. But when he and UN Secretary General Kofi Annan went to demo the device, its signature feature — a hand crank that would power it — fell off. As Adi Robertson wrote in The Verge, “The moment was brief, but it perfectly foreshadowed how critics would see One Laptop Per Child a few years later: as a flashy, clever, and idealistic project that shattered at its first brush with reality.”

The One Laptop Per Child project struggled from the outset. (In 2009, Techcrunch named the laptop one of the biggest flops of that decade.) And here it is again, still flopping in this one.

In 2011, Negroponte announced he planned to drop tablets out of helicopters onto remote villages in Ethiopia. “When I say I drop out of the helicopters, I mean it... it's like a Coke bottle falling out of the sky,” he said, apparently referring to the movie The Gods Must Be Crazy. (He clearly missed the part where the elders in the film decide the Coke bottle is cursed.) The following year, Negroponte boasted that “within two weeks, [children] were singing ABC songs in the village, and within five months, they had hacked Android.” But a controlled study in Peru published in 2012 found no evidence that the OLPC laptops increased children’s math or language learning.

In early 2014, the OLPC Foundation disbanded, and OLPC News shut down. “Let us be honest with ourselves. The great excitement, energy, and enthusiasm that brought us together is gone,” its farewell post read. “OLPC is dead.”

(This year, Morgan G. Ames published The Charisma Machine: The Life, Death, and Legacy of One Laptop per Child — so at least one good thing came out of the endeavor this decade.)

49. Yik Yak

Once valued at $400 million, having raised some $73.5 million in venture capital, the anonymous messaging app Yik Yak closed its doors in 2017. Good riddance.

Founded in 2014 by Tyler Droll and Brooks Buffington (seriously), Yik Yak was for a time quite popular with students, who too often took advantage of its anonymity to harass others and to post racist and sexist remarks. As The New York Times chronicled, threats of violence posted to the app prompted several schools to go on lockdown, and several students were subsequently arrested — in Virginia, Michigan, and Missouri, for example — for posts they’d made. Some schools blocked the app on their WiFi network; students at Emory denounced it as a platform for hate speech.

As one student noted, the hyper-localization of the app was a constant reminder that these threats were not coming from some random person across the country; they were coming from someone in your class.

48. The Hour of Code

During Computer Science Education Week in 2013, the industry-backed learn-to-code group Code.org launched “The Hour of Code,” a campaign to encourage more computer science with a series of short tutorials (and high profile testimonials) posted to its website. “Don’t just buy a new video game — make one,” President Obama urged in a video he recorded on behalf of the Hour of Code. “Don’t just download the latest app — help design it. Don’t just play on your phone — program. No one’s born a computer scientist, but with a little hard work — and some math and science — just about anyone can become one.” The tech industry trade press loved the story, repeating Code.org’s PR about, for example, the number of lines of code that students had written during the campaign or the number of girls who’d participated.

Code.org has raised over $60 million in funding from the tech industry giants, and its lessons are often highly branded: “You can now learn the basics of coding with Disney’s Moana.” There are serious problems with the industry’s command that “everyone should learn to code” — that’s a separate item on this list. But it’s worth noting here that one hour — whether an hour of code or a “genius hour” — is hardly a sufficient commitment to changing education or, for that matter, to changing industry. Indeed, in many ways, the Hour of Code is a marketing coup, a feel-good distraction from some of the more insidious lobbying efforts the tech sector has undertaken to reshape K-12 curriculum.

47. Brainwave Headbands

“Brainwave Headsets Are Making Their Way Into Classrooms — For Meditation and Discipline,” Edsurge wrote in 2017, touting the use of the Muse headset to measure brain activity via electroencephalography (EEG) and reduce behavioral problems. It wrote about a different headset maker that year too — BrainCo — which purports to use EEG to gauge students’ attention levels in the classroom. This year, Edsurge featured a headset made by EMOTIV in yet another story suggesting that these technologies would help improve “mindfulness”: “I Moved a Drone With My Mind. Soon Your Students Will Too.”

Most of this is, no doubt, speculation. It’s marketing. But it is an indication that plenty of companies (and schools) see students’ neural data as a potential way to monitor and control their behaviors, their emotions, and their cognition.
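
For what it’s worth, when an “attention index” is described at all in the EEG literature, it is typically a band-power ratio: beta power (roughly 13 to 30 Hz) relative to alpha and theta power. Here is a sketch on a synthetic signal; this is one commonly described heuristic, not any vendor’s disclosed method.

```python
# One heuristic often described in the EEG literature: an "attention
# index" as a band-power ratio, beta / (alpha + theta). This runs on a
# synthetic signal and illustrates the arithmetic only; no headset
# vendor has confirmed this is their formula.
import numpy as np

fs = 256                              # sample rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)  # fake EEG

def band_power(x, fs, low, high):
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    return power[(freqs >= low) & (freqs < high)].sum()

theta = band_power(eeg, fs, 4, 8)
alpha = band_power(eeg, fs, 8, 13)
beta = band_power(eeg, fs, 13, 30)
print("attention index:", beta / (alpha + theta))
```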

46. Compulsory Fitness Trackers

In 2016, Oral Roberts University announced that students would be encouraged to wear Fitbits to monitor their physical activity. The fitness trackers would help support the university’s model of “Whole Person Education,” it said. The Fitbit data — number of steps, heart rate data — would be integrated into the student gradebook in the school’s learning management system (in this case, D2L’s Brightspace). Students were required to take 10,000 steps a day and check their heart rate three times a day — arbitrary fitness goals, to be sure. A school spokesperson told Vice that it would not track any other data that the Fitbits collected — such as the type of physical activity undertaken (including sex, which is forbidden at the university). And he told Vice that students could always opt out of wearing the device and just use pen and paper to track their activity — but no one had.
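
What does “Fitbit data in the gradebook” look like in practice? Presumably something like the following: daily rules applied to step counts and heart-rate checks, rolled up into a score the LMS can ingest. The field names and weighting here are invented for illustration, not ORU’s actual integration.

```python
# A minimal sketch of the kind of gradebook rule described above:
# required daily steps and heart-rate checks become a percentage score
# pushed to the LMS. Field names and weighting are invented.
DAILY_STEP_GOAL = 10_000
DAILY_HR_CHECKS = 3

def fitness_points(day):
    """One day's record -> 0-2 points: one for steps, one for HR checks."""
    return int(day["steps"] >= DAILY_STEP_GOAL) + int(day["hr_checks"] >= DAILY_HR_CHECKS)

week = [
    {"steps": 11_200, "hr_checks": 3},
    {"steps": 8_400, "hr_checks": 2},
    {"steps": 10_050, "hr_checks": 3},
]
score = sum(fitness_points(d) for d in week) / (2 * len(week)) * 100
print(f"gradebook entry: {score:.0f}%")  # 67% for this sample week
```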

The students most likely to have all sorts of their fitness activity data tracked and monitored (and even monetized) and given no choice in doing so are, no surprise, athletes. The University of Michigan, for its part, struck a new contract with Nike for sports apparel in 2016, and included in the deal was the opportunity for Nike “to harvest personal data from Michigan athletes through the use of wearable technology like heart-rate monitors, GPS trackers and other devices that log myriad biological activities.”

Of course, students needn’t wear fitness trackers for their schools to track their every step. As The Washington Post pointed out, many colleges just track students via their phones.

Common Wood Pigeon

45. WeGrow and WeWork

The last half of WeWork’s 2019 was an unmitigated disaster. The co-working company filed for an IPO in August, having last been valued at $47 billion. The IPO filing gave investors an opportunity to dig through the company’s financials, which were shockingly bad. Co-founder and CEO Adam Neumann was forced to step down, the public offering was delayed, and one of its investors had to pony up another $5 billion to help cover immediate bills (including severance pay for laid-off employees).

Although its main business appeared to be buying and leasing real estate, WeWork had several education-oriented subsidiaries too, one of which, MissionU, closed its doors last year. (MissionU had promised a one-year program that would replace a traditional college degree.) WeWork had acquired the Flatiron School back in 2017, and it laid off dozens of the coding bootcamp’s employees this year. WeWork also announced it would close WeGrow, its “conscious entrepreneurial school” for children (a school that apparently ran with the same sort of chaotic disruption that Neumann embraced as the head of WeWork).

Just remember, kids. No matter what the PR tries to tell you, it’s likely that “unicorn” is more of an ass.

44. YouTube, the New "Educational TV"

You can learn anything on YouTube, we’ve been told. Indeed, young people prefer learning from YouTube to learning from textbooks — according, ironically, to Pearson. Although YouTube was founded in 2005, it has seen an explosion in growth this decade, in part from the ubiquity of mobile devices: anytime, anywhere television-watching.

The popularity of sites like Khan Academy helped fuel the narrative that video could teach anything and that YouTube would “reinvent education” — funny, as Thomas Edison thought the same thing about film back in 1922. But as the decade progressed, it seemed clear that the algorithmically-driven content on the video-sharing site was introducing viewers to conspiracy theories and misinformation. “YouTube, the Great Radicalizer,” as sociologist Zeynep Tufekci put it.

Parents have long been criticized for letting television “raise their children.” But YouTube now means a much stranger and potentially more dangerous, data-driven viewing experience.

43. Juul

“Juul’s not ed-tech, Audrey.” Right right right. The e-cigarette maker is pretty popular with “the kids.” And schools are having to revamp their anti-smoking policies and lessons in order to account for the rise of the popular new vaping technology. Of course, some schools actually decided to use Juul’s own anti-vaping curriculum. What the hell was the appeal? Well, in part, Juul framed its vaping prevention program in terms of “mindfulness” — all the rage thanks to the hype around “social emotional learning.” (See below.) But Juul sweetened the pot too. According to Buzzfeed, Juul offered to pay schools $20,000 to use its materials, which blamed vaping on peer pressure not on, you know, the addictive properties of nicotine. Those materials dramatically downplayed the dangers of vaping — dangers that have become much clearer this year with a growing number of illnesses and deaths associated with the practice. Even better: some of those materials were apparently plagiarized, according to Stanford professor Bonnie Halpern-Felsher at least, who claims that Juul used her slides without her permission.

42. “Social Emotional Learning” Software

While the brainwave monitoring headsets and “galvanic skin response bracelets” might seem too far-fetched and too futuristic and too creepy for schools to adopt, they are being positioned as part of another trend that might soon make them seem far more palatable: “social emotional learning” or SEL.

SEL has powerful advocates in powerful places: MacArthur Foundation “Genius” and University of Pennsylvania psychology professor Angela Duckworth. Stanford University psychology professor Carol Dweck. The Chan Zuckerberg Initiative, whose vision for the future of education involves “whole child personalized learning” and who says it plans to invest hundreds of millions of dollars into education initiatives to that end. No surprise, investment in software that encourages and measures “grit,” “growth mindsets,” and “mindfulness” has followed.

For its part, the OECD, the organization that administers the PISA test, has started surveying 10- to 15-year-olds’ social and emotional skills too. Because the thing that some people insisted was going to mitigate our maniacal focus on standardized achievement tests will now become its own standardized achievement test.

41. The K-12 Cyber Incident Map

To be clear, the K-12 Cyber Incident Map isn’t on this list because it is a bad idea. To the contrary, the map, which documents the 700+ cybersecurity-related incidents in K-12 schools since 2016 — ransomware attacks, DDoS attacks, data breaches, phishing attacks, and the like — is a necessary idea, as it underscores the utterly dismal state of cybersecurity in education.

Run by EdTech Strategies’ Doug Levin, who has created an accompanying cybersecurity resource center, the map should prompt districts to ask why they are collecting so much data, why they are buying software that collects so much data, and what kinds of capacities and policies they are building internally to address what is clearly going to be an ongoing issue for all schools.

Nicobar Pigeon

40. IBM Watson

In 2011, Watson, a question-and-answer system built by IBM, beat two human players, Ken Jennings and Brad Rutter, on the TV game show Jeopardy! It was a brilliant PR move by IBM, which then sought to commercialize the technology for use in a number of fields, including medicine and education. IBM boasted a partnership with the Sesame Workshop to build “cognitive learning” apps. It boasted a partnership with Scholastic and one with Edmodo to build content recommendation services.

Perhaps the fiercest criticism of IBM Watson in education came from another AI pioneer in the field, Roger Schank. Schank blogged regularly and angrily about Watson, insisting that IBM was not engaged in “cognitive computing” — Schank published his book The Cognitive Computer in 1984 — and suggesting the company’s claims were “fraudulent.”

Despite all the marketing — ads featuring Bob Dylan, Big Bird, Grover — there has been little substance to demonstrate that Watson works. In other sectors, outside of education, results have been troubling at best. Researchers working with Watson’s medical application found, for example, “multiple examples of unsafe and incorrect treatment recommendations.” “This product is a piece of shit,” one doctor said. Not that that’ll stop education from adopting it, of course.

39. Knewton

Founded in 2008 by former Kaplan executive Jose Ferreira, Knewton was one of the most heavily funded ed-tech startups of the decade. It was also, in the words of Michael Feldstein, “snake oil.”

Ferreira boasted at a Department of Education “Datapalooza” in 2012 that “We literally know everything about what you know and how you learn best, everything. Because we have five orders of magnitude more data about you than Google has. We literally have more data about our students than any company has about anybody else about anything, and it’s not even close.” He told NPR in 2015 that Knewton’s adaptive learning software was a “mind-reading robo tutor in the sky.” But these outlandish claims of a powerful piece of learning software never matched what materialized. Knewton, having raised over $180 million in venture capital, was sold for just $17 million this year.
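
Knewton never published the models behind those claims, but a standard building block of adaptive-learning systems gives a sense of what such software actually does under the hood: Bayesian Knowledge Tracing, which updates the estimated probability that a student has mastered a skill after each right or wrong answer. A sketch, with illustrative parameters (not Knewton’s):

```python
# Bayesian Knowledge Tracing: after each answer, update the probability
# that the student has mastered the skill. The parameters here are
# illustrative; Knewton's actual (proprietary) models are unknown.
P_INIT, P_LEARN = 0.2, 0.15   # prior mastery; chance of learning per step
P_SLIP, P_GUESS = 0.1, 0.25   # mistake despite mastery; lucky guess without it

def bkt_update(p_mastery, correct):
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastery) * P_GUESS)
    else:
        evidence = p_mastery * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - P_GUESS))
    return posterior + (1 - posterior) * P_LEARN  # each step is a chance to learn

p = P_INIT
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(f"estimated mastery: {p:.2f}")
```

Useful, perhaps. “Mind-reading,” no.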

But like I’ve said elsewhere in this long list: there are no consequences for snake oil salesmen in ed-tech. Ferreira’s new startup was just featured in a glowing New York Times article promising, once again, that machine learning and AI are going to revolutionize education.

38. Coding Bootcamps

It’s almost impossible to separate Silicon Valley’s massive interest in (and investment in) coding bootcamps from other education trends this decade: efforts to challenge traditional credentialing systems, the tech industry’s push to reshape school curriculum to include coding, fears about a “skills gap,” and the morphing of a for-profit higher education system facing increasing regulatory scrutiny from the Obama Administration.

Coding bootcamps offer short, intensive courses in computer programming. While these are often marketed as alternatives to a traditional college degree, the vast majority of students who enroll at bootcamps appear to already have Bachelor’s degrees.

The bootcamps promise successful job placement — indeed some say they will refund tuition if a graduate does not find employment — but a feature published in Bloomberg in 2016 warned prospective students, “Want a Job in Silicon Valley? Keep Away From Coding Schools.” The article contended that many companies have found coding bootcamp grads unprepared for technical work: “These tech bootcamps are a freaking joke,” one tech recruiter told the publication. “My clients are looking for a solid CS degree from a reputable university or relevant work experience.” Google’s director of education echoed this sentiment: “Our experience has found that most graduates from these programs are not quite prepared for software engineering roles at Google without additional training or previous programming roles in the industry.”

There were several high profile closures of bootcamps — Dev Bootcamp, owned by Kaplan Inc., and Iron Yard, owned by the parent company of the University of Phoenix, for example — and it’s hard not to see these closures alongside those that swept the for-profit “career college” sector this decade. When Dev Bootcamp announced it was closing in 2017, the company admitted that it had been “unable to find a sustainable model” that didn’t compromise its vision for “high-quality, immersive coding training that is broadly accessible to a diverse population.” Indeed, despite the tech industry’s disdain for the education system and particularly for the politics of its (unionized) labor force, “high-quality, immersive coding training” is an expensive, labor-intensive proposition. The for-profit higher education industry has never been known to invest heavily in instruction (faculty or curriculum); its dollars — primarily federal financial aid dollars at that — have gone instead to marketing and recruitment.

37. Ed-Tech Payola

In 2011, The New York Times’ Michael Winerip reported that, “In recent years, the Pearson Foundation has paid to send state education commissioners to meet with their international counterparts in London, Helsinki, Singapore and, just last week, Rio de Janeiro. The commissioners stay in expensive hotels, like the Mandarin Oriental in Singapore. They spend several days meeting with educators in these places. They also meet with top executives from the commercial side of Pearson, which is one of the biggest education companies in the world, selling standardized tests, packaged curriculums and Prentice Hall textbooks.”

It appeared as though these commissioners were in violation of state ethics laws, and the New York State Attorney General launched an investigation into the foundation. In 2013, Pearson paid $7.7 million in fines to settle the accusations that it broke New York state law, not only by paying for education officials’ travel but by using its foundation to help develop its Common Core-aligned products (hoping that these products would help it land more funding from the Gates Foundation). In 2014, Pearson closed its foundation.

And that was the end of pay-for-play in the education technology world. I’m kidding. It wasn’t.

There were numerous other high profile cases this decade of administrators accepting kickbacks from education and education technology companies. In 2014, the Speaker of the House in Alabama was arrested for using his office for personal gain, having taken some $90,000 from the ed-tech company Edgenuity “to open doors.” In 2017, The New York Times’ Natasha Singer described the lucrative wooing as precisely “how Silicon Valley plans to conquer the classroom.” And on the heels of her investigation, one of the figures in her story, Baltimore County Schools Superintendent Dallas Dance pleaded guilty to perjury relating to $147,000 in payments he’d received for speaking and consulting.

36. "Personalized Learning" Software (and Facebook and Summit Public Schools)

The role that venture philanthropy has played in attempting to reshape the American education system is near the top of this list of bad ideas. But a special shout-out goes to Mark Zuckerberg here and the Chan Zuckerberg Initiative’s wretched commitment to extend “personalization” beyond Facebook and into schools.

One of the main beneficiaries of CZI funding has been Summit Public Schools, which, according to The New York Times, has received almost $100 million in grants from the venture philanthropy firm since 2016. The charter school chain was already developing a personalized learning platform to manage its competency-based learning practices when Facebook stepped in, offering to help build the software it called Basecamp. (CZI took over the engineering of the learning management system from Facebook in 2017.)

Although the software received glowing praise from some tech journalists, there’s never been any research to demonstrate that Summit’s personalized learning software is actually effective. Not that that stops folks from claiming otherwise, of course. And the response in some communities to the introduction of Basecamp has been far from positive. Students and parents protested its adoption in Brooklyn NY, in McPherson KS, in Cheshire CT, in Indiana PA, and elsewhere. Some students complained that they weren’t learning anything; others said they were concerned about their privacy. Many schools pulled the plug on the software in response. According to data obtained by Chalkbeat, “Since the platform was made available, 18% of schools using it in a given year had quit using it a year later.”

Socorro Dove

35. Pearson's "Digital Transformation"

It hasn’t been a great decade for Pearson. This list is littered with Pearson scandals — spying on students, investing in Knewton, testing blunders, the Pearson Foundation payola scheme, the LAUSD iPad debacle, a redesigned GED.

Pearson’s business suffered — as did that of many publishers this decade, to be fair, so we can’t actually say it’s because of these repeated fiascos. In an attempt at “digital transformation,” Pearson opted to sell off a number of high profile assets, including The Financial Times, its stake in The Economist Group, and its stake in Penguin Random House. Nonetheless, in 2017, it posted the biggest loss in its history: £2.6 billion for 2016. It continued to reorganize and lay off staff, selling off its US K-12 courseware business earlier this year.

Pearson announced in July that it planned to abandon its traditional textbook publishing model, opting instead for a “digital first” strategy. Instead of selling students an expensive print textbook, it will rent them an expensive digital one. Brilliant.

34. The LA Times Rates Teachers

In 2010, The Los Angeles Times made the controversial decision to publish test score data for more than 6000 public school teachers in the district, assigning them a rating indicating how effective they were in improving student performance — that is, their “value-added” measure (VAM). The rating system was one supported by the Obama Administration, as well as many education reform advocates, who contended that this sort of data-driven evaluation would help ineffective teachers improve. (Or, just as likely, could be used to strong-arm them to leave the profession. And indeed, LAUSD experienced a spike in teacher turnover after The LA Times published teachers’ names and ratings.)
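
Stripped of its statistical controls, the core of a value-added measure is simple: fit a model predicting this year’s scores from last year’s across the district, then attribute each teacher’s average residual to the teacher. A sketch with synthetic data (the newspaper’s actual model controlled for more student characteristics):

```python
# A toy value-added calculation on synthetic data. Real VAM models
# control for many more student characteristics; this shows only the
# core move: prediction, then residual, then blame (or credit).
import numpy as np

rng = np.random.default_rng(0)

# District-wide data: predict this year's score from last year's.
district_prior = rng.normal(600, 40, 1000)
district_actual = 0.9 * district_prior + 70 + rng.normal(0, 15, 1000)
slope, intercept = np.polyfit(district_prior, district_actual, 1)

# One teacher's five students: "value added" is the average gap between
# their actual scores and what the district model predicted for them.
prior = np.array([610, 585, 640, 600, 575])
actual = np.array([630, 580, 655, 640, 570])
value_added = (actual - (slope * prior + intercept)).mean()
print(f"value-added estimate: {value_added:+.1f} points")
```

Whether that residual measures the teacher, or measures noise, class composition, and luck, is exactly the validity question that follows.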

There were (and still are) plenty of concerns about the validity of the VAM data — did the year-to-year change in standardized test scores really reflect what one teacher did or did not do? And did the VAM data help address educational inequality, making sure that “good” teachers were not simply assigned to “good” neighborhoods? (Indeed, some subsequent research found that the publication of the teachers’ names might have actually exacerbated inequalities in the district, as more well-connected parents were able to pull strings to make sure their children were in the classrooms of those teachers deemed “the best.”)

The publication of teachers’ names and ratings was one of the first salvos in what we saw throughout the rest of the decade: in the name of transparency, public data was weaponized. And it was weaponized not so much because “sunlight is the best disinfectant” but because powerful forces wanted to “disinfect” the public sphere of their perceived enemies, teachers.

33. Online Credit Recovery

In 2017, Slate published a series of stories on online credit recovery programs — classes offered to students who needed to quickly make up missed credits in order to graduate high school on time: “The New Diploma Mills,” it called them. These classes were “fast, isolating, and superficial,” the series reported, describing the experiences of one student who, after failing her junior year English class, was able to make up the course online in 2 days. It told of another student who paid his friend $200 to take a course for him.

Schools are pressured to use these online options — products offered by companies like Apex, Edmentum, Odysseyware, and Edgenuity — in part because of their desire to boost graduation rates. And efforts to crack down on providers offering “bad online courses” were often countered by lobbying from the American Legislative Exchange Council (ALEC), which “has made expanding online learning — unfettered and in all of its forms — one of its priorities.”

32. Common Core State Standards

I haven’t put the Common Core State Standards on this list because I think the content of the standards was bad. I don’t have a horse in that race. That being said, I don’t think it’s a stretch to call the rollout of the Common Core one of the major education disasters of the decade. I include the CCSS here because of the vast influence the standards had, prompting schools to buy new “Common Core-aligned” technology products — hardware and software and digital curriculum and assessments. The Common Core is implicated in several other failures on this list too: the new GED and inBloom, for starters.

31. The Gentrification of Sesame Street

Sesame Street turned 50 this year — one of the greatest successes in the history of education technology, reaching millions and millions of early learners on public television.

In 2015, Sesame Workshop announced that new episodes of the beloved show would appear first on the premium cable channel HBO, and then nine months later on public television. Although the organization said it needed to make the switch for financial reasons, the move was a blow to educational equity — the core mission of Sesame Street. It was, as Siva Vaidhyanathan put it, “a symptom of how Americans view our collective obligations to each other — especially to our poorest children.”

Sesame Workshop also got into the venture capital business in 2016, investing in “‘mission-aligned’ education technology startups” (including test prep companies), and partnered with IBM Watson in 2017.

“New money has ruined Sesame Street,” The Guardian argued. “In its new format the show’s theme tune is a little brighter and the street scene a little ritzier than one remembers from earlier versions.” Elmo has a new apartment. Oscar the Grouch no longer lives in a trash can. The puppets that are featured the most are the ones with the best-selling product lines. And then there are the humans. Some of those humans — those most dearly beloved humans of Sesame Street, Gordon, Luis, and Bob — were fired (and then brought back due to the uproar).

What seems to drive the programming on Sesame Street now isn’t education research; it’s market research. The Sesame Workshop is no longer committed to “equity” as in social justice; it’s interested in “equity” as in the financial stake a venture capitalist takes in a tech startup.

Pink Headed Fruit Dove

30. "Precision Education"

“In some ways, precision education looks a lot like a raft of other personalized learning practices and platform developments that have taken shape over the past few years,” sociologist Ben Williamson wrote in 2018. “Driven by developments in learning analytics and adaptive learning technologies, personalized learning has become the dominant focus of the educational technology industry and the main priority for philanthropic funders such as Bill Gates and Mark Zuckerberg.” But “precision education” does not simply rely on the assessment or behavioral data that feeds most learning software algorithms. It seeks to include in its analysis “intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.”

This is part of a “new genetics of intelligence,” according to behavioral geneticist Robert Plomin, who contends that DNA analysis could provide every student with a “learning chip” that could reliably predict their academic strengths and weaknesses and that could help schools customize — “personalize” — their education.

In 2018, a study published in Nature Genetics said that after studying the genomes of over 1 million people of European descent, it had found that “educational attainment is moderately heritable and an important correlate of many social, economic and health outcomes, and is therefore an important focus in a number of educational genetics studies.” (To be clear, as Ed Yong wrote in The Atlantic, “The team hasn’t discovered ‘genes for education.’”) Although the researchers tried to be very careful in explaining what they had discovered (publishing a FAQ longer than the research paper itself), it’s no surprise that their findings were quickly politicized: “Why Progressives Should Embrace the Genetics of Education” read one op-ed piece in The New York Times, forgetting perhaps that education’s progressives (including Maria Montessori) have been down that path before.
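
Mechanically, the “polygenic scores” such studies produce are nothing exotic: a weighted sum of allele counts, with weights estimated by the genome-wide association study. A toy version (variant names, weights, and genotype all invented):

```python
# A polygenic score is, mechanically, a weighted sum: each variant's
# allele count (0, 1, or 2 copies) times the effect size estimated for
# it, summed over thousands of variants. Names and weights below are
# invented for illustration.
effect_sizes = {"rs0001": 0.021, "rs0002": -0.013, "rs0003": 0.008}

def polygenic_score(genotype):
    """genotype maps variant id -> allele count in {0, 1, 2}."""
    return sum(effect_sizes[v] * count for v, count in genotype.items())

student = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(student))  # 0.042 - 0.013 + 0.0 = 0.029
```

Even in the 2018 study itself, such scores explained only a modest share of the variance in educational attainment, one reason the researchers’ own FAQ cautioned against using them to predict any individual’s outcomes.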

29. Student Loan Startups

Among the most popular areas in education for venture capitalists to invest in this decade were student loan (and other financial aid-oriented) startups. I mean, if there’s a student loan crisis, surely the right thing to do is get into the business of giving out more loans. (Among Secretary of Education Betsy DeVos’s investments: a student loan collection agency.)

Why were student loan companies so popular among investors and entrepreneurs? In part, it reflected the state of the banking industry overall right now: personal loans, commercial loans, and industrial loans have boomed. Investor interest in loan companies also reflects the promise of new technologies — namely, AI and analytics — that have led both established financial institutions and upstarts to believe that loans will continue to be a lucrative market. (One loan startup that raised $200 million from Credit Suisse in 2017 boasted no human decision-making in its application process, for example. It’s all machine learning and algorithms. Gee, what could go wrong?!)

In 2017, the loan industry, coding bootcamps, and their investors started pushing a new type of loan: income-sharing agreements. Income-sharing agreements typically involve some sort of deferred tuition in exchange for a cut of a student’s future income. (Schools offering ISAs included Purdue and Lackawanna College.) As the student loan industry moves towards a more pervasive data-mining of students’ history in order to identify who has the “best potential” for repayment, these sorts of new funding agreements raise all sorts of questions about both overt and algorithmic discrimination. Indeed, Sara Goldrick-Rab called ISAs a “dangerous trend” as they move the risk of financial aid and student loans onto individuals’ backs (rather than the shared risk of public funding), and as such are steps towards privatizing funding for education.
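To make the mechanics concrete, here is a back-of-the-envelope sketch in Python (the tuition, income share, raise, and interest figures are entirely hypothetical, not any school's actual terms) of what a graduate might pay under an ISA versus a conventional fixed-rate loan:

```python
# Hypothetical numbers throughout; not any school or lender's actual terms.

def isa_total_paid(starting_salary, annual_raise, income_share, years):
    """Total paid under an ISA: a fixed share of income for a fixed term."""
    total, salary = 0.0, starting_salary
    for _ in range(years):
        total += salary * income_share
        salary *= 1 + annual_raise
    return total

def loan_total_paid(principal, annual_rate, years):
    """Total paid on a fixed-rate loan with standard monthly amortization."""
    r = annual_rate / 12
    n = years * 12
    monthly = principal * r / (1 - (1 + r) ** -n)
    return monthly * n

# Say: $15,000 in deferred tuition for 10% of income over 4 years,
# versus a $15,000 loan at 6% repaid over 10 years.
print(f"ISA:  ${isa_total_paid(60_000, 0.03, 0.10, 4):,.0f}")   # ~$25,100
print(f"Loan: ${loan_total_paid(15_000, 0.06, 10):,.0f}")       # ~$20,000
```

Under these made-up terms, the graduate whose salary grows steadily pays back considerably more than the borrower with the fixed-rate loan; tilt the assumptions the other way (a low salary, an unemployed graduate) and the investor eats the loss. Which is the point: the terms, set by the funder, determine who carries the risk.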

Of course, those are exactly the reasons why many are hoping student loan and financial aid startups will be profitable investments.

28. Computer-Based Testing Blunders

The Common Core State Standards mandated that their new standardized tests be administered via computers, prompting, as I noted above, a great deal of expenditure on new devices. But the new mode of testing caused plenty of headaches too. 2015 marked the year that states implemented the tests associated with the standards, and many struggled with that very implementation. That’s not a surprise, as a COSN survey that year found educators pleading that “We’re Not Ready for Online Tests.”

There were worries about how the move to online testing might affect scores. There were worries that there weren’t enough computers for test-taking. There were worries about kids’ abilities to type. There were worries about Internet connectivity, particularly in poor and rural schools. And many of the worries were well-founded. The problems weren’t just with district infrastructure; there were problems with vendors’ websites. There were technical problems in Minnesota, Florida, Colorado, New Jersey, Nevada, North Dakota, Montana, Tennessee, and elsewhere. Some states blamed the vendors and some blamed school IT and some states blamed “hackers.” The Common Core tests in one New Jersey school district were postponed as its entire computer network was held hostage by ransomware in exchange for 500 Bitcoins, approximately $124,000 at the time.

The technical problems were so bad in some states that they prompted concerns about the validity of the scores and about states’ ability to meet the federally-mandated levels of test participation (numbers exacerbated too by the “Opt Out” movement). Several states accused their testing vendors of breach of contract, and several switched vendors, hoping that would fix the problem.

Spoiler alert: it did not.

27. Online Preschool

Arguably one of the best investments we could make in education would be to fund universal preschool. High quality universal preschool, staffed with well-paid professionals. High quality universal preschool, staffed with well-paid professionals in a brick-and-mortar setting and not online. But online preschool, according to some education reformers, is poised to “change the way rural America does early education.” One program, Upstart, “mixes adaptive software for preschool-aged kids, along with child development training and check-ins for parents. The program focuses on building pre-literacy skills, such as sound blending and letter names, and is designed to be used 15 minutes a day for a total of 75 minutes per week. Currently Upstart is used by about 19,000 families in Utah and in smaller pilots in about 15 other states,” Edsurge reported earlier this year. Many experts have cautioned against these programs, arguing, for example, that “Online ‘preschool’ lacks the concrete, hands-on social, emotional and intellectual educational components that are essential for quality learning in the early years. Further, online preschools are likely to exacerbate already existing inequalities in early education by giving low-income children superficial exposure to rote skills and ideas while more privileged children continue to receive developmentally sound experiences that provide a solid foundation for later academic success.”

These sorts of stories that promote high tech “solutions” for certain communities always make me wonder why we don’t opt instead to build out local capacity rather than insisting that the only alternative is education-from-elsewhere online.

26. Google Glass

Google announced “Project Glass” in 2012, and The New York Times’ Nick Bilton described them as “a very polished and well-designed pair of wrap-around glasses with a clear display that sits above the eye. The glasses can stream information to the lenses and allow the wearer to send and receive messages through voice commands. There is also a built-in camera to record video and take pictures.” But the tech press’s initial fondness for the futuristic design quickly faded: “I, Glasshole: My Year With Google Glass,” Mat Honan wrote in 2013.

Education technology blogs also gushed about Glass, churning out hundreds of articles like “30 Ways Google Glass In Education Might Work” and “14 Google Glass Innovative Uses In Education.” Adam Bellow delivered the closing keynote at ISTE 2013 wearing Google Glass. Ed-tech evangelists made it clear, from the outset, that surveillance was the future.

From the outset, there were major concerns about privacy and security with Glass, as well as problems with accessibility. In 2015, Google announced it would stop selling the product. Bonus points, however, if you’re an education guru whose Twitter profile still has you wearing Glass.

Kereru

25. Peter Thiel

On stage at Techcrunch’s annual conference Disrupt, investor Peter Thiel announced a new initiative, the “20 Under 20” program in which he would pay a select group of young people $100,000 to drop out of school and pursue their entrepreneurial interests. It was a political PR move that coincided with his larger thesis: we are in a “college bubble,” which, like the housing bubble, finds people making unreasonable investments in something that is just not worth it.

If that was all that Peter Thiel did or said — stirred up the public’s resentment towards the expansion of higher education with some quaint story about talent and meritocracy — he’d probably be much lower on this list. (Here’s one look at what some of the past Thiel Fellowship recipients are up to. Yawn.) But Thiel has worked actively this decade to subvert democracy — he’s never been a fan apparently — bankrolling an attack on the First Amendment, for example, endorsing Donald Trump for President, and sitting on the Board of Facebook.

24. The Secretaries of Education

How funny that two of the longest serving members of both the Obama and the Trump Administrations were their Secretaries of Education: Arne Duncan and Betsy DeVos, respectively. Actually, it’s not funny at all. They were both pretty terrible. Beloved by VCs. (Indeed, since leaving office Duncan became one.) And beloved by ed reform types. But terrible. Among the ed-tech disasters they facilitated: a broken FAFSA, the floundering EQUIP experiment, and terrible FERPA enforcement, for starters.

More generally, the Department of Education has shown how tightly connected it is to industry. Former department officials have cycled in and out of the Gates Foundation, the Chan Zuckerberg Initiative, ISTE, the student loan industry, startups, and the like.

23. OPMs and Outsourcing

Despite all the hullabaloo this decade about the importance of offering online classes, many schools opted not to develop the capacity internally to do so, but rather to outsource that function to a vendor. In 2017, The Century Foundation issued a blistering report on these online program managers (OPMs) titled “The Private Side of Public Higher Education.” Although universities have long outsourced some of their services to third-party providers — food services and parking enforcement, for example — OPMs could pose significant risks to higher ed, the report argued, as “the functions of OPMs are closely linked to the core educational mission of these public institutions. As a result, the quality of the services provided by OPMs has a direct bearing on the quality of the school itself and the ability of these institutions to fulfill their mission to train students and prepare them for the workforce.”

The involvement of OPMs in the establishment and growth of online educational opportunities at public institutions exposes consumers to the financial interests of decision-makers, interests that would not exist if exclusively public or nonprofit institutions were involved in providing these distance learning programs. Driven by the desire and need to make money for investors or owners, those to whom executives are held accountable, these companies may prioritize profit over the interests of online students, to whom they owe no loyalty, financial or otherwise.

As Phil Hill observed, as for-profit universities and MOOC providers stumbled this decade, several sought to become OPMs instead. (That is to say, there are really just a handful of pivots you can make in ed-tech: become an OPM or become an LMS or try to market yourself as both.)

22. Automated Essay Grading

Robot essay graders — they grade just the same as human ones. Or at least that was the conclusion of a 2012 study conducted by University of Akron's Dean of the College of Education Mark Shermis and Kaggle data scientist Ben Hamner. The researchers examined some 22,000 essays administered to junior high and high school students as part of their states’ standardized testing process, comparing the grades given by human graders and those given by automated grading software. They found that “overall, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items with equal performance for both source-based and traditional writing genre.”

“The demonstration showed conclusively that automated essay scoring systems are fast, accurate, and cost effective,” said Tom Vander Ark, partner at the investment firm Learn Capital, in a press release touting the study’s results. (It did not.)

When edX announced it had developed an automated essay grading program, its president Anant Agarwal boasted that the software was an improvement over traditional grading methods. “There is a huge value in learning with instant feedback,” he told The New York Times. (There is not.)

Automated essay grading software can be fooled with gibberish, as MIT’s Les Perelman has shown again and again. Moreover, the algorithms underpinning them are biased, particularly against Black students. But that hasn’t stopped states from adopting automated grading systems for standardized testing and software companies from incorporating automated grading systems into their products.
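Perelman's gibberish experiments work because these systems score proxies for writing quality rather than meaning. Here is a deliberately crude caricature in Python (my own toy features and weights, not any vendor's actual model) of why verbose, big-word nonsense can outscore plain, coherent prose:

```python
# A toy surface-feature scorer. None of these features ever look at meaning.

def toy_essay_score(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    length_score = min(len(words) / 500, 1.0)      # longer reads as "developed"
    vocab_score = len({w.lower() for w in words}) / len(words)  # lexical variety
    avg_word_len = sum(len(w) for w in words) / len(words)
    word_len_score = min(avg_word_len / 8, 1.0)    # big words read as "advanced"
    return round(100 * (0.5 * length_score +
                        0.2 * vocab_score +
                        0.3 * word_len_score), 1)

coherent = "The essay argues a clear point with short plain words. " * 20
gibberish = ("Multifarious pedagogical epistemologies notwithstanding "
             "perspicacious interlocutors promulgate sesquipedalian "
             "obfuscation. ") * 40

print(toy_essay_score(coherent))   # 37.9
print(toy_essay_score(gibberish))  # 66.5 -- higher, despite meaning nothing
```

Real systems use fancier features, but the structural problem is the same: whatever the model measures, a student (or a gibberish generator) can optimize for it directly, no meaning required.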

21. Rocketship Education

In 2012, PBS News Hour’s John Merrow asked, “Can Rocketship Launch a Fleet of Successful, Mass-Produced Schools?” Betteridge’s Law of Headlines would tell us the answer is “no.” But many in Silicon Valley thought that Rocketship’s model — a charter school chain that relies heavily on computer-based instruction — would be the model for future schools, and the organization raised money from the Gates Foundation, Netflix CEO Reed Hastings, and Facebook’s Sheryl Sandberg (for starters).

One of the key features of Rocketship was its Learning Labs, a cost-saving way to quickly scale that required “students to spend significant time working on laptops in large groups supervised by noncertified, lower-paid ‘instructional lab specialists.’” "Students rotating into Learning Labs meant employing fewer teachers," Richard Whitmire wrote in a book about the charter school chain. "Thus a school such as Rocketship Mosaic could successfully serve 630 students with only 6 teachers plus aides." That News Hour documentary shows young, mostly Latino students, clearly bored and frustrated, sitting in row after row after row of cubicles, supposedly working (alone) on some computer program. It’s probably the scene of children using ed-tech that has stuck with me the most this decade.

According to NPR’s Anya Kamenetz, who wrote a detailed article on some of the major drawbacks to the Rocketship model, the emphasis on maximizing the instructional time on computers had led some schools to limit the bathroom breaks that students were allowed to take. One teacher reported that “Bathroom breaks are a problem because the kids go there to play, because they only have one 20-minute play period during the day." Other teachers said that bathroom accidents were common.

Although Rocketship founder John Danner hoped that the charter school chain would serve one million students by 2020, his goal has since been significantly revised. There are fewer than 20 schools in operation today, as expansion has faced growing resistance. (Danner left Rocketship to start another ed-tech company in 2013.)

Blue Headed Quail Dove

20. Predictive Analytics

“Should big data be used to discourage poor students from university?” ZDNet asked in 2017, describing an algorithm that could help predict whether or not low-income students would be successful at school — but not so more resources could be directed their way to help them succeed. Not so that they received more money or more support. Nope. Rather, could big data be utilized so these students were discouraged from going to school in the first place?

For a long time, stories about predictive analytics in education weren’t framed that way, no surprise. Education technology proponents were wont to say that big data would be used to encourage low-income or minority students — or at least to send them nudges and notifications when they appear, algorithmically at least, to be struggling. These algorithmic products were marketed as helping students succeed and — no surprise — helping schools make more money by retaining their tuition dollars. But as the years have passed, it has become clear that many schools are doing precisely what ZDNet had alluded to: using predictive analytics to recruit certain students and to avoid others altogether, exacerbating existing educational inequalities.
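The mechanism here is banal, which is what makes it dangerous. A minimal sketch in Python (an invented scoring formula and thresholds, purely illustrative, not any vendor's product) shows how the very same "success score" can route a student toward extra advising or quietly screen her out, depending entirely on an institutional policy choice:

```python
# Purely illustrative: a hand-rolled "success score" standing in for the
# proprietary models vendors sell to enrollment management offices.

def success_score(applicant: dict) -> float:
    # Toy weights. Features like family income bake historical
    # inequality directly into the prediction.
    score = 0.4 * applicant["hs_gpa"] / 4.0
    score += 0.3 * min(applicant["family_income"] / 100_000, 1.0)
    score += 0.3 * applicant["parent_went_to_college"]
    return min(score, 1.0)

def support_policy(applicant: dict) -> str:
    """One use of the score: direct extra advising to at-risk students."""
    return "assign advisor" if success_score(applicant) < 0.5 else "standard track"

def recruitment_policy(applicant: dict) -> str:
    """The other use of the very same score: avoid 'risky' students."""
    return "recruit" if success_score(applicant) >= 0.5 else "do not pursue"

applicant = {"hs_gpa": 3.4, "family_income": 28_000, "parent_went_to_college": 0}
print(round(success_score(applicant), 3))  # 0.424, dragged down by income
print(support_policy(applicant))           # assign advisor
print(recruitment_policy(applicant))       # do not pursue
```

Nothing in the model distinguishes the two uses. The code that helps and the code that excludes are one and the same; everything depends on what the institution decides to do with the number.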

19. Platforming Education

In his review of Nick Srnicek’s book Platform Capitalism, John Herrman writes,

Platforms are, in a sense, capitalism distilled to its essence. They are proudly experimental and maximally consequential, prone to creating externalities and especially disinclined to address or even acknowledge what happens beyond their rising walls. And accordingly, platforms are the underlying trend that ties together popular narratives about technology and the economy in general. Platforms provide the substructure for the “gig economy” and the “sharing economy”; they’re the economic engine of social media; they’re the architecture of the “attention economy” and the inspiration for claims about the “end of ownership.”

In his book, Srnicek posits that platforms are poised to become the fundamental business model of our digital world — key to the new economy, clearly, but also key to political and social systems (and how these will be shaped under the control of the powerful technology industry). And certainly the goal for many technology companies — education and otherwise — has been to become a platform: that is, to become a key piece of digital infrastructure whose business model rests on the extraction of data.

In many ways, the learning management system is the prototypical education platform. The LMS has long positioned itself as an “operating system,” of sorts, for higher education, one that certainly predates any talk of a “platform economy.” But LMS providers spent the decade buying up a vast number of other companies so as to extend the functionality of their original product — companies that offered administratively adjacent features and would facilitate the extraction of more data from students’ and professors’ activities online than just what could be gleaned from the student information system.

(The LMS isn’t the only platform in education to be sure. Perhaps the most dominant and dangerous is Google — which nears the top of this list.)

18. Bridge International Academies

Bridge International Academies was founded in 2007, promising to bring quality education to parts of the Global South. Its schools are technology-intensive — that means scripted lessons for teachers to read via mobile device, the tracking of teacher and student attendance via mobile device, the payment of tuition by parents and the payment of teacher salaries via mobile device, for example. Bridge has received financial backing from Silicon Valley investors and executives, including Mark Zuckerberg and Bill Gates. “It’s the Tesla of education companies,” investor Whitney Tilson told journalist Peg Tyre. (I guess that’s supposed to be a compliment and not an admission that the startup suffers from exploitative, racist labor practices.)

Tyre’s investigation into Bridge International Academies chronicled the problems that the startup faced as it tried to scale: opposition from teachers’ unions, regulatory hurdles, inspections that found unsanitary conditions, the inability of poor parents to afford the tuition. In order to scale, one civil rights group in Uganda charged, Bridge marketed to students already enrolled in public schools rather than trying to recruit students who attended informal schools. In 2016, a Ugandan judge ordered Bridge to close its schools in the country; the startup refused.

17. Test Prep

Year after year after year after year, the most well-funded startups in education technology were those offering tutoring and test prep. Neither is a progressive trend. Both can exacerbate educational inequalities as more affluent families can afford to give their children extra academic support (or at least some extra tips and tricks on how to do well on standardized tests).

To counteract some of these inequalities — in test prep and in the testing itself — the College Board partnered with Khan Academy in 2015 to provide free online SAT prep courses (after years of insisting that test prep would actually make no difference in how well one performed on the exam). One year later, the head of the College Board David Coleman boasted, “Never in my career have I seen a launch of technology on this scale that has broken down the racial divisions that so haunt this nation — never.” And in 2017, the College Board released data that showed “20 hours on free Official SAT Practice on Khan Academy is associated with an average score gain of 115 points, nearly double the average score gain compared to students who don’t use Khan Academy.” One problem though: not all students practiced the same amount, and students with highly educated parents tended to spend more time doing so. As Matt Barnum and Sarah Darville wrote in Chalkbeat, “The College Board’s research doesn’t show whether Khan Academy truly caused the score increases. Perhaps the students who used Khan most were particularly motivated or were using other study aids.”

Perhaps test prep will never undo the inequalities baked into the system. Perhaps, just perhaps, it was never really meant to.

16. Amplify

In 2012, Rupert Murdoch’s media company News Corp launched Amplify, a “new brand,” Fast Company said, “for the completely digital classroom.” The head of the new division: Joel Klein, the former chancellor of the New York City schools. Speculation that News Corp was planning to enter the education technology market had been brewing since the beginning of the decade, when Klein stepped down from his chancellor position to join the company. In November 2010, News Corp acquired Wireless Generation for $360 million. Wireless Generation had several contracts with the NYC Department of Education: building its student data warehouse, ARIS, and writing the algorithm for School of One, a math program that delivered personalized “playlists” of instruction.

None of this was popular in New York City — not Klein, not School of One, not ARIS — it’s worth noting. And the news of an education division also came on the heels of an inquiry into News Corp’s phone hacking scandal. Any honest observer knew Amplify was doomed from the outset.

In the spring of 2013, Amplify unveiled its new tablet at a splashy launch at SXSWedu. In the fall of that year, one of its main customers, Guilford County Schools in North Carolina, announced it was recalling all the devices in use in the district after reports that a student’s charger had melted. There were other flaws too: broken screens on about 1,500 of its tablets. By 2015, rumors spread that News Corp was winding down Amplify’s hardware business. But it turned out, News Corp wanted out of the education business altogether, placing Amplify up for sale. The company was sold to the Emerson Collective, the venture firm run by Laurene Powell Jobs, for an undisclosed sum.

Rose Crowned Fruit Dove

15. Jeffrey Epstein and the MIT Media Lab

“A decade before #MeToo, a multimillionaire sex offender from Florida got the ultimate break,” Julie K. Brown wrote in The Miami Herald in 2018, detailing the sweetheart plea agreement that Jeffrey Epstein had received that helped him stay out of prison despite a 53-page federal indictment for sexual abuse and sex trafficking. But in July of this year, Epstein was arrested in New York and charged with sex trafficking. Epstein committed suicide while waiting for his trial, an act that has spawned numerous conspiracy theories.

Unsealed documents linked Epstein to a sex ring in which girls were forced to have sex with a number of powerful men, including Prince Andrew, former New Mexico Governor Bill Richardson, and AI pioneer Marvin Minsky. Indeed, many scientists had continued to take Epstein’s money, even after he was jailed in 2007 for soliciting sex with a girl. Among those was Joi Ito, the head of the MIT Media Lab, who posted an apology to his blog for taking money for the Lab as well as for his own startup endeavors. There was, no surprise, a huge outcry, and several professors resigned their positions there. (Lab co-founder Nicholas Negroponte, on the other hand, said that taking Epstein’s money was justified and that relying on billionaires’ largesse was what made the work there possible.) An article in The New Yorker by Ronan Farrow detailed how Ito and others at the Media Lab had covered up Epstein’s donations. Less than a day after it was published, Ito resigned.

Accepting the 2019 Barlow/Pioneer Award from the EFF, danah boyd called for a “great reckoning” in the tech industry, but we will have to wait until the next decade for that apparently. What will come first: that great reckoning? Or Ito’s return to tech?

14. inBloom

The Shared Learning Collaborative was introduced in 2011: “Shared Tools for Teachers: There’s an App for That!,” the Gates Foundation’s Vicki Phillips wrote enthusiastically. Her post explained the possibility of a “huge app store — just for teachers — with the Netflix and Facebook capabilities we love the most.” The Shared Learning Collaborative, she explained, was funded by an $87 million grant from the Gates Foundation and backed by the Council of Chief State School Officers (an organization of state school superintendents that was a major proponent of the Common Core State Standards). The Shared Learning Collaborative had contracted with Wireless Generation to build the Shared Learning Infrastructure, Phillips said, which was to be open source and was designed to deliver “a less expensive, more connected way to store student data with the potential to make student learning more personal.” It’s like she had no idea how many warning bells her post would set off.

SLI was to be composed of several components, including a student data warehouse, a database of educational content, and a set of APIs that would supposedly move administrative, assessment, and behavioral data about students in and out of various applications and in and out of the main data store. Agreeing to take part in a pilot were districts in Kentucky, Massachusetts, New York, Illinois, North Carolina, Colorado, Louisiana, Georgia, and Delaware.

The Shared Learning Collaborative named a new CEO in 2012: Iwan Streichenberger, a former executive at the interactive whiteboard maker Promethean. And it made its formal debut, rebranded as inBloom, at the 2013 SXSWedu conference, where Streichenberger appeared on stage with Bill Gates.

The organization faced concerns from the outset. A connection to the unpopular Common Core State Standards didn’t help what appeared to be yet another ed-reform effort bankrolled by Bill Gates. The involvement of News Corp’s Wireless Generation irked parents in New York City, one of the districts that was supposed to pilot the technology. And then, of course, there were the fears about data privacy and security. As The New York Times’ Natasha Singer reported, “The inBloom database included more than 400 different data fields about students that school administrators could fill in. But some of the details seemed so intimate — including family relationships (“foster parent” or “father’s significant other”) and reasons for enrollment changes (“withdrawn due to illness” or “leaving school as a victim of a serious violent incident”) — that parents objected, saying that they did not want that kind of information about their children transferred to a third-party vendor.”

Parent protests and lawsuits prompted states to withdraw their participation in the pilot project, and inBloom suffered a major blow when the state of New York passed legislation that restricted its Department of Education from contracting with companies to store and aggregate student data. A little over a year after its launch at SXSWedu, and $100 million in funding later, CEO Streichenberger announced inBloom would be shutting down.

13. Blockchain Anything

There is no good reason to use Blockchain technologies — in education or otherwise. The Blockchain will not prevent cheating. It will not stop people from lying on their resume. It will not make it easier for students to request a copy of their school transcripts. It will not improve college retention rates. It would not have prevented the “college admissions scandal.”

Stop it.

12. The New GED

The GED exam was developed during WWII to award enlisted soldiers who’d left high school before they could graduate a credential indicating that they had the academic skills equivalent to those represented by a high school diploma. The GED has since been used by those who, for whatever reason — fighting fascism or otherwise — did not graduate from high school but still needed to demonstrate they were prepared for employment or educational opportunities.

In 2014, a new GED test, created by the education giant Pearson, went into effect. Aligned with the new Common Core State Standards, the revised exam was much, much tougher, focusing on “college readiness” rather than the previous emphasis on “workforce readiness”. The new version was also administered via computer — arguably a barrier to many test-takers unfamiliar with the devices — and the price of the test shot up from $80 to $120. Credit cards only, thank you very much. “About 86,500 people passed the new test in 2014, compared with 540,535 in 2013,” The Daily Beast reported.

Although the passing rate for the GED has inched up since then, the number of test-takers has remained low. Fewer than half as many people took the test in 2017 as in 2012. (To be fair, there are also a number of alternatives to the GED these days.) And as the National Center for Education Statistics stopped collecting GED data in 2013, it’s not entirely clear what has happened to those adults who might be seeking alternative credentials. What we do know: not only has the GED test got tougher — thanks Pearson — federal funding for adult education has also taken a hit during this decade. All that, alongside a steady stream of ed-tech propaganda about the overblown promises of traditional diplomas and degrees and about the potential for new, privatized education alternatives.

11. AltSchool

AltSchool was founded in 2013 by Max Ventilla, a former Google executive (his Q&A company Aardvark was acquired by Google in 2010, but he’d worked at the tech giant previously too). When he departed Google, Techcrunch speculated that his next project would be education-related, based on a tweet from his wife — a photograph of a pile of education-related books. Embracing the Silicon Valley mantra of “fail fast and pivot,” Ventilla took that reading list and jumped headfirst into education, hiring engineers and teachers (reportedly hiring one engineer for every teacher) and starting a new, for-profit school. Almost immediately, he was able to raise venture capital for his idea — $33 million in 2014 from the likes of Andreessen Horowitz.

Silicon Valley loved the idea of AltSchool, and the investment dollars kept rolling in — almost $180 million in total. In 2015, Wired gushed that AltSchool could “save education” through its blend of personalized learning and surveillance technologies. But some parents began to push back, claiming that their children were being used as “guinea pigs” for the startup’s untested software and unproven teaching methods. With an ambitious plan to scale, AltSchool found itself burning through money at an unsustainable pace — $40 million a year, according to some reports — and it began shuttering schools instead of opening them.

In June of this year, AltSchool “called it quits,” rebranding as Altitude Learning and planning to sell its software and professional development services to schools.

Red Knobbed Imperial Pigeon

10. Google for Education

In April 2010, Oregon became the first state to open up Google Apps for Education to all its public schools. “This move is going to save the Department of Education $1.5 million per year — big bucks for a hurting budget,” Google boasted on its blog. And for a while, the news was full of stories about this state or that state, this district or that district “going Google” (as well as a fair amount of PR from Microsoft about those who opted to stay with its suite of office and email products).

Google introduced the Chromebook in 2011, and although the hardware was largely panned, Chromebooks found a receptive customer base in education, happy to have a cheaper alternative to the more expensive Apple or Microsoft devices — perfect for the new online assessments mandated by the Common Core. (Apple sneers about this. No surprise as Chromebooks now make up 60% of all laptops and tablets sold to K-12 schools, up from 5% in 2012.) In 2014, the company launched Classroom, its free pseudo-LMS, one of many products aimed at the education market — aimed at making Google the platform for education.

But at what cost comes “free”? Google continues to data mine its education products — it won’t say exactly what data it collects from students. And the company has faced several lawsuits from students over whether the data that’s scanned is used in advertising, still Google’s main business.

It isn’t just the use of student data to fuel Google’s business that’s a problem; it’s the use of teachers as marketers and testers. “It’s a private company very creatively using public resources — in this instance, teachers’ time and expertise — to build new markets at low cost,” UCLA professor Patricia Burch told The New York Times in 2017 as part of its lengthy investigation into “How Google Took Over the Classroom.”

9. Virtual Charter Schools

According to recent data from the Department of Education (as reported by Education Week), “half of all virtual charter high schools had graduation rates below 50 percent in the 2016-17 school year. Thirty-seven percent of schools had graduation rates at or above 50 percent. Graduation data for the remaining 13 percent of schools was masked for various reasons, such as to protect student privacy. There are about 163 virtual charter schools educating over 30,000 seniors nationally as determined by the adjusted cohort graduation rate, according to federal numbers.” In some states — Indiana, for example — not a single virtual charter school has had a graduation rate over 50% for the past four years.

One of the largest studies of online charters, published by Stanford in 2015, found that students in these schools had less contact with a teacher in a week than students in traditional, brick-and-mortar schools have in a day, and their academic progress was so poor that it was almost like not going to school at all.

And yet, despite the abysmal performance of virtual charters, states have struggled to hold these (largely) for-profit endeavors accountable. There was the odd lawsuit, to be sure. K12 Inc, one of the best known virtual charter chains, reached an $8.5 million settlement with the state of California in 2016 after it was accused of false advertising and inadequate instruction. In Indiana, two virtual charters were accused of “counting toward their enrollment thousands of students who either never signed up for or completed classes,” collecting almost $10 million in state funding along the way. In Ohio, the Electronic Classroom of Tomorrow, or ECOT, claimed to serve some 15,000 students, but state officials found the virtual charter school was significantly inflating its numbers and ordered the school to repay $80 million in funding. (ECOT was forced to close in 2018.)

But in spite of the lousy education and unethical business practices, profits at virtual charters are up. K12 Inc, for example, announced this summer that for the first time in its history, its revenue had topped $1 billion.

8. LAUSD's iPad Initiative

In 2013, the Los Angeles Unified School District awarded a $30 million contract to Apple, paving the way (supposedly) for an ambitious $1.3 billion plan to give every student in the district an iPad. (It was such a big deal that Apple issued the rare press release.)

There were critics from the outset. Some felt as though the plan was too expensive — the district would pay $678 per device, much higher than the retail cost — and the district’s infrastructure was inadequate for the new demands on its WiFi capacity. Others said that the project was being rushed; the software and curriculum — built by Pearson — was not ready. Still others feared that there hadn’t been sufficient training for teachers on how to incorporate the devices into their classroom and that there hadn’t been the necessary plans for security.

Indeed, almost as soon as the first iPads reached students’ hands, there were reports that students had “hacked” their devices. (In truth, they had merely worked to bypass some of the administrative controls the district had placed on them.) The district quickly put an end to students bringing their iPads home after disputes over who was responsible for lost or stolen devices. The rollout was, from the outset, a total fiasco.

Then, in 2014, local NPR affiliate KPCC launched an investigation into how the district had struck its deal with Apple and Pearson in the first place, publishing emails between the Superintendent John Deasy and Pearson CEO Marjorie Scardino. New allegations surfaced that the bidding process for the project had been unfair. “I don't know why there would have to be an RFP,” one Pearson sales executive had said. Deasy resigned in October, and the district canceled its contract with Apple. In December 2014, the FBI raided district offices, taking with them some 20 boxes of materials pertaining to an investigation into the iPad program. (The federal inquiry ended in 2017 with no charges.)

In 2015, the school board voted to approve a $6.5 million settlement with Apple over the project. But that money, the district later clarified, would actually be paid by Pearson.

7. ClassDojo and the New Behaviorism

ClassDojo was founded in 2011 and, thanks to the help of over $66 million in venture funding, has become one of the most popular ed-tech apps of the decade — used, according to its website, in 95% of US schools. Originally designed as a behavioral management tool — monitoring, tracking, and rewarding student behavior — it is more apt these days to describe itself as a communication tool: “ClassDojo connects teachers with students and parents to build amazing classroom communities.” ClassDojo has helped behavioral engineering creep from the school into the home.

Increasingly ClassDojo has framed itself as the tool for schools wanting to enhance “social emotional learning,” developing animations and lessons about “growth mindsets,” “grit,” and “mindfulness.”

ClassDojo and other types of behavior management products claim that they help develop “correct behavior” and “right mindsets.” But what exactly does “correct behavior” entail? And what does it mean if schools entrust this definition to for-profit companies and their version of psychological expertise? As Ben Williamson has observed, “social-emotional learning is the product of a fast policy network of ‘psy’ entrepreneurs, global policy advice, media advocacy, philanthropy, think tanks, tech R&D and venture capital investment. Together, this loose alliance of actors has produced shared vocabularies, aspirations, and practical techniques of measurement of the ‘behavioural indicators’ of classroom conduct that correlate to psychologically-defined categories of character, mindset, grit, and other personal qualities of social-emotional learning.” These indicators encourage behaviors that are measurable and manageable, Williamson contends, but SEL also encourages characteristics like malleability and compliance — and all that fits nicely with the “skills” that a corporate vision of education would demand from students and future workers.

6. "Everyone Should Learn to Code"

Over and over and over this past decade, we were told that “everyone should learn to code.” We were told there is a massive “skills gap”: too few people have studied science, technology, engineering, or math; and employers cannot find enough skilled workers to fill jobs in those fields. But these claims about a shortage of technical workers are debatable, and there’s plenty of data to indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with certain STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”

But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.

The “jobs of the future” — when pundits and politicians claim to know what they’ll be — will be highly technical and will require computer programming, we’ve been told. The Bureau of Labor Statistics suggests otherwise: two of the fastest growing jobs in the next decade will be home health aides and personal care aides — jobs that pay less than $25,000 a year.

But the tech industry has funded plenty of startups, not to mention much of the narrative about learning to code. Politicians have got in on the action as well — take then-NYC Mayor Mike Bloomberg, who tweeted in 2012 that his New Year’s Resolution would be learning to code with Codecademy. Thanks in no small part to industry lobbying, legislation passed in some 24 states making computer science a graduation requirement. No matter how useful you believe “computational thinking” to be, this is clearly a powerful industry reshaping curriculum.

All the while, the technology industry has continued to struggle with diversity, despite claiming that it’s working to hire more people of color and women. So if, indeed, schools are forced to bend to Silicon Valley’s will, education will be molded to suit an industry that remains at best exclusionary and at worst utterly toxic.

Luzon Bleeding Heart Dove

5. Gamergate

In the fall of 2014, Deadspin writer Kyle Wagner declared that “The Future Of The Culture Wars Is Here, And It's Gamergate.” “What we have in Gamergate,” he wrote, “is a glimpse of how these skirmishes will unfold in the future—all the rhetorical weaponry and siegecraft of an internet comment section brought to bear on our culture, not just at the fringes but at the center. What we're seeing now is a rehearsal, where the mechanisms of a toxic and inhumane politics are being tested and improved.” He was right.

Just a few months earlier, “an angry 20-something ex-boyfriend published a 9,425-word screed and set in motion a series of vile events that changed the way we fight online,” The New York Times recently recalled. “The post, which exhaustively documented the last weeks of his breakup with the video game designer Zoë Quinn, was annotated and punctuated with screenshots of their private digital correspondence — emails, Facebook messages and texts detailing fights and rehashing sexual histories. It was a manic, all-caps rant made to go viral. And it did. The ex-boyfriend’s claims were picked up by users on Reddit and 4chan and the abuse began.” “Gamergate,” as the online harassment campaign came to be called, merged old troll tactics with new ones; and it is clear that it “prototyped the rise of harassment influencers” and the alt-right: Steve Bannon and Breitbart, Milo Yiannopoulos, Mike Cernovich.

If your response here is “what does this have to do with education,” you haven’t been paying attention. It isn’t just that games-based education has had to reckon with Gamergate. (Or not reckon with it, as the case may be. Markus Persson, the creator of Minecraft, acquired by Microsoft in 2014, likes to use the word “feminist” as an insult and has claimed that gender doesn’t exist in Minecraft — a game where the main character is a blocky dude called Steve, for crying out loud.) The Verge recently wrote that “As misinformation and hate continues to radicalize young people online, teachers are also grappling with helping their students unlearn incorrect, dangerous information. ‘It has made a lot of us teachers more cautious,’ they say. ‘We want to challenge our students to explore new ways of thinking, to see the cultural meaning and power of video games, but we’re understandably anxious and even scared about the possible results.’”

4. "The Year of the MOOC"

The New York Times declared 2012 “the Year of the MOOC.” And I chose "The Year of the MOOC" rather than "The MOOC" as the big disaster because I wanted to underscore how much of the problem here was the PR, the over-promising and the universities and startups believing their own hype.

The hype about massive open online courses really began in late 2011, with huge enrollment in three computer science courses that Stanford offered for free online during the Fall semester, along with the announcement of MITx in December. But starting in January 2012, the headlines — and the wild promises — about MOOCs were incessant: Sebastian Thrun announced he was leaving Stanford to found Udacity, predicting that in 50 years, “there will be only 10 institutions in the world delivering higher education and Udacity has a shot at being one of them”; Stanford professors Andrew Ng and Daphne Koller unveiled their new startup, Coursera; and MIT and Harvard announced the formation of edX. Hundreds of millions of dollars of investment were funneled into these (and other) MOOC providers. Universities scrambled to publicize their partnerships with them. (In June 2012, the University of Virginia Board of Visitors ousted President Teresa Sullivan, contending she was too slow to jump on the MOOC bandwagon even though the school was already in talks to join Coursera.)

MOOCs, like so many ed-tech products, were declared “revolutionary,” with hundreds of thousands of students signing up to listen to video-recorded lectures, take online quizzes, and chat in forums. (The technology, despite the deep computer science knowledge of the venture-capital backed MOOCs’ founders, was mostly crap.) Hundreds of thousands of students signed up, but very very few finished the courses (and most “successful” MOOC students already had college degrees) — a potential counter to any claim that these free online classes were going to extend educational opportunities to everyone and replace a university education. The dropping-out was often dismissed as a feature, not a bug. Students were just curious; they never planned on completing the course, advocates insisted.

But as MOOC providers started to work more closely with universities to offer their courses for credit, the low completion rates (arguably) mattered more. In a high profile announcement in early 2013, California Governor Jerry Brown, San Jose State University President Mo Qayoumi, and Udacity CEO Sebastian Thrun unveiled a pilot program that marked a first for the state: San Jose State would award college credits for special versions of select Udacity classes. The program would “end college as we know it,” Techcrunch cooed. But just a few months later, citing concerns about the quality of the courses, San Jose State put the project on pause. “We have a lousy product,” Sebastian Thrun told Fast Company that fall, saying with a shrug he didn’t even like the word “MOOC.”

Investors continued to fund MOOCs nonetheless. Coursera has raised over $310 million. (Richard Levin, the former President of Yale who led that school’s failed online initiative AllLearn back in the early 2000s, was briefly CEO.) Udacity has raised some $160 million. But even with all that venture capital to keep the lights on, MOOCs have had to look for some sort of revenue model. So gone — mostly gone, at least — are the free online courses. Gone are the free certificates. The MOOC revolution simply wasn’t.

3. Venture Capitalism

Billions of dollars of venture capitalism has been funneled into education technology this decade. And this list demonstrates what we have to show for it.

2. (Venture) Philanthropy

Billions of dollars of (venture) philanthropy has been funneled into education technology this decade. And this list demonstrates what we have to show for it.

In many ways, philanthropy and venture capital worked hand-in-hand — the former set the policy agenda for ed-tech and the latter fueled the entrepreneurs and stoked the market for it.

The Gates Foundation has continued to have an outsized influence in shaping public policy (even though Gates has admitted that his initiatives haven’t had a great track record).

He has been joined this decade by other tech billionaires — by Mark Zuckerberg, for example. The Facebook founder made his first, high profile education donation in 2010, announcing on Oprah, with New Jersey Governor Chris Christie and Newark Mayor Cory Booker at his side, his plans to give the city’s schools $100 million. In 2015, Zuckerberg launched the Chan Zuckerberg Initiative, he and his wife Priscilla Chan’s philanthropic company. And it is, to be clear, a company, not a foundation — a business structure that gives Zuckerberg more control in investing in for-profit companies and in political causes, as The New York Times explained.

Amazon’s Jeff Bezos also announced a philanthropic effort (that is, when he’s not spending his billions on funding space travel). It would fund and operate “a network of high-quality, full-scholarship, Montessori-inspired preschools in underserved communities.” “We’ll use the same set of principles that have driven Amazon,” Bezos wrote in a note posted to Twitter. “Most important among these will be genuine intense customer obsession. The child will be the customer.”

These philanthropists’ visions for the future of education and education technology mirror their own businesses: the child will be the customer. The child’s data will be mined. The child’s education will be personalized.

1. Anti-School Shooter Software

The most awful education technology development of the decade wasn’t bankrolled by billionaire philanthropists. Rather, it emerged from a much sicker impulse: capitalism's comfort with making money off of tragedy. It emerged from this country’s inability to address gun violence in any meaningful way. And that is despite a decade that saw a steady rise in the number of school shootings. “10 years. 180 school shootings. 356 victims,” CNN reported this summer. That is despite, at the beginning of the decade, a shooting that left 20 second graders dead. Instead of gun reform, we got anti-school shooter software, sadly a culmination of so many of the trends on this list: surveillance, personalization, data mining, profiling, Internet radicalization, predictive analytics.

For a while, many ed-tech evangelists would bristle when I tried to insist that school security systems and anti-school shooting software were ed-tech. But in the last year or so, it’s getting harder to deny that’s the case. Perhaps because there’s clearly a lot of money to be made in selling schools these products and services: shooting simulation software, facial recognition technology, metal detectors, cameras, social media surveillance software, panic buttons, clear backpacks, bulletproof backpacks, bulletproof doors, emergency lockdown notification apps, insurance policies, bleeding control training programs, armed guards, and of course armed teachers.

“Does It Make More Sense to Invest in School Security or SEL?” Edsurge asked in 2018. Those are the choices education technology now has for us apparently: surveillance or surveillance.

What an utter failure.

Victoria Crowned Pigeon

The Dial-a-Drill: Automating the Teacher with Ed-Tech at Home

Yes, it has been a little quiet on Hack Education lately. Even HEWN has gone dormant. I will write some more about the silence soon. Meanwhile... These remarks were made during a guest visit to Dan Krutka's class, "Critical Perspectives on Learning Technologies," at the University of North Texas. (Let me know if you are interested in my speaking to your Zoom school.)

Greetings. Thank you very much for inviting me to speak to you today. You have found yourself enrolled in perhaps the most perfectly and horribly timed class, as you're asked to think about education technology critically, right as the coronavirus crisis plays out across schools and colleges worldwide and as students and teachers are compelled — even more than usual — to turn over their educational experiences to ed-tech.

I want to talk at you briefly about some history of ed-tech — I'll yammer on for about 20 minutes or so, and then I hope we can have a discussion.

But first, a little preface on why history, why now...

I realize that most everyone — particularly on social media — wants to talk about the now or the future. But even with everything currently upside down, I insist that knowing the history of education technology is crucial. History is useful not just because of some "lessons" that we might glean from the past. But whether we actively recognize it or not, where we are today is an outgrowth of history — our systems, our practices, our beliefs. There is no magical "break" from the past, even if it feels like everything right now is different. History informs and shapes us. All our decisions about the now and the future involve some notion of what has come before — even if those notions are wildly incorrect (such as the very common narrative — a favorite of the current Secretary of Education — that schools have not changed in over one hundred years). It's worth correcting these notions, of course. And it's also worthwhile to stop and maybe calm the people who are throwing up their hands right now and insisting that "all this is completely unprecedented!" Because there is historical research and social scientific research about education and about ed-tech — knowledge that can help us think through this moment and moments to come more sensibly.

We do know — that is, we have decades of research that demonstrates — that some ed-tech works okay for some students in some subject areas under some circumstances. We also know that all of those qualifications in that previous sentence — which ed-tech, which students, which subject areas, what sort of circumstances — tend to play out in ways that exacerbate existing educational inequalities. Under normal conditions, ed-tech is no silver bullet. Why the hell would we expect it to be now?!

We also know — that is, we have decades of history that demonstrates — that the push to automate education is not new nor is it dependent on how well the software works. Moreover, we know that technology is often called upon to address social and economic and political and yes sure medical crises facing education. Crises like World War I. Polio outbreaks. Sputnik. The rising cost of college. Not enough teachers. Not enough punch card operators. Not enough computer programmers. All occasions when politicians and administrators and business officials (and less often, educators) insisted that technology — machines that teach — would save us. We know that, as historian Larry Cuban wrote almost twenty years ago, computers — the latest teaching machines — have been "oversold"; their promises largely unfulfilled.

I want to tell you a story about computer-assisted instruction (CAI) in the 1960s. I want to do so, in part, because this actually isn't a story I tell in my forthcoming book, Teaching Machines, and I feel like getting some of this down on paper. And I want to do so, in part, because I think it's worth thinking about how an idea that is almost a century old now is still being heralded as revolutionary. And finally, I want to tell this particular story about computer-assisted instruction because I think there are several important strands that I hope this class has prepared you to pick up on: I want you to think about who is the target population for this ed-tech intervention; I want you to think about the space — the physical space — in which this intervention occurs and what it means and what it feels like to work there; and I want you to think about how this technology imagines the labor of teaching and learning and how it constructs both the learning and the learner.

In 1962, Patrick Suppes, whose name is one of those most closely associated with the development of CAI, and his fellow Stanford professors, Richard Atkinson (who'd later become the president of the University of California) and William Estes, received a million dollar grant from the Carnegie Corporation — that's about $8 million in today’s dollars — to study the computer-assisted teaching of math and to construct an automated computer-based program.

1962 was, of course, before the advent of the personal computer, but at Stanford, Suppes and his team had access to an IBM 7090 and a PDP-1, two mainframe computers. In their laboratory, they established six "learning stations" where first-grade students were brought in five days a week for 20-minute sessions to work on a computer-based math curriculum. The logistics were obviously challenging, and so the project was soon moved off the Stanford campus and into the students' school — but that too was easier said than done. You couldn't put a giant mainframe computer in an elementary school building, although it was possible to install "dumb" terminals that could connect back to the mainframe via a phone line. First, you had to convince the phone company that it was okay to send signals over the wire this way. And then you had to find room for the terminals — remember, the "baby boom" had left many schools overcrowded. The only free space for terminals: the storage closet. So that's where the students were sent, one at a time, to work on their math lessons — "drill and kill" type exercises.

In his 1964 report to his funder, Carnegie, Suppes explained how the teletype — a modified typewriter keyboard and a ticker tape printer connected to the Stanford mainframe — was used daily at Walter Hays Elementary School in Palo Alto (one of the highest ranking elementary schools in the state of California, it's worth noting):

We are "on the air" for about 2 1/2 to 3 hours with a class of 40 students and we attempt to process all 40 students during that period. Each student is at the teletype terminal from 2 to 5 minutes. They are very efficient about it, not losing more than 20 or 30 seconds in arriving at the terminal or leaving it. We ask them to begin by typing in their names and hitting the return key. Timing on the rest of the drill and practice work is controlled by the program. What we are finding is that when detailed and objective control of the environment is possible, we can hope to train a student to a level of accuracy and perfection of response that is very difficult to achieve in a classroom environment.

(For the sake of time, I am going to skip over a whole discussion here about B. F. Skinner and operant conditioning and the training of children and the training of pigeons. But suffice it to say, do not let anyone try to tell you that computer-assisted education rejected behaviorism and embraced cognitive science.)

When the student entered their name on the teletype, the computer would pull up their file and determine the exercises to give them based on how they'd done the previous day — both the concept to work on (say, least common multiples) and the level of difficulty. These exercises typically consisted of twenty questions, and students had ten seconds to answer each one. If they answered incorrectly, the computer would provide the correct answer. At the end of the student's time at the machine, the teletype would print out the student's score, along with the average of previous sessions. It would conclude with a friendly farewell: "GOOD BYE, O FEARLESS DRILL TESTER." And then "TEAR OFF ON DOTTED LINE" — the student would grab the receipt from their session.
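
To make the mechanics concrete, here is a minimal sketch — in Python, obviously not the language of an IBM 7090 — of the session logic as Suppes describes it. Every name, data structure, and threshold here is my own invention for illustration, not a reconstruction of the actual Stanford code:

```python
import time

QUESTION_COUNT = 20
TIME_LIMIT_SECONDS = 10

def select_exercises(history):
    """Pick a concept and difficulty level from the previous session's results."""
    if not history:
        return "addition", 1
    last = history[-1]
    if last["score"] >= 0.8:                      # did well: move up a level
        return last["concept"], min(last["difficulty"] + 1, 5)
    if last["score"] < 0.5:                       # struggled: move down
        return last["concept"], max(last["difficulty"] - 1, 1)
    return last["concept"], last["difficulty"]

def run_session(student, question_bank):
    concept, difficulty = select_exercises(student["history"])
    questions = question_bank[(concept, difficulty)][:QUESTION_COUNT]
    correct = 0
    for q in questions:                           # prompts and answers are strings
        start = time.time()
        answer = input(q["prompt"] + " ")
        # Wrong — or too slow — and the machine prints the correct answer.
        if time.time() - start > TIME_LIMIT_SECONDS or answer != q["answer"]:
            print("THE ANSWER IS", q["answer"])
        else:
            correct += 1
    score = correct / len(questions)
    student["history"].append(
        {"concept": concept, "difficulty": difficulty, "score": score})
    average = sum(h["score"] for h in student["history"]) / len(student["history"])
    print(f"SCORE: {score:.0%}  AVERAGE OF ALL SESSIONS: {average:.0%}")
    print("GOOD BYE, O FEARLESS DRILL TESTER.")
```

A few dozen lines, more or less, and the student is fully "known" to the machine as a name, a level, and a running average.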

Suppes argued — as had earlier proponents of teaching machines — that, in this way, computers would "individualize" education. "The computer makes the individualization of instruction easier," he wrote, "because it can be programmed to follow each student's history of learning successes and failures and to use his past performance as a basis for selecting new problems and new concepts to which he should be exposed next." The computer, he believed, would act as a tutor — a personal tutor — and take over classroom instruction from the teacher. He predicted in a 1966 article for Scientific American that "in a few more years millions of school children would have access to what Philip of Macedon's son Alexander enjoyed as a royal prerogative: the personal services of a tutor as well-informed and responsive as Aristotle."

By 1967, Suppes' computer-based curriculum was in use in seven schools in California. The first city in which every elementary school student learned math through his computer-based system, Suppes boasted, was McComb, Mississippi — a far cry from the elementary school down the street from Stanford. "Computer assisted instruction may well be a technique suited for closing the educational gap," a 1969 report on the project in McComb concluded, adding that drill-and-kill exercises via computer might not be suitable for more advantaged (read: white) students.

In 1967, Suppes founded a company, the Computer Curriculum Corporation (CCC), which sold its "basic skills" courseware, along with IBM mainframe computers and terminals, to elementary schools, securing a large contract in its first year with the Chicago Public Schools expressly to provide remediation to struggling students. Suppes' company also struggled. As you can imagine, this all required an incredible financial investment from school districts. Computer-based education didn't save money; it actually cost more. Critics decried the computer as simply a "thousand dollar flash card." And frankly, the results just weren't that spectacular: students who used computer-assisted instruction did about as well as students who received traditional teacher-based instruction. (Incidentally, Suppes' company did not go out of business, thanks in part to an influx of federal dollars that supported programs offering remediation for disadvantaged students — by the late 1960s, urban renewal had replaced Sputnik as the educational crisis that warranted technological intervention. Computer Curriculum Corporation was sold to Simon & Schuster in 1990.)

In addition to installing terminals in schools, CCC offered another product called Dial-a-Drill. Rather than a teletype, Dial-a-Drill used the telephone; it would call the student at home and pose questions that the student would answer by pushing the buttons on the telephone. Let me repeat that — the program called the student; the student did not call the program. It would also, in some cases, call the parent to drill them on mathematical concepts too.

A Computerworld article in 1970 touted the progress students in New York City made after receiving three five-minute drills per week. “One reason for the one-year-old program’s apparent success,” the article proclaimed, “is that a student cannot 'fail' in the conventional sense. The DEC PDP-8/I automatically adjusts the drills to fit the level of difficulty at which the student may best operate. When a student has demonstrated mastery at one of the 25 levels, the computer moves him up a notch.” Echoing what has been the promise of education technology since at least the 1920s, the machine would enable students to move at their own pace, delivering lessons and quiz questions at the right skill level so as to minimize errors.
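
The Computerworld description amounts to a simple mastery-based promotion rule. Here's a toy version, with the caveat that whatever thresholds CCC actually used on that PDP-8/I aren't public — the cutoffs below are pure assumption:

```python
LEVELS = 25                  # Dial-a-Drill's 25 difficulty levels
MASTERY_THRESHOLD = 0.9      # assumed; the real cutoff isn't documented

def next_level(level, score):
    """Move the student up a notch on mastery; drop back if they're struggling."""
    if score >= MASTERY_THRESHOLD:
        return min(level + 1, LEVELS)
    if score < 0.5:          # also an assumption: the article only mentions moving up
        return max(level - 1, 1)
    return level
```

Note what "a student cannot fail" means here: the system simply keeps re-sorting the student among its 25 bins.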

Although Suppes boasted that Dial-a-Drill could teach math, reading, foreign language, and computer programming, there were obvious limitations to how any subject could be taught using the system. It was, as the name suggested, designed for drill exercises. But drills — more drills more often — were precisely what many in the “back to basics” movement at the time were calling for students to do — less of that "new math" and more repetition of basic skills.

The zeal for computer-based instruction broadly and for Dial-a-Drill specifically did not last. An op-ed in InfoWorld in 1980 questioned whether or not these sorts of exercises were what computers should be used for. "One of the biggest potentials for personal computers is taking education out of the school and putting it back in the home," InfoWorld argued. "Certainly programs like Dial-a-Drill will be one of the options available — an easy way for parents to turn over the responsibility for part of their children's education to a remote computer. What we must insure is that the potential benefits of this high-tech hickory stick do not come at the expense of more innovative educational software."

Of course, computer-assisted instruction never really went away; it re-emerged with the personal computer; it re-re-emerged with mobile devices. And I think we should ask too whether or not it's actually gotten any better, whether or not we are still surrounded by a bunch of "high-tech hickory sticks." And who gets the hickory stick, and who gets the "more innovative educational software"?

Even before the advent of "remote learning" this spring, we could also see the ways in which, as that InfoWorld story gleefully predicted, educational computing was creeping into the home. As a replacement for school? I don't think so, not for the vast majority of students. An extension of school? Maybe. An extension of ed-tech corporations' reach? Definitely. So now, with students at home, online, it's more important than ever to think about what this creep might mean — particularly for disadvantaged students, particularly for educators.

The History of the Future

Here are the transcript and slides of the talk I gave today at CUNY. Well, not at CUNY. The conference was called "Toward an Open Future," as I guess you might gather from my presentation.

Thank you very much for inviting me to speak to you today. There is, no doubt, a part of me that is disappointed that we cannot all be together in person; then there's the part of me that absolutely cringes at the idea of ever being in a group larger than six or seven people again.

I do thank the organizers, I will add, for not canceling today's event when it couldn't be held in person. My bank account thanks you. Like almost everyone, any sort of financial stability I might have once had has been completely upended. So I very much appreciate the work.

Although some communities have listed journalists as "essential workers," no one claims that status for the keynote speaker. The "work" of being a keynote speaker feels even more ridiculous than usual these days. I'm not a practitioner in the field. I don't work in a school or a library. I don't teach. I don't maintain the learning management system or the library management software. I don't help faculty move their courses online. I'm not even confident I can share my screen or my slides in Zoom correctly. Me, I just sit on the sidelines and take notes as everything ed-tech passes by, like some snitty judge on Project Runway, knowing full well that contestants had, like, 24 hours to pull together a ball gown out of burlap scraps and kitchen twine and still having the audacity to tell them that, meh, it just wasn't my favorite look, wouldn't flatter most bodies, wouldn't make anyone feel at all good wearing it.

I'm not a cheerleader or a motivational speaker, and for the first time ever, I sort of wish I were, because I want to tell you enthusiastically that you're all doing great and then you'd all leave Zoom feeling great. I wish I could tell you honestly that everything's going to be okay. No one ever hires me to say "everything's going to be okay," of course. I'd say that the role of the critical keynote speaker is awkward and unpleasant under the best of circumstances. And these are not the best of circumstances.

So I've thought a lot about what I wanted to say to you today. What does one say to the faculty and staff and students of CUNY right now? What does one say to people who live and work in New York? What does one say to anyone who works in education — at any institution anywhere? Is there any other message than "My God, I feel so sick and sad and angry and powerless to help"?

I thought at one point I'd just do something completely different and unrelated to the conference theme. Maybe I'd just tell you a story. A good keynote speaker should be a good storyteller, after all. Like, I'd tell you about the origins of Sea Monkeys — I did actually give a talk on that several years ago, because that conference had the word "reconstitute" in its theme and after hearing that I could not think of anything else other than advertisements in the back of comic books and the instructions to "just add water." Or maybe I'd tell you a little bit about pigeons — I've done a whole keynote on that topic too — and on how the birds have been used as weapons of war and as models for education technology. But I hate repeating myself. So, maybe, I thought, I could find a nice metaphor we all could relate to right now that ties into the themes here today of "open," resilience, and care — like how my friends are mailing one another sourdough starters, even though it's impossible to find yeast or flour in the grocery stores, even though, as I'm mildly gluten intolerant, I really shouldn't be eating bread. I didn't think I could pull off a 40-minute talk on sourdough and open pedagogy — but someone should try.

So I'm going to try to stick to the theme as it was given to me back in December — truly a lifetime ago: "Toward an Open Future." The other speakers today are going to do a great job of talking about that adjective "open," I'm sure. If I recall correctly, the last time I spoke at CUNY, I talked about some of the problems I have with "open" and the assumptions that are often made around "open" and labor. You can find the transcript of that talk on my site, if you're curious.

So instead of "open" — others have it covered — I've decided I'm going to tackle the preposition and the noun in that clause, "Toward an Open Future." Mostly the noun. I want to talk to you today about the future — and I want to do so for mostly selfish reasons, I won't lie. That's how we keynote speakers roll. It's all about us. Even more so in this awful Zoom format where I can't see you roll your eyes or groan at my jokes. But I want to talk about the future strangely enough because I'm sick of it. I am utterly exhausted by all the pontification and speculation about what is going to happen in the coming weeks and months and years to our world. I am exhausted, and I am frightened. And if I hear one more ed-tech bro talk about the silver linings of the coronavirus and how finally finally school has been disrupted and will never be the same again, I'm gonna lose my shit.

In talking about the future, I don't come here to predict or promise anything, although my goodness, keynote speakers really do love to do that too. I want to talk a little bit about how we have imagined the future and predicted the future in the past, how that's changed (or not changed) over time, and how we ended up with a consultancy class of futurists, whose work it is to predict and prepare our institutions for tomorrow — a consultancy class of futurists who are probably going to have gigs at schools long after swaths of staff have been furloughed or fired.

I am fascinated, as the subtitle of my website probably indicates, by "the history of the future of education." I think it's interesting to consider this history because we can see in it what people in the past hoped that the future might be. Their imaginings and predictions were (are) never disinterested, and I don't agree at all with the famous saying by computer scientist Alan Kay that "the best way to predict the future is to invent it." I've argued elsewhere that the best way to predict the future is to issue a press release. The best way to predict the future of education is to get Thomas Friedman to write an op-ed in The New York Times about your idea and then a bunch of college administrators will likely believe that it's inevitable.

Now, I will confess here, my academic background is in folklore and literature, so when I hear the word "futurists," what comes to mind first are the Italian Futurists, proto-fascists many of them, who rejected the past and who embraced speed, industry, and violence. "We wish to destroy museums, libraries, academies of any sort," Filippo Marinetti's manifesto of Futurism, published in 1909, read, "and fight against moralism, feminism, and every kind of materialistic, self-serving cowardice." I mean, you do hear echoes of Italian Futurism among some of today's futurists and tech entrepreneurs, unfortunately. Take Anthony Levandowski, the creator of a self-driving car program and a religion that worships AI, for example: "The only thing that matters is the future," he told a reporter from The New Yorker. "I don't even know why we study history. It's entertaining, I guess — the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn't really matter. You don't need to know that history to build on what they made. In technology, all that matters is tomorrow." So part of me, I won't lie, is fascinated by the history of the future because I like to think that nothing would annoy a futurist more than to talk about the history of their -ism and to remind them someone else had the same idea a hundred years ago.

In all seriousness, the history of the future is important because “the future” isn't simply a temporal construct — "Tomorrow, and tomorrow, and tomorrow, Creeps in this petty pace from day to day, To the last syllable of recorded time," as Macbeth says. The future — as Macbeth figures out, I suppose — is a political problem. The history of the future is a study of political imagination and political will.

No doubt, we recognize this when, in education circles, we hear the predictions that pundits and entrepreneurs and politicians and oh my goodness yes consultants offer. A sampling of these from recent weeks, years, centuries:

"Books will soon be obsolete in schools"— Thomas Edison, patent troll (1913)

"By 2019, about 50 percent of high school courses will be delivered online"— Clayton Christensen and Michael Horn (2008)

"Was talking with someone last week about something unrelated + he remarked that he's upset about paying $80K for his daughter to 'watch college lectures online' and it dawned on me that this could be the thing that finally bursts the bubble that Peter [Thiel] was talking about years ago"— Alexis Ohanian, co-founder of Reddit and husband of the greatest athlete of all time (2020)

"I think this crisis is the worst thing that could have happened to ed-tech. People can now see just how impractical and inferior it is to face to face classrooms. It can't pretend anymore to be the next big thing. The world tried it, for months. Game over."— Tom Bennett, founder of researchED (2020)

Online learning will be the "silver lining" of the coronavirus — Sal Khan, founder of Khan Academy (2020)

"It's a great moment" for learning — Andreas Schleicher, head of education at the OECD (2020)

"You're going to have a lot of young people who have experienced different forms of learning in this crisis, learning that was more fun, more empowering. They will go back to their teachers and say: can we do things differently?""— Andreas Schleicher, again (2020)

"In 15 years from now, half of US universities may be in bankruptcy. In the end I'm excited to see that happen. So pray for Harvard Business School if you wouldn't mind." - Clayton Christensen, Harvard Business School professor (2013)

"In 50 years, there will be only 10 institutions in the world delivering higher education and Udacity has a shot at being one of them."– Sebastian Thrun, founder of Udacity (2012)

"Software is eating the world"— Marc Andreessen, venture capitalist and investor in Udacity (2011) and author of a blog post last week in which he laments that no one builds things in the world, for the world anymore

Now, unlike the epidemiological models and graphs that we've grown accustomed to reading, the statements I just read aren't really based on data. They're mostly based on bravado. The best way to predict the future, if you will, is to be a dude whose words get picked up by the news.

These predictions are, let's be honest, mostly wishful thinking. Dangerous but wishful thinking. Fantasy. But when they get repeated often enough — to investors, university administrators, politicians, journalists, and the like — the fantasy becomes factualized. (Not factual. Not true. But "truthy," to borrow from Stephen Colbert's notion of "truthiness.") So you repeat the fantasy in order to direct and control the future. Because this is key: the fantasy then becomes the basis for decision-making.

Some futurists do build models, of course. They assure us their claims are based on research — it's "market research" often, as the history of Cold War-era futurism is bound up in supply chain management. They make claims based on data. They make graphs — proprietary graphic presentations — that are meant to help businesses, schools, governments (or the administrators and executives therein, at least) make good decisions about technology and about what is always, in these futurists' estimation, an inevitably more technological future. The Forrester Wave, for example. Disruptive innovation. The Gartner Hype Cycle.

According to the Gartner Hype Cycle, technologies go through five stages: first, there is a "technology trigger." As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage, the "peak of inflated expectations," after so many promises are made about this technological breakthrough. Then, the third stage: the "trough of disillusionment." Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly — this is the "slope of enlightenment." Eventually the new technology becomes mainstream — the "plateau of productivity."

It's not that hard to identify significant problems with any of these models. Take the Hype Cycle. It's not a particularly scientific model. Gartner's methodology is proprietary, no surprise — in other words, hidden from scrutiny. Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place emerging technologies on the curve. When Gartner uses the word "methodology," it is trying to signify that its futurism is a "science," and what it really means is "expensive reports you should buy to help you make better business decisions or at the very least to illustrate a point in a webinar."

Can the Hype Cycle really help us make better decisions? I mean, look around us. After all, it doesn't help explain why technologies move from one stage to another. It doesn't account for precursors that make acceptance of a new technology happen more smoothly — new technologies rarely appear out of nowhere. Nor does it address the political or social occurrences that might prompt or preclude technology adoption. In the end, it is simply too optimistic, unreasonably so, I'd argue. No matter how silly or useless or terrible a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted.

Where would you plot the Segway?

In 2008, ever hopeful, Gartner insisted that "This thing certainly isn't dead and maybe it will yet blossom." Maybe it will, Gartner. Maybe it will.

Where would you plot Zoom?

And here’s the thing: the idea that we would even want Zoom — or any digital technology, really — to end up on a "plateau of productivity" revolts me. I'd prefer to reside in the jungle of justice, thank you, on the outskirts of this whole market-oriented topography.

And maybe this gets to the heart as to why I don't consider myself a futurist. I don't share this excitement for an increasingly technological future; I don't believe that more technology means the world gets better. I don't believe in technological progress.

That is, of course, a wildly un-American position to hold.

This is "American Progress," an 1872 painting by John Gast. It was commissioned by George Crofutt, the publisher of a travel guide magazine called Western World. Crofutt wanted Gast to paint a "beautiful and charming female… floating westward through the air, bearing on her forehead the 'Star of Empire.'" In her right hand, the figure was to hold a textbook, "the emblem of education... and national enlightenment." And with her left hand, Crofutt explained, "she unfolds and stretches the slender wires of the telegraph, that are to flash intelligence throughout the land." Education, as this painting so perfectly depicts, has been bound up in notions of technology — "progress!" — since the very beginnings of this country.

It should be noted too that, as Crofutt directed, the painting also shows the Native Americans "fleeing from Progress." They "turn their despairing faces toward the setting sun, as they flee from the presence of wondrous vision. The 'Star' is too much for them." So education, let us add, has been bound up in notions of technology and Empire and white supremacy. "Progress," we're told. "Inevitability." I reject that.

The frontier — phew, there's another way in which "open" is a problematic adjective, eh? but that's a topic for another talk — remains an important metaphor in the American conceptualization of the future. New places, new fields for exploration and conquest. There's that bravado again — just like in the predictions I read to you earlier — that confidence even when stepping into the unknown.

There have been times, no doubt, when that confidence was shaken. Now is surely one of those times. The future feels very uncertain, very unclear. It'll be a boon for those futurist consultants. It'll be a boon for those who offer predictive models and who sell predictive analytics. People want to know how to prepare. That's understandable. But I'm not sure futurist-consultants have ever helped us prepare — certainly not for a public sphere, including education, that is resilient or just.

Here is why, I'd argue, the history of the future might be worth thinking about. Because much of what futurists do today — their graphs and their reports — was developed in the Cold War era. These practices emerged in response to the totalitarianism that the earlier futurism — its love of war and machines and speed — had become. The field of futurism — "futurology" — was facilitated, no doubt, by advances in computing that made possible more powerful multivariate analysis, simulation, modeling. It coincided as well with the rise of behavioral psychology — this is the part where I could talk a lot about pigeons — and the desire to be able to efficiently engineer systems, society, people.

Perhaps one of the most famous future-oriented think tanks, the RAND Corporation — RAND stands for Research ANd Development — was founded in 1948. It grew out of Project RAND, a US Air Force project that worked on new analytical approaches to war. The RAND Corporation applied these approaches to civilian matters as well — urban planning and technology adoption, along with space exploration and nuclear warfare, for example. The RAND Corporation helped develop game theory and rational choice theory, following the publication of John von Neumann and Oskar Morgenstern's book Theory of Games and Economic Behavior; the Prisoner's Dilemma itself was formulated at RAND. (von Neumann was a consultant at RAND. He'd worked on the Manhattan Project, as many at RAND had, and was, of course, a pioneer in the development of computing.)

The RAND analysts soon found that there were limitations to what game theory could predict about decision-making. So they began experimenting with other kinds of simulations, particularly ones that involved exercises in imagining alternative futures and then, of course, shaping behaviors and attitudes in such a way as to attain the desired outcomes. In 1964, RAND researchers Olaf Helmer and Theodore Gordon released a report called A Long Range Forecasting Study, in which they explained their new technique, what they called the Delphi method. Not "long term," let's note; "long range" — a spatial concept, not a temporal one, and a concept tied to military strategy, to frontiers.

The report "describes an experimental trend-predicting exercise covering a period extending as far as fifty years into the future," the authors wrote. "The experiment used a sequence of questionnaires to elicit predictions from individual experts in six broad areas: scientific breakthroughs, population growth, automation, space progress, probability and prevention of war, and future weapons systems. A summary of responses from each round of questionnaires was fed back to the respondents before they replied to each succeeding round of questionnaires." The Delphi method solicited the opinions of "experts" but then it steered those opinions towards a consensus. (This method, incidentally, has been used to develop the Horizon Reports in education technology that purported to identify ed-tech trends "on the horizon.") Again, this wasn't so much about predicting the future, despite the reference to the great oracle at Delphi, as it was about shaping the future, shaping behavior — the behavior of experts and in turn the behavior of decision-makers. This forecasting was actually about management, about control.

The tools and methods for modeling war-games — for predicting in the Cold War era the actions of communist regimes abroad — were turned towards American society — for predicting forms of social unrest at home.

This kind of futurism — one that relies on the rationality of scientific men and their machines in the service of liberalism and corporate capitalism and Empire — is, of course, just one way of thinking about the future. But it has remained a powerful one, permeating many of our institutions — the OECD, the World Economic Forum, The Wall Street Journal and the like. And in doing so, this kind of futurism has foreclosed other ways of imagining the future — those based on emotion, care, refusal, resistance, love.

That is, I think, what this conference gets at with its theme "Toward an Open Future." It is a reimagining of teaching and learning and research, one that we must pursue with great urgency. We have to think about, we have to talk about, we have to make strides toward an open future before the futurist-consultants come in with their predictive models and techno-solutionism and tell the bosses they have to sell off the world to save it. These futurists promise certainty. They promise inevitability. And with their models, no one bears responsibility. "It was the algorithm," they shrug.

In 1961 — while the Cold War future-forecasters were building their models and their corporate client-base — Hannah Arendt published a series of essays titled Between Past and Future in which she argued that, by severing ties to history — by embracing that sort of futurism that led to totalitarianism and World War II but also by espousing theories and practices of behavioral engineering — we had forsaken our responsibility for a meaningful, human future. We had surrendered a future in which we are free to choose. (Curse Milton Friedman for thoroughly ruining that turn of phrase.)

A responsibility to the world and to the past and to the future, Arendt argues, should be at the core of our endeavors in education. "Education is the point at which we decide whether we love the world enough to assume responsibility for it," she writes, "and by the same token save it from that ruin which, except for renewal, except for the coming of the new and young, would be inevitable. And education, too, is where we decide whether we love our children enough not to expel them from our world and leave them to their own devices, nor to strike from their hands their chance of undertaking something new, something unforeseen by us, but to prepare them in advance for the task of renewing a common world."

That renewal always brings with it uncertainty, despite the predictions that the consultants and op-ed columnists want to sell us — predictions of a world that is hollowed out, closed off, sold off, "safe." Remember: their predictions led us here in the first place, steering management towards institutional decay. I saw someone on Twitter ask the other day, "Why are schools better prepared for school shootings than they are to handle cancellation and closure?" I think we know why: because that's the future schools were sold. One of surveillance and control. It's the same future they’re going to repackage and rebrand for us at home. Let me repeat what I said earlier: the history of the future is a study of political imagination and political will. The future is a political problem.

We do not know what the future holds. Indeed it might seem almost impossible to imagine how, at this stage of the pandemic with catastrophic climate change also looming on the horizon, we can, as Arendt urges, renew a common world. And yet we must. It is our responsibility to do so. God knows the consultants are going to try to beat us to it.

School Work and Surveillance

I was a guest speaker in the MA in Elearning class at Cork Institute of Technology this morning. Thanks very much to Gearóid Ó Súilleabháin for the invitation. Here's a bit of what I said...

Thank you for inviting me to speak to your class today. This is such a strange and necessary time to talk about education technology, to take a class about education technology, to get a degree in education technology because what, in the past, was so often framed as optional or aspirational is now compulsory — and compulsory under some of the worst possible circumstances. So it's a strange and necessary time to be a critic of education technology because, while I've made folks plenty angry before, now I am under even more pressure to say something, anything nice about ed-tech, to offer reassurance that — over time, by Fall Term surely — the tech will get better.

I can't. I'm sorry.

It's also a deeply uncomfortable time to be an American with any sort of subject matter expertise — it has been since well before the 2016 election, but particularly since then. I don't want to come off today as making broad sweeping statements about all of education everywhere when I'm very much talking about the education system in the US and the education technology industry in the US. So grain of salt and my apologies and all that.

One of the reasons that I am less than sanguine about most education technology is that I don't consider it this autonomous, context-free entity. Ed-tech is not a tool that exists only in the service of improving teaching and learning, although that's very much how it gets talked about. There's much more to think about than the pedagogy too, than whether ed-tech makes that better or worse or about the same, just more expensive. Pedagogy doesn't occur in a vacuum. It has an institutional history; pedagogies have politics. Tools have politics. They have histories. They're developed and funded and adopted and rejected for a variety of reasons other than "what works." Even the notion of "what works" should prompt us to ask all sorts of questions about "for whom," "in what way," and "why."

I want to talk to you a bit today about what I think is going to be one of most important trends in education technology in the coming months and years. I can say this with some certainty because it's been one of the most important trends in education technology for a very long time. And that's surveillance.

Now, I don't say this to insist that surveillance technology is inevitably going to be more important, more pervasive. Me personally, I don't want the future of education to be more monitored, data-mined, analyzed, predicted, molded, controlled. I don't want education to look that way now, but it does.

Surveillance is not prevalent simply because that's the technology that's being sold to schools. Rather, in many ways, surveillance reflects the values we have prioritized: control, compulsion, efficiency. And surveillance plays out very differently for different students in different schools — which schools require students to walk through metal detectors, which schools call the police for disciplinary infractions, which schools track what students do online, even when they're at home. And nowadays, especially when they're at home.

In order to shift educational institutions away from a surveillance culture, we are going to have to make a number of changes in priorities and practices — priorities and practices already in place long before this global pandemic.

Historically, a good deal of surveillance has involved keeping abreast of (and control over) what the teacher was up to. She — and I recognize that teachers aren't always female, but the profession is certainly feminized — is alone in a classroom of other people's children, after all. And I'll return to this notion of teacher surveillance in a bit, but keep in mind, as I talk here, that none of the technologies I talk about affect students alone.

Perhaps the most obvious form of surveillance in schools involves those technologies designed to prevent or identify cheating. Indeed, if we expand our definition of "technology" to include more than just things with gears or silicon, we might recognize much of the physical classroom layout is meant to heighten surveillance and diminish cheating opportunities: the teacher in a supervisory stance at the front of the class, wandering up and down the rows of desks and peering over the shoulders of students. (Teachers, of course, know how to shift this physical setting — move the chairs around, for example. Teachers might be less adept or even able to do the same when the classroom setting is digital.)

Despite all the claims that ed-tech "disrupts," it is just as likely going to re-inscribe. That is, we are less likely to use ed-tech to rethink assignments or assessments than we are to use ed-tech to more closely scrutinize student behavior.

Some of the earliest educational technologies — machines developed in the mid-twentieth century to automate instruction — faced charges that they were going to make it easier for students to cheat. If, as promised, these machines could allow students to move through course materials at their own pace without teacher supervision, there had to be — had to be — some mechanism to prevent deceptive behavior. As today, these technologies promised to "personalize" education; but that increased individualization also brought with it a demand to build into new devices ways to track students more closely. More personalized means more surveilled — we know this from Facebook and Amazon, don't we.

And this is key: the fear that students are going to cheat is constitutive of much of education technology. This belief dictates how it's designed and implemented. And in turn it reinforces the notion that all students are potential academic criminals.

For a long time, arguably the best known anti-cheating technology was the plagiarism detection software TurnItIn. The company was founded in 1998 by UC Berkeley doctoral students who were concerned about cheating in the science classes they taught. And I think it's worth noting, if we think about the affordances of technology, they were particularly concerned about how students were utilizing a new feature that the personal computer had given them: copy-and-paste. So they turned some of their research on pattern-matching of brainwaves to create a piece of software that would identify patterns in texts. And as you surely know, TurnItIn became a huge business, bought and sold several times over by private equity firms since 2008: first by Warburg Pincus, then by GIC, and then, in 2014, by Insight Partners — the price tag for that sale: $754 million. TurnItIn was acquired by the media conglomerate Advance Publications last year for $1.75 billion.
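
TurnItIn's matching algorithms are proprietary — more on that in a moment — so the following is only a generic sketch of the sort of n-gram fingerprinting any plagiarism detector might use. The window size and the scoring are arbitrary choices on my part:

```python
def ngrams(text, n=5):
    """Return the set of n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

# A corpus of prior essays, web pages, and so on stands in here for
# TurnItIn's vast database of student work.
def originality_report(submission, corpus):
    return {doc_id: overlap(submission, text) for doc_id, text in corpus.items()}
```

Notice that the detector is only as powerful as its corpus — which is one answer to the question I'm about to ask.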

So we should ask: what's so valuable about TurnItIn? Is it the size of the customer base — the number of schools and universities that pay to use the product? Is it the algorithms — the pattern-matching capabilities that purport to identify plagiarism? Is it the vast corpus of data that the company has amassed — decades of essays and theses and Wikipedia entries that it uses to assess student work?

TurnItIn has been challenged many times by students who've complained that it violates their rights to ownership of their work. A judge ruled, however, in 2008 that students' copyright was not infringed upon as they'd agreed to the Terms of Service.

But what choice does one have but to click "I agree" when one is compelled to use a piece of software by one's professor, one's school? What choice does one have when the whole process of assessment is intertwined with this belief that students are cheaters and thus with a technology infrastructure that is designed to monitor and curb their dishonesty?

Every student is guilty until the algorithm proves their innocence.

Incidentally, one of its newer products promises to help students avoid plagiarism — and so essay mills now also use TurnItIn, promising to help students avoid getting caught cheating. The company works both ends of the plagiarism market. Genius.

Anti-cheating software isn't just about plagiarism, of course. No longer does it just analyze students' essays to make sure the text is "original." There is a growing digital proctoring industry that offers schools ways to monitor students during online test-taking. Well-known names in the industry include ProctorU, Proctorio, Examity, Verificient. Many of these companies were launched circa 2013 — that is, in the tailwinds of "the Year of the MOOC," with the belief that an increasing number of students would be learning online and that professors would demand some sort of mechanism to verify their identity and their integrity. According to one investment company, the market for online proctoring was expected to reach $19 billion last year — much smaller than the size of the anti-plagiarism market, for what it's worth, but one that investors see as poised to grow rapidly, particularly in light of the coronavirus pandemic.

These proctoring tools gather and analyze far more data than just a student's words, than their responses on an exam. They require a student to show photo identification to their laptop camera before the test begins. Depending on what kind of ID they use, the software gathers data like name, signature, address, phone number, driver’s license number, passport number, along with any other personal data on the ID. That might include citizenship status, national origin, or military status. The software also gathers physical characteristics or descriptive data including age, race, hair color, height, weight, gender, or gender expression. It then matches that data to the student's "biometric faceprint" captured by the laptop camera. Some of these products also capture a student's keystrokes and keystroke patterns. Some ask for the student to hand over the password to their machine. Some track location data, pinpointing where the student is working. They capture audio and video from the session — the background sounds and scenery from a student's home.

The proctoring software then uses this data to monitor a student's behavior during the exam and to identify patterns that it infers as cheating — if their eyes stray from the screen too long, for example, their "suspicion" score goes up. The algorithm — sometimes in concert with a human proctor — decides who is suspicious. The algorithm decides who is a cheat.
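
None of the vendors publish their rules, so here is a purely hypothetical sketch of what a "suspicion score" heuristic might look like. Every signal, weight, and threshold below is invented — which is rather the point: someone chose them, and students can't inspect the choices.

```python
from dataclasses import dataclass

@dataclass
class ExamSession:
    seconds_gaze_off_screen: float
    face_match_confidence: float   # 0.0-1.0, from the face-recognition model
    window_focus_losses: int       # times the test window lost focus
    background_voices: int         # voice events picked up by the microphone

# All weights and cutoffs are invented for illustration.
WEIGHTS = {"gaze": 0.5, "face": 2.0, "focus": 0.3, "voices": 0.4}
FLAG_THRESHOLD = 1.0

def suspicion_score(s: ExamSession) -> float:
    score = WEIGHTS["gaze"] * (s.seconds_gaze_off_screen / 60)
    score += WEIGHTS["face"] * max(0.0, 0.8 - s.face_match_confidence)
    score += WEIGHTS["focus"] * s.window_focus_losses
    score += WEIGHTS["voices"] * s.background_voices
    return score

def flag(s: ExamSession) -> bool:
    """Above the cutoff, the student is referred for review — or just accused."""
    return suspicion_score(s) > FLAG_THRESHOLD
```

Notice where bias can enter: a face-recognition model that performs worse on darker skin pushes `face_match_confidence` down — and the score up — through no action of the student's.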

We know that algorithms are biased, because we know that humans are biased. We know that facial recognition software struggles to identify people of color, and there have been reports from students of color that the proctoring software has demanded they move into more well-lit rooms or shine more light on their faces during the exam. Because the algorithms that drive the decision-making in these products are proprietary and "black-boxed" — in other words, hidden from scrutiny — we don't know if or how they might use certain physical traits or cultural characteristics to determine suspicious behavior.

We do know there is a long and racist history of physiognomy and phrenology that has attempted to predict people's moral character from their physical appearance. And we know that schools have a long and racist history too that runs adjacent to this.

Of course, not all surveillance in schools is about preventing cheating; it's not all about academic dishonesty — but it is always, I'd argue, about monitoring behavior and character. And surveillance is always caught up in the inequalities students already experience in our educational institutions.

For the past few years, in the US at least, a growing number of schools have adopted surveillance technology specifically designed to prevent school shootings. In some ways, these offerings are similar to the online proctoring tools, except these monitor physical as well as online spaces, using facial recognition software and algorithms that purport to identify threats. This online monitoring includes tracking students' social media accounts, "listening" for menacing keywords and phrases. (These products are sold to schools in other countries too, not as school shooting prevention — that seems to be a grotesquely American phenomenon — but often as ways to identify potential political and religious extremism and radicalization among students.)

And there are plenty of other examples I could give you too, unfortunately, of how surveillance technologies permeate schools. Schools using iris-scanners in the lunchroom. Schools using radio-trackers on students' ID cards and monitoring students' mobile phones to make sure they're in class. And all this is in addition to the incredible amounts of data gathered and analyzed by the day-to-day administrative software of schools — from the learning management system (the VLE), the student information system, the school network itself, and so on. Like I said, not all of this is about preventing cheating, but all of it does reflect a school culture that does not trust students.

So, what happens now that we're all doing school and work from home?

Well, for one thing, schools are going to be under even more pressure to buy surveillance software — to prevent cheating, obviously, but also to fulfill all sorts of regulations and expectations about "compliance." Are students really enrolled? Are they actually taking classes? Are they doing the work? Are they logging into the learning management system? Are they showing up to Zoom? Are they really learning anything? How are they feeling? Are they "at risk"? What are teachers doing? Are they holding class regularly? How quickly do they respond to students' messages in the learning management system?

And this gets us back to something I mentioned at the outset: the surveillance of teachers.

For a very long time, the argument that many employers made against working from home was that they didn't trust their employees to be productive. The supervisor needed to be able to walk by your desk at any moment and make sure you were "gonna have those TPS reports to us by this afternoon," to borrow a phrase from the terrific movie Office Space. And much as education technology is designed on the basis of distrust of students, enterprise technology — that is, technology sold to large businesses — is designed around a distrust of workers. Again, there's a long history here — one that isn't just about computing. The punch clock, for example, was invented in 1888 by a jeweler, Willard Le Grand Bundy, in order to keep track of what time his employees came and left work. He and his brother founded the Bundy Manufacturing Company to manufacture the devices, and after a series of mergers, it became a part of a little company called International Business Machines, or IBM. Those "business machines" were sold with the promise of more efficient workplaces, of course, and that meant monitoring workers.

Zoom, this lovely piece of videoconferencing software we are using right now, is another example of enterprise technology. Zoom never intended to serve the education market quite like this. And there is quite a bit about the functionality of the software that reveals whose interests it serves — the ability to track who's paying attention, for example, and who's actually working on something else (a feature, I will say, that the company disabled earlier this month after complaints about its fairly abysmal security and privacy practices). Who's cheating the time-clock, that is. Who's cheating the boss.

Social media monitoring tools that are used to surveil students are also used to surveil workers, identifying those who might be on the cusp of organizing or striking. Gaggle, a monitoring tool used by many schools, wrote a blog post a couple of years ago in which it suggested administrators turn the surveillance towards teachers too: "Think about the recent teacher work stoppage in West Virginia," the post read. "Could the story have been different if school leaders there requested search results for 'health insurance' or 'strike' months earlier? Occasional searches for 'salary' or 'layoffs' could stave off staff concerns that lead to adverse press for your school district." In response to one wildcat strike at a US university earlier this month, the administration threatened those graduate student-instructors who had not logged into the learning management system with the loss of their stipends.

One of my greatest fears right now is that this pandemic strengthens this surveillance culture in school. And the new technologies, adopted to ease the "pivot to digital," will exacerbate existing educational inequalities, will put vulnerable students at even more risk. These technologies will foreclose possibilities for students and for teachers alike, shutting down dissent and discussion and curiosity and community.

Too often in education and ed-tech, we have confused surveillance for care. We need to watch students closely, we tell ourselves, because we want them to be safe and to do well. But caring means trusting, and trusting means being able to turn off a controlling gaze. Unfortunately, frighteningly, it seems we are turning it up.

Educational Crises and Ed-Tech: A History

I was a guest at Desmos today. No, I won't speak at your startup.

Thank you very much for inviting me to speak to you today.

I want to talk to you today about the history of educational crises and education technology. I've said many times that I believe knowing that history is important, and not only in a George Santayana sense — "those who cannot remember the past are condemned to repeat it."

Even when we don't know the history of something, we invent it — unconsciously perhaps — and that has its own, dangerous significance. That means, for one thing, you end up with a Secretary of Education or with a prominent investor/philanthropist or governor or pundit claiming that schools haven't changed in hundreds of years. You end up with a bunch of people repeatedly asserting that this particular crisis we now face — some 1.5 billion students in over 190 countries out of school due to the coronavirus — is "unprecedented," that schools have never before faced the challenges of widespread closures, that millions of students worldwide have never before found themselves without a school to attend.

In fact, in 2019, UNESCO estimated that 258 million children — about one in six of the school-age population — were not in school. Disease, war, natural disaster — there are many reasons why. Although we have made incredible strides in expanding access to education over the course of the past century, that access has always been unevenly distributed, as are the conflicts and crises that undermine education systems and educational justice everywhere. This isn't just something that occurs "elsewhere." It happens here in California, for example. In this state, over the past three years, students have been sent home in record numbers as schools have had to close due to wildfires, shootings, and collapsing infrastructure — broken pipes, mold, lead in the water, and so on. Last year alone, some 1.2 million California students experienced emergency school closures.

There are precedents for what we are experiencing now — not just in the distant past or in some far away land.

I think that many people might be inclined to ask then, "Why were we so unprepared?" But the answers to that question are, of course, bound up in our beliefs and practices and expectations, in our assessments of what "preparedness" even means. Furthermore, even when we know a storm is coming, the most vulnerable among us are the ones least able to adequately prepare for disaster. We can think of "the most vulnerable" here as students, as families, as teachers, as communities, as schools — as our whole system of public education that has been hollowed out over the past few decades, thanks to recessions and budget cuts and anti-tax fervor and outsourcing. When we ask "why were we so unprepared?", we can consider perhaps that public education, particularly in some places, has been neglected since the very start.

Indeed, one might argue that education has always been in crisis. Certainly that's the story that we have long heard. "Our Nation is at risk," we were told in 1983, due to a failing public school system. President George W. Bush famously said in 2000, "Rarely is the question asked, is our children learning?" I'd say, to the contrary, the question has been asked incessantly and with much handwringing. And as we lurch from crisis to crisis — real crisis or manufactured crisis — technology has often been proposed as the fix, "the solution."

Notice the verb in that last sentence: "technology has often been proposed as 'the solution.'" Passive voice. Bad form for a writer. Passive voice obscures the actor. And we surely need to ask, who proposes technology as the solution? Whose fix is this? Who repeats these narratives of crisis, and who defines the problems, and who determines the appropriate technological interventions?

"You never want a serious crisis to go to waste," as Rahm Emanuel, then President Obama's chief of staff, said in 2009 as the world teetered on the brink of recession. "And what I mean by that," he continued, "is an opportunity to do things that you think you could not do before." A crisis is an opportunity to set (or reset) the agenda. We have seen this repeatedly in education. Like, say, the closure of public schools in New Orleans following Hurricane Katrina and their replacement with charters. "Disaster capitalism," as Naomi Klein has described it.

(And yes, I think we can talk about charter schools as a technology.)

We can look at the history of education technology and see the ways in which crises have been leveraged to encourage the adoption of new media: Sputnik is the most famous of these crises perhaps, prompting a huge push for better science and math education but also for more machinery to administer it; but we can also look at rhetoric around teacher shortages, snow days, standardized testing, school shootings, and so on. And yes, pandemics.

Due to a polio outbreak in the fall of 1937, the Chicago Public Schools (CPS) decided to postpone the start of the school year. As school buildings were shuttered, the district turned to the radio to replace classroom instruction.

Radio wasn't entirely new to CPS. It had started offering some educational programming on the radio as early as the 1924-25 academic year — Little Red School House of the Air, broadcast twice a day with host Dr. Ben Darrow on station WLS, for example. But the school closures of 1937 — from September 13 through 27 — provided a very different opportunity to experiment with the technology. A very different and much more expansive endeavor.

With the help of six newspapers and seven radio stations, the district offered broadcasts in four subjects — math, English, science, and social studies — for grades 3 through 8. As Superintendent William Johnson wrote in an article in The New York Times a week after the experiment launched,

So far as possible, fifteen minutes' time is given to each broadcast above the fourth grade. Beginning with a health and physical education broadcast for all the children from 7:15 to 7:30 in the morning, regular lessons go on the air to different grades between the hours of 8:15 and 11:45 in the morning and from 4 in the afternoon to 7 in the evening. Grades 7A, 8A and 8B each have broadcasts in two studies daily, and grades 3 through 7B have one each."

"Directions, questions and assignments" were printed in the newspaper each day, he explained, and students were instructed to keep all their written work to submit to their teachers when schools finally opened.

Superintendent Johnson reported that the move to radio instruction was embraced enthusiastically by students — some 315,000 tuned in, he estimated, although he admitted he had no way of knowing for sure. Parents were pleased too, he said, although they flooded the district offices with phone calls distressed that their child might have missed a broadcast or misunderstood a lesson. One common complaint: the radio broadcasts moved too fast. There was no way to pause, to repeat, to move through the lesson more slowly.

Polio primarily struck children, not adults, remember. So even though the schools were closed due to the outbreak, workplaces were not. That meant not every child in Chicago had a parent at home to help supervise their schoolwork. And without that discipline, the Chicago Daily Tribune feared, "it is probable that the students who benefit by the radio lessons will be those who need them the least and would suffer least by curtailment of their classroom instruction."

There were other inequalities. While one parent boasted that they'd put radios in separate rooms so that their two children could listen to separate broadcasts on separate stations simultaneously, many families didn't have a radio at all. Roughly 60% of households owned a radio. Even if they had a radio, some students found themselves with poor reception and were unable to tune in to all the lessons on all the stations. And some Chicago students had been shipped out of the city altogether to avoid the outbreak and couldn't listen to the broadcasts at all.

The 1930s were the "Golden Age of Radio," and arguably the public had grown accustomed to radio as a new form of entertainment. Some worried that those expectations would force educators to "conform to the best practices of show business." Celebrities, such as the British explorer Carveth Wells, were invited on air to speak to students. One commentator in the Chicago Daily Tribune praised this "sugar coating" of radio lessons to make them more palatable — certainly Wells' dinner hour broadcast about hunting elephants in Africa and India, he argued, "made more of a hit with third and fourth graders than if it had insisted on drills in the multiplication tables."

It was obvious too that radio had all sorts of other constraints — pedagogical constraints. You can only hear the teacher. You can't see facial expressions or gestures. And more importantly, the teacher can't see you. The teacher can't tell if you are bored or confused. "Broadcasts, in a sense," wrote Larry Wolters in the Chicago Daily Tribune, "assume all minds are of equal comprehension potentiality." That's 1937 talk, I guess, for "one-size-fits-all." Mass media meant mass education. Even in the 1930s, people frowned at this, believing that education should be individualized.

(So no, Khan Academy did not invent "personalized learning." Yes, today's philanthropic foundations have been pushing another crisis narrative that proposes "personalized learning" is the solution.)

As with so many new technologies forced into the classroom, the experiment deepened many teachers' "feelings of insecurity and fear that radio, this new technology, might one of these days take away their jobs." The effects of the Great Depression lingered on into the late 1930s, after all. The radio, Superintendent Johnson tried to reassure teachers, would never replace them; but during a crisis, the technology was necessary. "We are convinced that no mechanical device can be successfully substituted for the teacher-personality and the pupil-teacher relationship," he wrote. "In this emergency, however, the value of the newspaper and the radio service to the children of Chicago cannot be overestimated."

Or perhaps that value could be.

At the end of the two-week experiment, when the Chicago schools re-opened, the radio lesson programming abruptly ended, and the Chicago Radio Council concluded that the results were "not particularly satisfactory." Despite Johnson's pronouncements that almost every student in the district had tuned in, "only half of the pupils in the Grades 4-8 listened to the broadcast," the council found. Furthermore, the "test scores of these listeners were not impressively different from those of the non-listeners."

A nice supplement for some students. But no replacement for school.

I'll pause here for a second, just so we can all marvel at this story — not so much that the Chicago Public Schools tried to implement "remote teaching" almost one hundred years ago during an emergency school closure due to disease. But that here we are again. The challenges and the results and the arguments (for and against) and the innovative superintendent boasting in The New York Times without any data to substantiate his claims — it all sounds so incredibly familiar.

Perhaps the question then isn't simply "how do we learn from Chicago 1937" but rather "why haven't we?"

One reason we didn't is that the hype around radio dissipated fairly quickly, supplanted by the promise of the next big thing: television. (The first television broadcasts began in the US in the 1920s, but it wasn't until after World War II that TV sets became ubiquitous in homes and in schools.) Video, as they say, killed the radio star.

There were a number of crises in education following the war: the "baby boom" led to massive overcrowding; there were teacher shortages in many schools; and the launch of the Soviet satellite Sputnik in 1957, of course, prompted Americans to fear that the country had fallen behind, particularly when it came to science and math education. My forthcoming book, Teaching Machines, examines — as the title suggests — the push to adopt teaching machines during this period. Don't worry, I'll be back to talk more about that when the book comes out. For now, today, I'm going to talk about TV.

I've written about a couple of the more high-profile implementations of television in schools in the 1950s and 1960s on Hack Education. The push for teaching by television in American Samoa, for example — one of those stories you can point to when people insist "oh, we will never let technology replace teachers." In American Samoa, that is certainly what they tried to do. Or the story of MPATI, the Midwest Program on Airborne Television Instruction, in which two DC-6 planes flew over schools in Indiana, Illinois, Kentucky, Michigan, Ohio, and Wisconsin, broadcasting televised lessons via Stratovision — supplemental instruction for rural students, not a replacement for human instruction. Because running two giant planes as a television broadcasting service definitely makes sense.

Both of these stories should prompt us to ask which students get TV and which students get teachers, in which schools do we expand and enhance human capacity for teaching and learning and in which do we buy machines?

One thing that these two projects shared — in addition to just being really dubious ideas — was their connection to the Ford Foundation, one of the most important philanthropic organizations in the history of ed-tech. (The Ford Foundation funded MPATI; the federal government funded the television experiment in American Samoa, but the Ford Foundation administered the program.)

Ford Foundation money was essential in making educational television more than just a "chic gimmick," historian Larry Cuban has argued. In his book Teachers and Machines, he asserts that "few technological innovations have received such a substantial financial boost from a private organization as classroom television did throughout the 1950s and early 1960s." (I will note here that Teachers and Machines was published in 1986, and we might consider how the funding landscape has changed since then. The Bill & Melinda Gates Foundation, for example, was launched in 2000 — some fourteen years after Cuban's book was published and some thirteen years after Gates became a billionaire.)

"By 1961," Cuban writes,

over $20 million had been invested in 250 school systems and fifty colleges across the country by the Ford Foundation's Fund for the Advancement of Education. Federal aid had entered the arena of instructional technology with the passage of the National Defense Education Act (NDEA) in 1958. In 1962, President Kennedy secured an appropriation from Congress that authorized the U.S. Office of Education to plow $32 million into the development of classroom instruction. By 1971, over $100 million had been spent by both public and private sources.

($20 million in 1961 is about $175 million today. $32 million in 1962 is about $273 million today. $100 million in 1971 is about $637 million today.)
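
(For anyone who wants to check that arithmetic: these conversions are just CPI ratios. Below is a minimal sketch in Python; the index values are approximate annual-average CPI-U figures, an assumption on my part, with 2020 standing in for "today.")

```python
# Inflation adjustment as a simple CPI ratio: dollars_then * (CPI_now / CPI_then).
# The index values below are approximate annual-average CPI-U figures (assumed),
# with 2020 standing in for "today."
CPI = {1961: 29.9, 1962: 30.2, 1971: 40.5, 2020: 258.8}

def to_2020_dollars(amount, year):
    return amount * CPI[2020] / CPI[year]

for amount, year in [(20e6, 1961), (32e6, 1962), (100e6, 1971)]:
    print(f"${amount / 1e6:.0f}M in {year} is roughly ${to_2020_dollars(amount, year) / 1e6:.0f}M today")
# Prints roughly $173M, $274M, and $639M, in line with the figures above.
```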

An aside: one of the reasons that teaching machines "failed*" was that the big philanthropic dollars did not flow into their development to the degree that those dollars flowed into educational television. With its deep pockets, the Ford Foundation steered the direction of education technology — and, it hoped, the direction of education itself.

(*Teaching machines didn't really "fail" incidentally, but that's a story for another day.)

The Washington County Public Schools were arguably the best known experimental site of televised instruction — better known as the Hagerstown, Maryland CCTV project. In the 1950s, the district proposed to the Ford Foundation that it use closed-circuit television in order to address overcrowding in its schools as well as the lack of certified teachers.

The Hagerstown CCTV project began in 1956 with a $1.5 million grant from the foundation. (That's about $14 million today.) At its height, television was used for daily instruction for some 18,000 students in the county. In elementary schools, between 7 and 13% of instructional time was spent watching television. Junior high students watched TV for about a third of their school day; for high school students, it was about 10%. At the junior high and high school level, these viewings often took place in rooms with over one hundred students. Like a MOOC, but in the school gym.

In the late 1950s, about 70% of US households owned a television. So Hagerstown was seen as innovative, and there was a ton of press for the television project — every superintendent's dream. Reports were mostly positive — students, teachers, and parents seemed pleased with instructional television. Test scores in Hagerstown went up.

But in 1961, the Ford Foundation decided not to renew its financial support for the project. Although televised instruction continued without the Ford funding, the costs shifted to taxpayers. And the costs were significant: $150,000 a year to lease coaxial cables from the Bell Telephone Company, for example.

There were additional labor costs too, as well as technical ones. The project employed one coordinator, one instructional supervisor, a staff of twenty-five "master teachers," a production team of thirty, an engineering team of eight, and several clerical staff members. Television had promised to save money and reduce the need for trained personnel. It failed on both counts. And not only was there a demand for new kinds of staff focused on developing and delivering television programming, but teachers felt their work changed as well — both in the classroom and, for those asked to appear, on screen. In the end, many classroom teachers reported that the new technology required more work. TV was not as labor-saving as promised.

The Hagerstown story is fairly well known, even though it was an anomaly. Few school districts in the US ever embraced instructional television to remotely the same degree. But that's how ed-tech PR works, isn't it — one anecdote becomes a trend if you can get Edsurge to write about it as such.

Less well known was an experiment the Ford Foundation funded at the same time in the Chelsea neighborhood of New York City — less well known, perhaps, because it wasn't focused on education technology in the classroom but instead on education technology at home. It hoped to extend education into the home — a topic that is painfully relevant right now and is just as fraught as ever.

This project, the Chelsea CCTV project (CCCTV), differed in some key ways from those "remote learning" plans in Chicago in 1937 that seemed to entirely overlook questions of inequality and access. Indeed, CCCTV, which ran from 1957 to 1961, aimed specifically to address some of the inequalities faced by residents in the neighborhood. Although let's be clear: these were the inequalities as determined by the New York Department of Education, researchers at Harvard, and funders at the Ford Foundation. CCCTV's mission was to "explore the values of CCTV as a service to a low-income, poly-lingual, urban neighborhood and its public school" by sending programs "to the school and to the residents of a public housing unit."

That public housing unit was the John Lovejoy Elliott Houses, where about 600 families resided, about half of whom were Puerto Rican and about a third of whom were Black. One of the main goals of the project was to teach English to the Spanish-speaking residents of the housing complex. Indeed, for the first few weeks of the project, language lessons made up the entirety of the broadcast schedule. Other programs were eventually added: Teamwork for Child Health, a 15-minute program on parent and child health, for example, as well as shows designed to inform the neighborhood about local news and activities.

Each of the apartments in the housing complex was wired to the main CCTV antenna, so ostensibly anyone with a television could watch Channel 6, the channel designated for the educational programming. Reception was sometimes spotty. The shows were fairly low budget, many filmed in the neighborhood school. And the total viewership was never great. The residents of the Elliott Houses looked at the project with suspicion from the outset, often not distinguishing the CCTV initiative from the local Housing Authority. When the Chelsea project staff broke two television sets during installation, rumors circulated in the building that the closed-circuit wiring would damage people's equipment. No one wants their landlord to tell them what to watch on TV.

The project failed to appeal to the neighborhood in any substantial way, often underscoring existing fractures in the community. Residents who spoke English, for example, felt that the amount of English-language learning shows meant that the programming really wasn't meant for them. Spanish-speaking residents felt that the English-language lessons were too basic — they were designed for elementary school students, after all. Some felt as though, by offering adult education using content aimed at children, the program staff revealed their contempt for Puerto Ricans.

These suspicions about the project were exacerbated by some of the other programming too. Much of it involved topics that the CCCTV staff felt people living in a housing project would find useful, but it failed to account for what residents actually wanted to watch. There was a cooking show called The Recipe Box, designed to help "residents get the most out of their food dollar in terms of nutrition, economy, and time management." Fairly patronizing. Another show, So You Want to Get a Job, introduced various occupational training programs available at vocational high schools in New York. Of course, many in the housing complex already had jobs — another not-so-subtle message about how the project staff imagined the audience.

The CCCTV programming was revamped several times in response to declining viewership, each time moving more and more towards entertainment rather than educational programming. The final report on the project, published by the New York Board of Education, noted that a "growing number of tenants… had disconnected Channel 6 from their TV receivers." In the end, the project had a "tarnished image" with the community. In some key ways, rather than strengthening the neighborhood, as the project had intended, it served to undermine the sense of community — to underscore differences when the programming had hoped to showcase diversity, to prompt people to withdraw to their living rooms rather than engage with neighborhood activities. The principal of the neighborhood school reported, for example, that, following the broadcast of PTA meetings in a show called PS 33 Highlights, in-person attendance declined.

The final report on the project blamed the medium of television, and it blamed the neighborhood residents for this complacency: "the use of television encourages people to sit at home in front of the television screen rather than to come out and join in." There was a "growing awareness of a grass-roots conflict between the fundamental purpose of the community effort and the proposed medium, television," the report read. "There is in the very nature of television, as in other mass media, the danger of a certain authoritarianism, an imposition from without upon a passive receptor. But community growth comes from within and involves participation on the part of the people. Television often tends to foster in the viewer a passive role which is at odds with the sound educational concept of learning by doing."

Many of the staff who worked on the television project were frustrated that the residents of Elliott House were not more grateful for the connection to the master antenna that broadcast the closed-circuit programming. It was, according to the final report, "a free gift with no strings attached. The staff assumed that the recipient of this gift, if he reacted at all, would feel a sense of sympathy for the project or even of gratitude. They were unprepared for a groundswell of suspicion and resentment."

But no one asked those residents what they wanted. No one asked them what they valued. No one asked them what they were curious about. No one asked them what their community needed, what their community meant. No one considered, "Hey, maybe folks don't want this in their home because neither the Housing Authority nor the school system are seen as particularly trustworthy institutions?" ("No one" is a bit unfair, perhaps, as there was some input from the Housing Guild on the CCCTV project.)

In certain ways, the initiative was a disciplinary technology — residents were uncomfortable with that for good reason. The irony isn't lost here that "closed-circuit television" now means surveillance cameras. Those cameras cover the Elliott Houses complex today.

The CCCTV project had identified and defined a "crisis" — low-income residents are under-informed and under-educated, acculturated to the wrong beliefs and practices. And the Ford Foundation was willing to fund the solution as it was a project that fit perfectly with its belief that educational television could "uplift" and reform society. As Laurie Ouellette writes in her book Viewers Like You, "The Ford Foundation envisioned television as a conduit for culture and adult education, a vision accountable not to the public but to the priorities established by its white, male, upper-class trustees." That is key — accountable not to the public but to the priorities established by funders.

What eventually emerged from the projects I've talked about today was, of course, public radio and public television — WBEZ in Chicago and WNET in New York for starters — arguably the most significant educational media to this day.

I started off this talk with a look at how the Chicago Public Schools turned to radio-based instruction in 1937 when a polio outbreak forced them to close schools. No doubt that's the kind of history we turn to when we want to consider how schools have responded to crises — and responded using education technology — in the past. We can readily see the inequalities — the failure to make sure all students had the proper device, the failure to account for whether there was a parent at home able to mentor and teach. And we understand the relevance of this story to our decision-making today.

But I want us to think a little bit more broadly about "crisis" too, about how narratives of crisis shape the direction of education policy, about how the values and priorities of foundations tap into these narratives to further their agendas. After all, just yesterday, New York Governor Cuomo announced that he planned, post-coronavirus, to work with the Gates Foundation to "reimagine education" in his state.

I want us to think about what it means for education technology — in this crisis or any "crisis" — to permeate people's homes. Education technology has been offered by its funders as the solution to educational crises for a century now. Look where that's got us.

'All Watched over by Machines of Loving Grace': Care and the Cybernetic University


I gave this talk this morning at the Academic Technology Institute.

Everyone is in crisis. I want to recognize that at the outset. Some crises may be deeper, more long lasting; some may be hidden, unspoken, unspeakable; some might seem minor, but loom monstrously; some may be ongoing; some may be sudden. Some might seem surmountable, but roar back into renewed disaster. Few of these will be resolved anytime soon.

It is very challenging, in the midst of all this crisis — personal crisis, medical crisis, mental health crisis, financial crisis, political crisis, institutional crisis, societal crisis — to offer you a message this morning that is insightful, realistic, necessary, and hopeful, although certainly that's the task many speakers aspire to.

Like many Americans, I tuned in a week or so ago to listen to President Obama deliver his graduation speech to the nation, knowing that it wasn't just a speech for those leaving high school. Graduation speeches never are, of course, just for the graduates. They're also for the parents and grandparents and siblings and teachers. They're a way that the community marks the transition — into adulthood, in some cases, but always looking forward into "what's next."

"What's next?" Right now, we don't know. We never do, really. Right now, instability and uncertainty fuel our crises — individually and collectively. Will schools be open in the fall? Will there be face-to-face classes? Will there be adequate testing? Will there be a vaccine? Will you have a job? Can you survive? Can I? What does one say to an audience in the face of all this?

"Don't be afraid," President Obama told graduates. "Do what you think is right," he said, not what feels good. "Build community." These are fine exhortations, I suppose (although I think it's perfectly normal to be afraid).

I will never be a graduation speaker, I'm certain. It's not that I can't deliver something pithy with one or two lines that make for great tweets. It's not that I don't try to inspire my audiences to go out and make change, make the world a better place. But my talks aren't reassuring or congratulatory. And my messages about refusal, luddism, pigeons, critical theory, and historicism aren't really the sort of thing that administrators go for on ceremonial occasions. Thank you, by the way, for inviting me to speak to you today.

I'd like to imagine this talk, nonetheless, as one connected to the genre of "graduation speech." It's the end of the school year, so why the hell not.

Insofar as that genre is about culmination, about welcoming students into the world, I want to offer today a provocation about welcoming students (and staff) back into educational institutions — whether on campus or not. Because just as we are sending students out into an incredibly precarious world, we are also bringing them into an incredibly precarious higher education system. It's a system that has, perhaps more than ever, had its inequalities and injustices exposed. Indeed, we can say that for the whole damn world.

These inequalities and injustices are not new. I want to make that clear. They are exacerbated, no doubt, by the global pandemic and economic depression.

If there is one message that I want to get across to you today, it is that we must ground our efforts to plan for the fall — hell, for the future — in humanity, compassion, and care. And we cannot confuse the need to do the hard work to set institutions on a new course of greater humanity with the push for an expanded educational machinery. We have to refuse and refute those who argue that more surveillance and more automation is how we tackle this crisis, that more surveillance and AI is how we care.

We can trace the histories of our schools, our beliefs and practices about teaching and learning, our disinvestment in public institutions, our investments in technological solutions to discover how and why we got here — to this moment where everything is falling apart and the solution (from certain quarters) is software that sounds like "panopticon."

It's that last bit — the histories of our investment in technological solutions (and our faith in technological solutions even when they are obviously so utterly dystopian) — that is really the focus of my work.

In his book From Counterculture to Cyberculture, historian Fred Turner examines the influence of Stewart Brand and the Whole Earth Catalog on the emergence of Silicon Valley as the center of technological development in the US in the 1960s. Turner uses a poem by Richard Brautigan, printed and handed out on broadsheets in the Haight-Ashbury district of San Francisco in 1967, to illuminate how the counterculture came to embrace a technocratic vision of the future and how, in turn, technologists came to view computers as tools of personal liberation, how the technocrats came to believe their disruption makes them radicals, revolutionaries.

As I sat down to prepare this talk, I thought immediately of that poem by Brautigan, "All Watched Over By Machines of Loving Grace":

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

The poem imagines a world in which nature and machines have merged — cybernetic meadows, cybernetic forests, cybernetic ecology — seemingly without environmental destruction. Pure water, clear sky. It's a world in which machines have advanced enough that human labor is no longer necessary.

As we look around us today, I think that there is probably a great appeal to this vision of a "green technology." Certainly there are plenty of Covid-related reasons why we might want all humans to stay at home and let the machines work for us. This would require, of course, a complete rearrangement of our economic system — a rearrangement that many politicians are clearly unwilling to embrace: pay people to stay home.

But those last two lines — "all watched over by machines of loving grace" — always turn my stomach. And it's those last two lines that came to mind when I thought about the ways in which college administrators and professors and staff are going to be asked to handle this moment, pressed to adopt more technology, to "do more with less," to automate more tasks, to utilize more analytics. "All watched over by machines of loving grace."

Certainly the machines that we have set forth to watch over students — and even that verb hints at a terrible practice — are not filled with "loving grace." Even if we anthropomorphize the metal as mentor, these machines have no dignity, no decency. They are extractive, mining students' personal data. They are exploitative, selling and sharing this data often without students' knowledge or consent. They are only "loving" if you believe that surveillance, coercion, and discipline are the basis of love. And good grief, no one would ever describe the learning management system, the student information system, or the vast majority of education technology tools as graceful. They're clunky and unwieldy. They suck.

Nonetheless, Brautigan's 1967 poem does capture a technological utopianism that remains powerful, even pervasive — in education technology circles, in Silicon Valley, and even in society in general. That utopianism has become deeply embedded — hard-coded, if you will — in our relationship with technology over the course of the past fifty or sixty years. It's no surprise then that some would see the baby monitor, the predictive algorithms, the online proctoring software, and the like as "machines of loving grace."

There have always been critics of this machinery. Hannah Arendt. Jacques Ellul. Lewis Mumford. Their criticisms accompanied the development, following World War II, of computing, cybernetics, and artificial intelligence. They cautioned about the relationship between computing and warfare. They cautioned about the metaphors of machinery, the mechanization of society. They cautioned that unfettered optimism about these machines obscured power relations. "The myth of technological and political and social inevitability," Joseph Weizenbaum wrote in 1976, "is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes it."

There were plenty of predictions about the inevitability of artificial intelligence — then as now. Carnegie Mellon professor Herbert Simon, for example, had boasted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do" (a man can do — more on that in a minute), and MIT professor Marvin Minsky had said in 1967 "within a generation… the problem of creating 'artificial intelligence' will substantially be solved." It's Herbert Simon, incidentally, who is often invoked these days when people argue that the job of instructional designer and instructional technologist should be rebranded as "learning engineer," a phrase Simon used to describe his vision for a mechanized university administration.

I am particularly interested in the development of computing and education technologies in post-war America because they occurred in such tumultuous times on campuses — both K-12 and colleges — something that we (in education technology at least) seem to rarely consider. You can read far too many stories about the development of ed-tech that fail to mention Brown v Board of Education or the Little Rock Nine. I don't think I've seen Mario Savio's famous speech in 1964 on the steps of Sproul Hall at UC Berkeley mentioned in a history of teaching machines — well, except in the book I've just written which will be out next year:

There's a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can't take part! You can't even passively take part! And you've got to put your bodies upon the gears and upon the wheels ... upon the levers, upon all the apparatus, and you've got to make it stop! And you've got to indicate to the people who run it, to the people who own it, that unless you're free, the machine will be prevented from working at all!

This machine — the university machine — is not a machine of loving grace. "Do not fold, spindle, or mutilate," student protestors exclaimed, demanding at least the same level of care afforded the IBM punch card. The draft and the student information system used the same machinery, after all.

But just as students (and others) identified and protested the burgeoning educational technocracy, recognizing how it reduced them from humans to data, many in universities were happily embracing and entrenching it, particularly those in and around the nascent fields of cybernetics and artificial intelligence who sought to systematize and to mechanize the mind.

This mechanization, I believe — metaphorically or literally, however you choose to read it — is why we must refuse to move farther along a path that equates teaching and learning with computation. More Mario Savio please. Less machine learning.

"Can a machine think?" Alan Turing famously asked in 1950. But rather than answer that question, Turing proposed something we've come to know since as the Turing Test. His original contrivance was based on a parlor game -- a gendered parlor game involving three people: a man, a woman, and an interrogator.

This imitation game is played as follows: the interrogator cannot see the man or woman but asks them questions in order to identify their sex. The man and woman respond via typewritten answers. The goal of the man is to fool the interrogator. Turing's twist: replace the man with a machine. "Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" he asked.

The question is therefore not "can a machine think?" but "can a machine fool someone into thinking it is a woman?"

What we know today as the Turing Test is not nearly as fascinating or as fraught -- I mean, what does it imply that the bar for intelligent machinery was, for Turing, to be better at pretending to be a woman than a man is? What would it mean for a machine to win that imitation game? What would it mean for a machine to fool us into believing, for example, that it could perform affective labor not just computational tasks?

Perhaps it's less that the machine can or might fool us, and more that, when we consider the question "can a machine think" today, our definition of thinking has become utterly mechanistic — and that definition has permeated our institutional beliefs and practices. It is the antithesis of what Kathleen Fitzpatrick has called for in her book Generous Thinking — what she describes as "a mode of engagement that emphasizes listening over speaking, community over individualism, collaboration over competition, and lingering with ideas that are in front of us rather than continually pressing forward to where we want to go." Generous thinking is something a machine cannot do. Indeed it is more than just an intellectual endeavor; it is a political action that might take scholarly inquiry in the opposite direction of technocratic thought and rule. "We have," as Joseph Weizenbaum wrote, "permitted technological metaphors… and technique itself to so thoroughly pervade our thought processes that we have finally abdicated to technology the very duty to formulate questions."

In 1972, philosopher Hubert Dreyfus published a book highly critical of artificial intelligence — What Computers Can't Do — lambasting those researchers who made these grandiose promises about their work. Dreyfus argued that optimism about the capabilities of AI was unwarranted. AI researchers were working with a very limited notion of "intelligence" — the idea that the human brain is an information processor, an idea that was flawed and misleading. To be "intelligent," Dreyfus wrote, a machine "must only compete in the more objective and disembodied areas of human behavior, so as to be able to win at Turing's game."

Hubert Dreyfus's criticisms of AI were roundly dismissed by almost everyone working in the field at the time (although he has since been largely vindicated). He complained that none of his MIT colleagues would eat lunch with him — very little generosity among the MIT faculty, no surprise — except, that is, for Joseph Weizenbaum.

Weizenbaum, for his part, was a computer scientist. He was one of the founders of the field of artificial intelligence and, with the publication of Computer Power and Human Reason in 1976, one of its most vocal critics. While Dreyfus wrote about "what computers can't do," Weizenbaum was more interested instead in "what computers ought not do."

Weizenbaum had developed in the mid-1960s one of the best known chat-bots. Its name: ELIZA — yes, named after Eliza Doolittle in George Bernard Shaw's play Pygmalion, taught to speak with an upper-class accent so she can "pass."

The program ELIZA ran a script simulating a psychiatrist. A "parody," Weizenbaum wrote. "The Rogerian psychotherapist is relatively easy to imitate because much of his technique consists of drawing his patient out by reflecting the patient's statements back to him."

"Hello," you might type. "Hi," ELIZA responds. "What is your problem?" "I'm angry," you type. Or perhaps "I'm sad." "I am sorry to hear you are sad," ELIZA says. "Do you think coming here will make you happy?" "Well, I need some help," you reply. "What would it mean for you to get some help?" ELIZA asks. "Perhaps I wouldn't fight so much with my mother," you respond. "Tell me more about your family," ELIZA answers. The script always eventually asks about family, no matter what you type, not because it understood trauma but because it's been programmed to do so. That is, ELIZA was programmed to analyze the input for keywords and to respond with a number of canned phrases, that contained therapeutical language of care and support — a performance of "intelligence" or intelligent behavior, if you will, but just as importantly perhaps a performance of "care." The verbal behavior of "loving grace," perhaps.

Weizenbaum's students knew the program did not actually care. Yet they still were eager to chat with it and to divulge personal information to it. Weizenbaum became incredibly frustrated by the ease with which this simple program could deceive people — or the ease with which people were willing to go along with the deception, perhaps more accurately. When he introduced ELIZA to the non-technical staff at MIT, they treated the program as a "real" therapist. When he told a secretary that he had access to the chat logs, she was furious that Weizenbaum would violate her privacy — violate doctor-patient confidentiality — by reading them.

Weizenbaum was dismayed that so many practitioners — both of computer science and of psychology — embraced ELIZA and argued that the program demonstrated that psychotherapy could be fully automated. There was, after all, a shortage of therapists, and automation would make the process much more efficient: no longer would the field be limited by the one-to-one patient-therapist ratio. Weizenbaum balked at this notion. "What must a psychiatrist who makes such a suggestion think he is doing while treating a patient," he wrote, "that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?"

Let's rewrite Weizenbaum's question for education, for the use of automation in teaching and counseling: "What must a professor or administrator who makes such a suggestion think he is doing while working with a student that he can view the simplest mechanical parody of a single pedagogical technique as having captured anything of the essence of a human encounter?"

I've written before about the legacy of ELIZA in education, about the chat-bots that have been developed for use as "pedagogical agents." These programs were often part of early intelligent tutoring systems and, like ELIZA, were designed to respond helpfully, encouragingly when a student stumbled. The effectiveness of these chat-bots is debated in the research (what do we even mean by "effectiveness"?), and there is incomplete understanding of how students respond to these programs, particularly when it comes to vulnerability and trust, such core elements of learning.

Are chat-bots sophisticated enough to pass some sort of pedagogical Turing Test? (Is that test, like Turing's imitation game, fundamentally gendered?) Or rather is it, as I fear, that folks have decided they just don't care? They do not care that the machines do not really care for students, as long as there's an appearance of responsiveness. Indeed, our educational institutions, particularly at the university level, have never really cared about caring at all. And perhaps students do not care that the machines do not really care because they do not expect to be cared for by their teachers, by their schools. "We expect more from technology and less from each other," as Sherry Turkle has observed. Caring is a vulnerability, a political liability, a weakness. It's hard work. And in academia, it's not rewarded.

So what does it mean then if we offload caring and all the affective labor, the substantive and the performative, to technology? To machines of loving grace. It might seem counterintuitive that we'd do so. After all, we're often reassured that computers will never be able to do that sort of work. They're better at repetitive, menial tasks, we're told — at physical labor. "Any work a man can do," as Herbert Simon said.

And yet at the same time, plenty of us seem quite happy to bare our souls, trust our secrets, be vulnerable with and to and by and through our machines. What choice do we have, we're now told. So what does that mean for teaching and learning, now that we're told (and possibly resigned to the fact) that the machines of loving grace are going to be compulsory?

These are not technical questions, although there are vendors lined up with technical solutions. They are political questions. And they are moral questions. As Weizenbaum wrote, "the question is not whether such a thing can be done, but whether it is appropriate to delegate this hitherto human function to a machine."

Folks, it is not appropriate to delegate the human function of education to a machine. In education there are no machines of loving grace. We must rethink and reorient our institutions away from that fantasy, from the desire to build or buy them.

We must, as Weizenbaum wrote, "learn to say 'No.'" And we must, as Donna Lanclos has written, pay attention to others' refusal as well. Refusal is different than resistance, she argues. Refusal means "not participating in those systems, not accepting the authority of their underlying premises. Refusal happens among people who don't have access to structural power. Refusal is a rejection of framing premises. Recognizing refusal requires attention, and credit to tactics such as obfuscation, or deliberate misinterpretation." Refusal means undermining and unwinding decades of a cybernetic university and building something else in its place.

I don't mean here that we should refuse online education, to be clear. I would rather faculty and students and staff be online than dead. I care. But what I do mean is that we need to resist this impulse to have the machines dictate what we do, the shape and place of how we teach and trust and love. We need to do a better job caring for one another — emotionally, sure, but also politically. We need to recognize how disproportionate affective labor already is in our institutions, how disproportionate that work will be in the future. We need to agitate for space and compensation for it, not outsource care to analytics, AI, and surveillance.

We must refuse to be watched over, to have students and staff watched over by machines of purported loving grace. We must put our bodies upon the gears and upon the wheels and make the machines stop.


Ten Years of This?!


Ten years ago today, I bought the domain hackeducation.com and launched this website, determined to do a better job covering the revival of ed-tech startups than the technology press did. Hack Education has changed a lot in the past decade. I no longer model this site on those publications. I've turned toward criticism and analysis and away from reporting. Hack Education remains, I think, a chronicle of the last ten years in education technology. It's become a record of the many public talks I've given. I am so proud of this work.

I do recognize that my updates here have been infrequent in the last year or so, as I've focused on my book. I should have written something longer today, worthy of commemorating this anniversary. But I haven't. (I will have a newsletter out this weekend, hopefully. Maybe.)

Instead, I want to thank you for all your support.

Image credits: Bryan Mathers

The Ed-Tech Imaginary


I gave this keynote this morning at the ICLS Conference, not in Nashville as originally planned (which is a huge bummer as I really want some hot chicken).

As I sat down to prepare this talk, I had to go back into my email archives to see what exactly I'd said I'd say. It was October 29, 2019 when I sent the conference organizers my title and abstract. October 29, 2019 — lifetimes ago. Do you remember what happened on October 29? Lt. Col. Alexander Vindman testified in front of Congress about President Trump's phone call with the leader of Ukraine. Lifetimes ago. I'm guessing there are plenty of you who submitted your conference papers in the "Before Times" and now struggle too to decide if and how you need to rethink and rewrite your whole presentation.

I do still want to talk to you today about "the ed-tech imaginary," and I think some of the abstract I wrote last year still stands:

How do the stories we tell about the history and the future of education (and education technology) shape our beliefs about teaching and learning — the beliefs of educators, as well as those of the general public?

I do wonder, however, if or how much our experiences over the past four months or so have shifted our beliefs about the possibilities of ed-tech — our experiences as teachers and students and parents certainly, but also our experiences simply as participants and observers of the worlds of work-from-home and Zoom-school. Do people still imagine, do people still believe that technology is the silver bullet? I don't know. I do know I hear statements like this a lot: "I can't imagine how we will go back to face-to-face classes in the fall." And at the same time, I hear this: "I can't imagine there won't be football." What do we now imagine for the future?

I'd like to think that it's not just the pandemic that has changed us and changed our expectations of school and of ed-tech. So too, I hope, have the scenes of racist police brutality and the protests that have arisen in response. Black lives matter.

We can say "Black lives matter," but we must also demonstrate through our actions that Black lives matter, and that means we must radically alter many of our institutions and practices, recognizing their inhumanity and carcerality. And that includes, no doubt, ed-tech. How much of ed-tech is, to use Ruha Benjamin's phrase, "the new Jim Code"? How much of ed-tech is designed by those who imagine students as cheats or criminals, as deficient or negligent?

"To see things as they really are," legal scholar Derrick Bell reminds us, "you must imagine them for what they might be."

I write a lot about the powerful narratives that shape the ways in which we think about education and technology. But I won't lie. I tend to be pretty skeptical about exercises in "reimagining school." "Reimagining" is a verb that education reformers are quite fond of. And "reimagining" seems too often to mean simply defunding, privatizing, union-busting, dismantling, outsourcing.

We must recognize that the imagination is political. And if Betsy DeVos is out there "reimagining," then we best be resisting not just dreaming alongside her.

The "ed-tech imaginary," as I have argued elsewhere, is often the foundation of policies. It's certainly the foundation of keynotes and marketing pitches. It includes the stories we invent to explain the necessity of technology, the promises of technology; the stories we use to describe how we got here and where we are headed. Tall tales about "factory-model schools" and so on.

Despite all the talk about our being "data-driven," about the rigors of "learning sciences" and the like, much of the ed-tech imaginary is quite fanciful. Wizard of Oz pay-no-attention-to-the-man-behind-the-curtain kinds of stuff.

This storytelling is, nevertheless, quite powerful rhetorically, emotionally. It's influential internally, within the field of education and education technology. And it's influential externally — that is, in convincing the general public about what the future of teaching and learning might look like, should look like, and making them fear that teaching and learning today are failing in particular ways. This storytelling hopes to set the agenda.

In a talk I gave last year, I called this "ed-tech agitprop" — agitprop being the shortened name of the Soviet Department for Agitation and Propaganda, which was responsible for explaining communist ideology and convincing the people to support the party. This agitprop took a number of forms — posters, press, radio, film, social networks — all in the service of spreading the message of the revolution, in the service of shaping public beliefs, in the service of directing the country towards a particular future. I think we can view the promotion of ed-tech as a similar sort of process — the stories designed to convince us that the future of teaching and learning will be a technological wonder. The "jobs of the future that don't exist yet." The push for everyone to "learn to code."

Arguably, one of the most powerful, most well-known stories of the future of teaching and learning looks like this:

Now, you can talk about the popularity of TED Talks all you want — how the ideas of Sal Khan and Sugata Mitra and Ken Robinson have been spread to change the way people imagine education — but millions more people have watched Keanu Reeves, I promise you. This — The Matrix — has been a much more central part of our ed-tech imaginary than any book or article published by the popular or academic press. (One of the things you might do is consider what other stories you know — movies, books — that have shaped our imaginations when it comes to education.)

The science fiction of The Matrix creeps into presentations that claim to offer science fact. It creeps into promises about instantaneous learning, facilitated by alleged breakthroughs in brain science. It creeps into TED Talks, of course. Take Nicholas Negroponte, for example, the co-founder of the MIT Media Lab, who in his 2014 TED Talk predicted that in 30 years' time (that is, 24 years from now), you will swallow a pill and "know English," swallow a pill and "know Shakespeare."

What makes these stories appealing or even believable to some people? It's not science. It's "special effects." And The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Maybe that's the wrong question. Perhaps it's not so much that it's desirable, but it's just how our imaginations have been constructed, constricted even. We can't imagine any other ideal but speed and efficiency.

We should ask, what does it mean in these stories -- in both the Wachowskis' and Negroponte's -- to "know"? To know Kung Fu or English or Shakespeare? It seems to me, at least, that knowing and knowledge here are decontextualized, cheapened. This is a hollowed-out epistemology, an epistemic poverty in which human experience and human culture and human bodies are not valued. But this epistemology informs and is informed by the ed-tech imaginary.

"What if, thanks to AI, you could learn Chinese in a weekend?" an ed-tech startup founder once asked me — a provocation that was meant to both condemn the drawbacks of traditional language learning classroom and prompt me, I suppose, to imagine the exciting possibilities of an almost-instanteous fluency in a foreign language. And rather than laugh in his face — which, I confess that I did — and say "that's not possible, dude," the better response would probably have been something like: "What if we addressed some of our long-standing biases about language in this country and stopped stigmatizing people who do not speak English? What if we treated students who speak another language at home as talented, not deficient?" Don't give me an app. Address structural racism. Don't fund startups. Fund public education.

This comic appeared in newspapers nationwide in 1958 — the same year that psychologist B. F. Skinner published his first article in Science on education technology. You can guess which one more Americans read.

Push-button education. Tomorrow's schools will be more crowded; teachers will be correspondingly fewer. Plans for a push-button school have already been proposed by Dr. Simon Ramo, science faculty member at California Institute of Technology. Teaching would be by means of sound movies and mechanical tabulating machines. Pupils would record attendance and answer questions by pushing buttons. Special machines would be "geared" for each individual student so he could advance as rapidly as his abilities warranted. Progress records, also kept by machines, would be periodically reviewed by skilled teachers, and personal help would be available when necessary.

The comic is based on an essay by Simon Ramo titled "The New Technique in Education" in which he describes at some length a world in which students' education is largely automated and teachers are replaced with "learning engineers" — a phrase that has become popular again in certain ed-tech reform circles. This essay and the comic, I'd argue, helped establish an ed-tech imaginary that is familiar to us still today. Push-button education is "personalized learning." Personalized learning is push-button education.

(Ramo is better known in other circles as "the father of the intercontinental ballistic missile," incidentally.)

Another example of the ed-tech imaginary from post-war America: The Jetsons. The Hanna-Barbera cartoon — a depiction of the future of the American dream — appeared on prime-time television during the height of the teaching machine craze in the 1960s. Mrs. Brainmocker, young Elroy Jetson's robot teacher (who, one must presume by her title, was a married robot teacher), appeared in just one episode — the very last one of the show's original run in 1963.

Mrs. Brainmocker was, of course, more sophisticated than the teaching machines that were peddled to schools and to families at the time. The latter couldn't talk. They couldn't roll around the classroom and hand out report cards. Nevertheless, Mrs. Brainmocker’s teaching — her functionality as a teaching machine, that is — is strikingly similar to the devices that were available to the public. Mrs. Brainmocker even looks a bit like Norman Crowder's AutoTutor, a machine released by U.S. Industries in 1960, which had a series of buttons on its front that the student would click on to input her answers and which dispensed a paper read-out from its top containing her score. An updated version of the AutoTutor was displayed at the World's Fair in 1964, one year after The Jetsons episode aired.

Teaching machines and robot teachers were part of the Sixties' cultural imaginary — perhaps that's the problem with so many Boomer ed-reform leaders today. But that imaginary — certainly in the case of The Jetsons — was, upon close inspection, not always particularly radical or transformative. The students at Little Dipper Elementary still sat in desks in rows. The teacher still stood at the front of the class, punishing students who weren't paying attention. (In this case, that would be school bully Kenny Countdown, whom Mrs. Brainmocker caught watching the one-millionth episode of The Flintstones on his TV watch.)

Not particularly radical or transformative in terms of pedagogy and yet utterly exclusionary in terms of politics. This ed-tech imaginary is segregated. There are no Black students at the push-button school. There are no Black people in The Jetsons — no Black people living the American dream of the mid-twenty-first century.

To borrow from artist Alisha Wormsley, "there are Black people in the future." Pay attention when an imaginary posits otherwise. To decolonize the curriculum, we must also decolonize the ed-tech imaginary.

There are other stories, other science fictions that have resonated with powerful people in education circles. Mark Zuckerberg gave everyone at Facebook a copy of the Ernest Cline novel Ready Player One, for example, to get them excited about building technology for the future — a book that is really just a string of nostalgic references to Eighties white boy culture. And I always think about that New York Times interview with Sal Khan, where he said that "The science fiction books I like tend to relate to what we're doing at Khan Academy, like Orson Scott Card's 'Ender's Game' series." You mean, online math lectures are like a novel that justifies imperialism and genocide?! Wow.

There are other stories, of course.

The first science fiction novel, published over 200 years ago, was in fact an ed-tech story: Mary Shelley's Frankenstein. While the book is commonly interpreted as a tale of bad science, it is also the story of bad education — something we tend to forget if we only know the story through the 1931 film version. Shelley's novel underscores the dangerous consequences of scientific knowledge, sure, but it also explores how knowledge that is gained surreptitiously or gained without guidance might be disastrous. Victor Frankenstein, stumbling across the alchemists and then having their work dismissed outright by his father, stoking his curiosity so much that a formal (liberal arts?) education can't change his mind. And the creature, abandoned by Frankenstein and thus without care or schooling, learning to speak by watching the De Lacey family, learning to read by watching Safie, "the lovely Arabian," do the same, finding and reading Paradise Lost.

"Remember that I am thy creature," the creature says when he confronts Frankenstein, "I ought to be thy Adam; but I am rather the fallen angel, whom thou drivest from joy for no misdeed. Everywhere I see bliss, from which I alone am irrevocably excluded. I was benevolent and good — misery made me a fiend." Misery and, perhaps, reading Milton.

I've recently finished writing a book, as some of you know, on teaching machines. It's a history of the devices built in the mid-twentieth century — before computers — that psychologists like B. F. Skinner believed could be used to train children (much as he trained pigeons) through operant conditioning. Teaching machines would, in the language of the time, "individualize" education. It's a book about machines, and it's a book about Skinner, and it's a book about the ed-tech imaginary.

B. F. Skinner was, I'd argue, one of the best known public intellectuals of the twentieth century. His name was in the newspapers for his experimental work. His writing was published in academic journals as well as the popular press. He was on television, on the cover of magazines, on bestseller lists.

Incidentally, here's how Ayn Rand described Skinner's infamous 1971 book Beyond Freedom and Dignity, a book in which he argued that freedom was an illusion, a psychological "escape route" that convinced people their behaviors were not controlled or controllable:

"The book itself is like Boris Karloff's embodiment of Frankenstein's monster,” Rand wrote, "a corpse patched with nuts, bolts and screws from the junkyard of philosophy (Pragmatism, Social Darwinism, Positivism, Linguistic Analysis, with some nails by Hume, threads by Russell, and glue by the New York Post). The book's voice, like Karloff's, is an emission of inarticulate, moaning growls — directed at a special enemy: 'Autonomous Man.'"

Note: I only cite Ayn Rand here because of the Frankenstein reference. It's also a reminder that the enemy of your enemy need not be your friend. And it's always worth pointing out how much of the Silicon Valley imaginary — ed-tech or otherwise — is very much a Randian fantasy of libertarianism and personalization.

Part of the argument I make in my book is that much of education technology has been profoundly shaped by Skinner — even though most practitioners today would say they reject his theories, that cognitive science has supplanted behaviorism, and that after Ayn Rand and Noam Chomsky trashed Beyond Freedom and Dignity, no one paid attention to Skinner anymore. Which is odd, considering there are whole academic programs devoted to "behavioral design," bestselling books devoted to the "nudge," and so on.

In 1971, the same year that Skinner's Beyond Freedom and Dignity was published, Stanley Kubrick released his film A Clockwork Orange. And I contend that the movie did much more damage to Skinner's reputation than any negative book review.

To be fair, the film, based on Anthony Burgess's 1962 novel, did not depict operant conditioning. Skinner had always argued that positive behavioral reinforcement was far more effective than aversion therapy — than the fictional "Ludovico Technique" that A Clockwork Orange portrays.

A couple of years after the release of the film, Anthony Burgess wrote an essay (unpublished) reflecting on his novel and the work of Skinner. Burgess made it very clear that he opposed the kinds of conditioning that Skinner advocated — even if, as Skinner insisted, behavioral controls and social engineering could make the world a better place. "It would seem," Burgess concluded, "that enforced conditioning of a mind, however good the social intention, has to be evil." Evil.

Many people who tell the story of ed-tech say that Skinner's teaching machines largely failed because computers came along. But what if what led to the widespread rejection of teaching machines — for a short time, at least — was in part that the "ed-tech imaginary" shifted and we recognized the dystopia, the inhumanity, the carcerality in behavioral engineering and individualization? The imaginary shifted, and politics shifted. A Senate subcommittee investigated behavior modification methods in 1974, for example.

How do we shift the imaginary again? And just as importantly, of course, how do we shift the reality? How do we design and adopt ed-tech that does not harm users?

First, I think, we must recognize that ed-tech does do harm. And then, we must realize that there are alternatives. And there are different stories we can turn to outside those that Silicon Valley and Hollywood have given us for inspiration.

I'll close here with one more story — not a piece of the ed-tech imaginary per se, but one that maybe could be, something that points towards possibility, something that might help us tell stories and enact practices that are less carceral, more liberatory. In her essay "The Carrier Bag Theory of Fiction," science fiction writer Ursula K. Le Guin offers some insight into the (Capital-H) Hero and (Capital-A) Action that have long dominated the stories we have told about Western civilization, its history and its future. This is our mythology. She refers to that famous scene in another Stanley Kubrick film, 2001: A Space Odyssey, in which a bone is used to murder an ape and then gets thrown into the sky where it becomes a spaceship. Weapons and Heroes and Action. "I'm not telling that story," she says. Instead of the bone or the spear, she's interested in a different tool from human evolution: the bag — something to carry or store grain in, something to carry a child in, something that sustains the community, something where you put precious items that you will want to take out later and study.

That's the novel, Le Guin says — the novel embodies the carrier bag theory of fiction. And as the Hero with his pointy sticks tends to look rather silly in that bag, she argues, we've developed different characters instead to fill our novels. But the genre of science fiction, to the contrary, has largely embraced that older Hero narrative.

As Le Guin writes: "If science fiction is the mythology of modern technology, then its myth is tragic. 'Technology,' or 'modern science' (using the words as they are usually used, in an unexamined shorthand standing for the 'hard' sciences and high technology founded upon continuous economic growth), is a heroic undertaking, Herculean, Promethean, conceived as triumph, hence ultimately as tragedy. The fiction embodying this myth will be, and has been, triumphant (Man conquers earth, space, aliens, death, the future, etc.) and tragic (apocalypse, holocaust, then or now)."

I'd say that this applies to science and technology as fields, not just as fictions. Think of Elon Musk shooting his sports car into space.

As we imagine a different path forward for teaching and learning, perhaps we can devise a carrier bag theory of ed-tech, if you will. Indeed, as I hope I've shown you this morning, so much of the ed-tech imaginary is wrapped up in narratives about the Hero, the Weapon, the Machine, the Behavior, the Action, the Disruption. And it's so striking because education should be a practice of care, not conquest. Knowledge as a bag that sustains a community, not as a cudgel. Imagine that.

Building Anti-Surveillance Ed-Tech

These are the slides and transcript from my conversation this morning with Paul Prinsloo — a webinar sponsored by Contact North.

Pardon me if I just rant a little. Pardon my language. Pardon my anger and my grief. Or don’t. Let us sit with our anger and our grief a little.

We are living in terrible, terrible times — a global pandemic, economic inequality exacerbated by economic depression, dramatic and worsening climate change, rampant police violence, and creeping fascism and ethno-nationalism. And in the midst of all this danger and uncertainty, we have to navigate both old institutions and practices — many of which are faltering under a regime of austerity and anti-expertise — and new(ish) technology corporations — many of which are more than happy to work with authoritarians and libertarians.

Education technology — as a field, an ideology — sits right at that overlap but appears to be mostly unwilling to recognize its role in the devastation. It prefers to be heralded as a savior. Too many of its advocates refuse to truly examine the ways in which ed-tech makes things worse or admit that the utopia they've long peddled has become a hellscape of exploitation and control for a great many of the people laboring in, with, and under its systems.

Ed-tech may not be the solution; in fact, ed-tech may be the problem — or at the very least, a symptom of such.

Back in February — phew, remember February? — Jeffrey Moro, a PhD candidate in English at the University of Maryland, wrote a very astute blog post, "Against Cop Shit," about the techniques and technologies of policing in the classroom.

"For the purposes of this post," Moro wrote, "I define 'cop shit' as 'any pedagogical technique or technology that presumes an adversarial relationship between students and teachers.' Here are some examples:

  • ed-tech that tracks our students' every move
  • plagiarism detection software
  • militant tardy or absence policies, particularly ones that involve embarrassing our students, e.g. locking them out of the classroom after class has begun
  • assignments that require copying out honor code statements
  • 'rigor,' 'grit,' and 'discipline'
  • any interface with actual cops, such as reporting students' immigration status to ICE and calling cops on students sitting in classrooms."

The title of this webinar is "Building Anti-Surveillance Ed-Tech," but that's a bit of a misnomer, as I'm less interested in either "building" or "ed-tech." Before we build, we need to dismantle the surveillance ed-tech that already permeates our schools. And we need to dismantle the surveillance culture that it emerged from. I think this is one of our most important challenges in the months and years ahead. We must abolish "cop shit," recognizing that almost all of ed-tech is precisely that.

I know that that makes people bristle, particularly if your job is administering the "cop shit" or if you are compelled by those with more authority at work to use "cop shit" or if you believe that "cop shit" is necessary because how else do we keep everyone safe.

Why do we have so much "cop shit" in our classrooms, Moro asks. "One provisional answer is that the people who sell cop shit are very good at selling cop shit," he writes, "whether that cop shit takes the form of a learning management system or a new pedagogical technique. Like any product, cop shit claims to solve a problem. We might express that problem like this: the work of managing a classroom, at all its levels, is increasingly complex and fraught, full of poorly defined standards, distractions to our students' attentions, and new opportunities for grift. Cop shit, so cop shit argues, solves these problems by bringing order to the classroom. Cop shit defines parameters. Cop shit ensures compliance. Cop shit gives students and teachers alike instant feedback in the form of legible metrics."

I don't think that ed-tech created "cop shit" in the classroom or created a culture of surveillance in schools by any means. But it has facilitated it. It has streamlined it. It has polished it, handed out badges to those who comply with it, and handed out ClassDojo demerits to those who don't.

People who work in ed-tech and with ed-tech have to take responsibility for this, and not just shrug and say it's inevitable or it's progress or school sucked already and it's not our fault. We have to take responsibility because we are facing a number of crises — some old and some new — that are going to require us to rethink how and why we monitor and control teachers and students. And now, the “cop shit" that schools are being sold isn't just mobile apps that track whether you've completed your homework on time. It's body temperature scanners. Contact tracing. Movement tracking. Immigration status. Political affiliation.

Surveillance practices pre-date digital technologies — of course they do. I pulled my copy of Michel Foucault's Discipline and Punish off the shelf to re-read as I prepared for this talk (and for my next book project, which will be on the history of surveilling children — someday, I'll regale you all with the story of how the baby monitor was invented and reinvented to respond to moral panics of the day), and roll your eyes all you want at the invocation of poststructuralism and the Panopticon. But this is where we reside.

Surveillance in schools reflects the values that schools have (unfortunately) prioritized: control, compulsion, distrust, efficiency. Surveillance is necessary, or so we've been told, because students cheat, because students lie, because students fight, because students disobey, because students struggle. Much of the physical classroom layout, for example, is meant to heighten surveillance and diminish cheating opportunities: the teacher in a supervisory stance at the front of the class, wandering up and down the rows of desks and peering over the shoulders of students. (It's easier, I should note, to shift the chairs in your classroom around than it is to shift the code in your webinar software.) And all of this surveillance, we know, plays out very differently for different students in different schools — which schools require students to walk through metal detectors, which schools call the police for disciplinary infractions, which schools track what students do online, even when they're at home. And nowadays, especially when they're at home.

Of course, educators — teachers and staff — are at home now too. (Or my god, I hope they are.) And the surveillance technology that's been wielded against students will surely be used against them as well.

We can already see some of this at play outside of educational institutions in the new workplace surveillance tools that many companies are adopting. For a very long time, the argument that many employers made against working from home was that they didn't trust their employees to be productive. The supervisor needed to be able to walk by your desk at any moment and make sure you were "gonna have those TPS reports to us by this afternoon," to borrow a phrase from the movie Office Space. Companies are now installing software on employees' computers to track where they are, for how long, doing what. Much as education technology is designed on the basis of distrust of students, enterprise technology — that is, technology sold to large businesses — is designed around a distrust of workers. Again, there's a long history here — one that isn't just about computing. The punch clock, for example, was invented in 1888 by a jeweler, Willard Legrand Bundy, in order to keep track of what time his employees came and left work. He and his brother founded the Bundy Manufacturing Company to manufacture the devices, and after a series of mergers, it became a part of a little company called International Business Machines — one we know better as IBM. Those "business machines" were sold with the promise of more efficient workplaces, of course, and that meant monitoring workers. And that included the work teachers and students do at school.

Zoom, this lovely piece of videoconferencing software we are using right now, is an example of enterprise technology. Zoom never intended to serve the education market, despite its widespread adoption since "work-from-home" began earlier this year. And there is quite a bit about the functionality of the Zoom software that reveals whose interests it serves — the ability to track who's paying attention, for example, and who's actually working on something else in a different application (a feature, I will say, that the company disabled earlier this year after complaints about its fairly abysmal security and privacy practices).

Who's cheating the time-clock, right? Who's cheating the boss. What are workers doing? What are workers saying? Enterprise software and ed-tech software — both "cop shit" — claim they can inform the management — the principal, the provost. This software claims it knows what we're up to, and if it can't stop us from misbehaving, it can narc us out.

What it's been coded to identify as "misbehavior" is fairly significant. Early in June, if you'll recall, at the behest of Beijing, Zoom disabled the accounts of Chinese dissidents who were planning on commemorating the Tiananmen Square protests — something that should give us great pause when it comes to academic freedom on a platform that so many schools have adopted.

Digital technology companies like to say that they're increasingly handing over decision-making to algorithms — it's not that Beijing made us do it; the algorithm did. Recall Facebook CEO Mark Zuckerberg testifying before Congress, insisting that AI would prevent abuse and disinformation. But Facebook does not rely on AI; content moderation is still performed by people — it's terrible, traumatizing, low-paid work.

Ah, the sleight of hand when it comes to the promises of automation. Recall the Mechanical Turk, for example, an eighteenth-century machine that purported to be an automated chess player but was actually operated by a human hidden inside.

Automation is, nonetheless, the promise of surveillance ed-tech — that is, the automation of the work of disciplining, monitoring, grading. We've seen, particularly with the switch to online learning, a push for more proctoring "solutions" that gather immense amounts of data to ascertain whether or not a student is cheating. Proctoring software is some of the most outrageous "cop shit" in schools right now.

These tools gather and analyze far more data than just a student's responses on an exam. They require a student to show photo identification to their laptop camera before the test begins. Depending on what kind of ID they use, the software gathers data like name, signature, address, phone number, driver's license number, passport number, along with any other personal data on the ID. That might include citizenship status, national origin, or military status. The software also gathers physical characteristics or descriptive data including age, race, hair color, height, weight, gender, or gender expression. It then matches that data to the student's "biometric faceprint" captured by the laptop camera. Some of these products also capture a student's keystrokes and keystroke patterns. Some ask the student to hand over the password to their machine. Some track location data, pinpointing where the student is working. They capture audio and video from the session — the background sounds and scenery from a student's home. Some ask for a tour of the student's room to make sure there aren't "suspicious items" on the walls or nearby.

The proctoring software then uses this data to monitor a student's behavior during the exam and to identify patterns that it infers to be cheating — if their eyes stray from the screen too long, for example. The algorithm — sometimes in concert with a human proctor — determines who is a cheat. But more chilling, I think, the algorithm decides who is suspicious, decides what is suspicious.
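
To make the crudeness concrete, here is a deliberately simplified sketch of the kind of threshold rule described above. It is my own caricature, not any vendor's actual code (those systems are proprietary): a hypothetical gaze tracker reports whether a student's eyes are on the screen, and any stretch of looking away longer than a fixed cutoff gets flagged as suspicious.

```python
# A caricature of proctoring-style inference -- not any vendor's code:
# flag any stretch of "eyes off screen" longer than a fixed cutoff.
# The gaze samples below are invented for illustration.

def flag_suspicious(gaze_samples, sample_interval=0.5, cutoff=5.0):
    """gaze_samples: booleans at fixed intervals; True = eyes on screen.
    Returns timestamps (seconds) where an 'away' stretch exceeded the cutoff."""
    flags = []
    away = 0.0
    already_flagged = False
    for i, on_screen in enumerate(gaze_samples):
        if on_screen:
            away, already_flagged = 0.0, False
        else:
            away += sample_interval
            if away > cutoff and not already_flagged:
                flags.append(i * sample_interval)  # the "incident" timestamp
                already_flagged = True
    return flags

# A student who looks down at scratch paper for six seconds gets flagged:
samples = [True] * 20 + [False] * 12 + [True] * 20
print(flag_suspicious(samples))  # -> [15.0]
```

Nothing in a rule like this knows why the student looked away: a crying sibling, a calculator, a printed formula sheet. That is exactly how the software, rather than the teacher, ends up defining "suspicious."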

We know that algorithms are biased, because we know that humans are biased. We know that facial recognition software struggles to identify people of color, and there have been reports from students of color that the proctoring software has demanded they move into more well-lit rooms or shine more light on their faces during the exam. Because the algorithms that drive the decision-making in these products are proprietary and "black-boxed," we don't know if or how they might use certain physical traits or cultural characteristics to determine suspicious behavior.

We do know there is a long and racist history of physiognomy and phrenology that has attempted to predict people's moral character from their physical appearance. And we know that schools have a long and racist history too that runs adjacent to this, as do technology companies — and this is really important. We can see how the mistrust and loathing of students is part of a proctoring company's culture and gets baked into its software when, for example, the CEO posts copies of a student's chat logs with customer service onto Reddit, as the head of Proctorio did last month.

That, my friends, is some serious "cop shit." Cops have no business in schools. And frankly, neither does Proctorio.

So, if we are to build anti-surveillance ed-tech, we have much to unwind within the culture and the practices of schools — so much unwinding and dismantling before we even start building.

Indeed, I will close by saying that — as with so much in ed-tech — the actual tech itself may be a distraction from the conversation we should have about what we actually want teaching and learning to look like. We have to change the culture of schools, not just adopt kinder ed-tech. Chances are, if you want to focus on the tech because it's tech, you're selling "cop shit."

'Luddite Sensibilities' and the Future of Education

This is the transcript of my keynote at the Digital Pedagogy Lab this morning. Except not really. It was a "flipped" keynote, so this is more like the pre-reading for what I actually talked about. Sort of.

I have really struggled to prepare a keynote for you all. This isn't the first talk I've done since my son died. Hell, it's not even the second or third. But this one has been the hardest to write, in part because I am so close with so many of you in the DigPed community, and what I want right now is to be with you, in person, to cry with you and laugh with you and rage with you and scheme with you.

I know that we're all struggling to muddle through this crisis — or these crises, I should say: the pandemic, economic precarity, school re-opening, police violence, creeping authoritarianism. So much loss. So much death. And I know that it's probably for the best that we use digital technologies in lieu of gathering face-to-face — for school or for work or for professional development or for socializing. Plenty of folks insist that these digital tools can be used well for teaching and learning, that online education doesn't have to be inferior to offline, that offline can be pretty wretched already. (Indeed, that's likely why you all are here: to work on bettering your digital pedagogical practices with an eye to equity and justice.)

But I remain steadfast in my criticism of education technologies in almost all their forms and functions. Indeed, the problems that we've long identified with ed-tech — privacy violations, security concerns, racist algorithms, accessibility and access issues, all-male leadership teams, outsourcing, disruptive bullshittery, and so on — are still here. And I fear we are at a particularly dangerous crossroads for education because of ed-tech. The danger is not simply because of the entrepreneurial and the venture capitalist sharks circling our institutions, but also because the narratives, long foisted upon us, about the necessity of ed-tech are becoming more and more entrenched, more and more pervasive. These narratives have always tended to repress the trauma and anxiety associated with the adoption of new technologies and more broadly with the conditions, the precarity, of everyday life. These narratives want us to forget that ed-tech is, first and foremost, beholden to the ideologies of machines, efficiencies, and capitalism.

But this is ed-tech's big moment, or so we're told. And all those folks who predicted a decade or so ago that schools would all be online by 2020, that universities would all be bankrupt? They may just be right.

So, that's the other struggle I've had trying to prepare this keynote. What — right now — do I have to offer as a speaker? I mean, who really needs to hear me be Cassandra when we can all see the impending doom? Who wants to hear me criticize the LMS or the video-conferencing software when we're stuck with it?

But that's the message of this talk, I suppose: we're not.

We're not stuck. We don't have to surrender. We can refuse, and we should. And we should support students when they refuse. These can be little refusals — small acts of resistance, obfuscations, that come from positions of little to no power. These can be loud refusals in support of those with little to no power. We can push back, and we can demand better. It didn't have to be like this (imagine me gesturing widely), and it doesn't have to be like this (imagine me pointing at the screen).

In 1984, novelist Thomas Pynchon wrote an article asking "Is It O.K. to Be a Luddite?" Regardless of what Pynchon thinks, I'm here to tell you that yes, yes it is. Luddites are imagined as the "counterrevolutionaries" of the Industrial Revolution and as such the enemies of science and technology. The name "Luddite" is used as a pejorative to dismiss anyone who frowns at technology, anyone who's perceived to be clinging to tradition over "progress." But the Luddites have been unfairly maligned, I'd say, as this group of late 18th / early 19th century English textile workers — skilled, technical workers — were not opposed to machines despite their famed machine-smashing. What they opposed were the exploitative practices of the textile factory owners — that is, the emerging practices of capitalism. The Luddites' tactic of what historian Eric Hobsbawm called "collective bargaining by riot" was used by workers in other industries as well.

I'm rather fond of Pynchon's essay because it weaves together several strands about Luddism — that Ned Ludd first smashed a stocking frame in Leicestershire in 1779; that Lord Byron was one of the only members of Parliament who opposed legislation in 1812 that would make the smashing of loom machines punishable by death; that Byron wrote "the Song of the Luddites" in 1816 — a "hymn so inflammatory that it wasn't published till after the poet's death" (part of which, incidentally, I have tattooed on my right forearm — "Down with all kings but King Ludd"); and that Byron had spent the previous summer with his friends Percy Shelley and Mary Shelley at a chateau on Lake Geneva, where the latter wrote arguably the greatest science fiction novel, Frankenstein. Pynchon calls Frankenstein a Luddite novel, and I think it's fair to call it an ed-tech novel too, because it isn't simply about questions of science and ethics, but about education — or rather the mis-education of both Victor and the creature.

Pynchon does not mention one of the other interesting bits of Byron trivia — and trust me, there are many: his daughter was Ada Lovelace, the mathematician who helped Charles Babbage write algorithms for his proposed Analytical Engine, a general purpose computational machine. Ada Lovelace is often called the first computer programmer. Nor does Pynchon mention that we can trace the history of computing back not only to Lovelace but to the Jacquard machine, an automated loom device invented in 1804 that was run via punchcards — each card corresponding to the pattern to be woven. These would have been nice details for Pynchon to include, I'd argue, as at the end of his essay he speculates that the "Luddite sensibility" would struggle in the dawning Computer Age, that few Luddites would be willing to smash these new machines as they'd believe computers would help us to "cure cancer, save ourselves from nuclear extinction, grow food for everybody, detoxify the results of industrial greed gone berserk." Rather than a machine of exploitation, Pynchon argued, the computer would become a tool for revolution, one that, with its immense data-analyzing capabilities, would finally overturn the "permanent power establishment of admirals, generals and corporate CEO's."

Ha.

Unlike Pynchon, I do believe the "Luddite sensibility" survives, although not necessarily, as the insult suggests, in those who drag their feet in adopting the latest gadget. And this seems to be a perfect time to cultivate it further. Recall, the Luddites emerged in the economic devastation of the Napoleonic Wars — they wanted jobs, yes, but they wanted freedom and dignity. As we face economic devastation today, we need some solidarity and perhaps even a little sabotage. We can look at ed-tech as something to smash, knowing that what we aim for are the systems of violence, exploitation, neoliberalism, mechanization, and standardization that ed-tech demands.

This requires more than a Luddite sensibility. It requires a Luddite strategy. And for us, I'd say, it is time for a Luddite pedagogy.

A Luddite pedagogy is not about making everyone put away their laptops during class — remember those days? Again, Luddism is not about the machines per se; it's about machines in the hands of capitalists and tyrants — in the case of ed-tech, that's both the corporations and the State, especially ICE and the police. Machines in the hands of a data-driven school administration. Luddism is about a furious demand for justice, about the rights of workers to good working conditions, adequate remuneration, and the possibility of a better tomorrow — and let's include students in our definition of "worker" here as we do call it "school work" after all.

A Luddite pedagogy is about agency and urgency and freedom. "A Luddite pedagogy is a pedagogy of liberation," Torn Halves writes in Hybrid Pedagogy, "and, as such, it clashes head on with the talk of liberation peddled by advocates of ed-tech. According to the latter, the child, previously condemned to all the unbearably oppressive restrictions of having to learn in groups, can now be liberated by the tech that makes a 1:1 model of education feasible, launching each and every child on an utterly personal learning journey. Liberation as personalization — here the Luddite finds something that ought to be smashed." A Luddite pedagogy doesn't sneer when people balk at new technologies; it doesn't assume they won't use them because they're incompetent; it finds strength in non-compliance.

A Luddite pedagogy is a pedagogy of subversion and transgression. It is a pedagogy of disobedience and dismantling. It is a pedagogy of refusal and of care. It is — with a nod to Jesse's opening keynote — against models and against frameworks (quite literally, Luddites smash frames). It is wildly undisciplined.

Let us be Luddites, not pigeons.

(You can read what I actually talked about in my keynote here.)

Pigeon Pedagogy

These were my remarks today during my "flipped" keynote at DigPed. You can read the transcript of my keynote here.

We haven't had a dog in well over a decade. Kin and I travel so much that it just seemed cruel. But now, what with the work-from-home orders and no travel till there's a vaccine (and perhaps even beyond that), we decided to get one.

It's actually quite challenging to adopt a dog right now, as everyone seems to be of the same mind as us. And even before the pandemic, there was a bit of a dog shortage in the US. Spay-and-neuter programs have been quite effective, and many states have passed laws outlawing puppy mills. The West Coast generally imports dogs from other parts of the country, but these rescue-relocations have largely been shut down. The shelters are pretty empty.

It's a great time to be a dog.

Adopting a dog is quite competitive, and we have been on multiple waiting lists. But finally, we lucked out, and last week we adopted Poppy. She is a 9-month-old Rottie mix. She weighs about 55 pounds. She is not housebroken yet — but we're getting there. She's very sweet and super smart and is already getting better on the leash, at sitting in the apartment elevator, at sitting at street corners, at sitting when people and other dogs approach her. It's important, I think, if you have a big dog, that you train them well.

If you have a dog, you probably know that the best way to train it is through positive behavior reinforcement. That is, rather than punishing the dog when she misbehaves, the dog should be rewarded when she exhibits the desired behavior. This is the basis of operant conditioning, as formulated by the infamous psychologist B. F. Skinner.

The irony, of course. I've just finished a book on the history of teaching machines — a book that argues that Skinner's work is fundamental to that history, to how ed-tech is still built today. Ed-tech is operant conditioning, and we should do everything to resist it. And now I'm going to wield it to shape my dog's behavior.

Some background for those who don't know: As part of his graduate work, Skinner invented what's now known as "the Skinner Box." This "operant conditioning chamber" was used to study and to train animals to perform certain tasks. For Skinner, most famously, these animals were pigeons. Do the task correctly; get a reward (namely food).

Skinner was hardly the first to use animals in psychological experiments that sought to understand how the learning process works. Several decades earlier, for his dissertation research, the psychologist Edward Thorndike had built a "puzzle box" in which an animal had to push a lever in order to open a door and escape (again, often rewarded with food for successfully completing the "puzzle"). Thorndike measured how quickly animals figured out how to get out of the box after being placed in it again and again and again — their "learning curve."

We have in the puzzle box and in the Skinner Box the origins of education technology — some of the very earliest "teaching machines" — just as we have, in the work of Thorndike and Skinner, the foundations of educational psychology and, as Ellen Condliffe Lagemann pronounced in her famous statement "Thorndike won and Dewey lost," of many of the educational practices we carry through to this day. (In addition to developing the puzzle box, Thorndike also developed prototypes for the multiple choice test.)

"Once we have arranged the particular type of consequence called a reinforcement," Skinner wrote in 1954 in "The Science of Learning and the Art of Teaching," "our techniques permit us to shape the behavior of an organism almost at will. It has become a routine exercise to demonstrate this in classes in elementary psychology by conditioning such an organism as a pigeon.”

"...Such an organism as a pigeon." We often speak of "lab rats" as shorthand for the animals used in scientific experiments. We use the phrase too to describe people who work in labs, who are completely absorbed in performing their tasks again and again and again.

In education and in education technology, students are also the subjects of experimentation and conditioning. Indeed, that is the point. In Skinner's framework, they are not "lab rats"; they are pigeons. As he wrote,

"...Comparable results have been obtained with pigeons, rats, dogs, monkeys, human children… and psychotic subjects. In spite of great phylogenetic differences, all these organisms show amazingly similar properties of the learning process. It should be emphasized that this has been achieved by analyzing the effects of reinforcement and by designing techniques that manipulate reinforcement with considerable precision. Only in this way can the behavior of the individual be brought under such precise control."

Learning, according to Skinner and Thorndike, is about behavior, about reinforcing those behaviors that educators deem "correct" — knowledge, answers, not just sitting still and raising one's hand before speaking (a behavior I see is hard-coded into this interface). When educators fail to shape, reinforce, and control a student's behavior through these techniques and technologies, they are at risk, in Skinner's words, of "losing our pigeon."

In 1951, he wrote an article for Scientific American: "How to Train Animals." I pulled it out again to prepare for this talk today and realized that it contains almost all the tips and steps that dog trainers now advocate. Get a clicker and use it as the conditioned reinforcer: click, then give the treat, so that the dog associates the click with the reward. (The click is faster than the treat.) You can teach a dog almost anything in less than twenty minutes, Skinner insisted. And once you're confident with that, you can train a pigeon. And then you can train a baby. And then…

Two years after that article, Skinner came up with the idea for his teaching machine. Visiting his daughter's fourth grade classroom, he was struck by the inefficiencies. Not only were all the students expected to move through their lessons at the same pace, but when it came to assignments and quizzes, they did not receive feedback until the teacher had graded the materials — sometimes a delay of days. Skinner believed that both of these flaws in school could be addressed through mechanization, and he built a prototype for his teaching machine, which he demonstrated at a conference the following year.

Skinner believed that materials should be broken down into small chunks and organized in a logical fashion for students to move through. The machine would show one chunk, one frame at a time, and if the student answered the question correctly, they could move on to the next frame. Skinner called this process "programmed instruction." We call it "personalized learning" today. And yes, this involves a lot of clicking.
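
To make the mechanics concrete, here is a minimal sketch of that programmed-instruction loop in Python. The frames and answers are invented for illustration; they are not drawn from Skinner's actual programs.

```python
# A minimal sketch of programmed instruction: present one small frame
# at a time, give immediate feedback, and advance only on a correct
# response. The frames below are invented for illustration.

frames = [
    ("2 + 2 = ?", "4"),
    ("4 + 4 = ?", "8"),
    ("8 + 8 = ?", "16"),
]

def run_program(frames):
    for prompt, answer in frames:
        while True:
            response = input(prompt + " ").strip()
            if response == answer:
                print("Correct!")   # immediate reinforcement
                break               # advance to the next frame
            print("Try again.")     # no moving on until the right answer

if __name__ == "__main__":
    run_program(frames)
```

The pattern should look familiar: show a frame, demand a response, reinforce immediately, advance only on success. Arguably it is the same loop that sits underneath much of today's "adaptive" courseware.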

Skinner is often credited with inventing the teaching machine. He didn't. Sidney Pressey, another educational psychologist, had built one decades beforehand. (Skinner said that Pressey's was more testing than teaching machine.) But regardless of who was or wasn't "the first," Skinner has shaped education technology immensely. Even though his theories have largely fallen out of favor in most education psychology circles, education technology (and technology more broadly) seems to have embraced them — often, I think, without acknowledging where these ideas came from. Our computer technologies are shot through with behaviorism. Badges. Notifications. Haptic alerts. Real-time feedback. Gamification. Click click click.

According to Skinner, when we fail to properly correct behavior — facilitated by and through machines — we are at risk of "losing our pigeons." But I'd contend that with this unexamined behaviorist bent of (ed-)tech, we actually find ourselves at risk of losing our humanity. To use operant conditioning, Skinner wrote in his article on animal training, "we must build up some degree of deprivation or at least permit a deprivation to prevail which it is within our power to reduce." That is, behavioral training relies on deprivation. Behaviorist ed-tech relies on suffering — suffering that we could eliminate were we not interested in exploiting it to reinforce compliance. This pigeon pedagogy stands in opposition to the Luddite pedagogy I wrote about in the text for this keynote.

So, here's to our all being "lost pigeons," and unlearning our training. But dammit, here's to Poppy learning to be a very good and obedient dog.
