Gayle Rogers
6 min read · Jul 6, 2021


A Response to Inside Higher Ed’s Uber Ratings for Colleges

There have been a lot of bad takes on higher ed during the pandemic. And perhaps it’d be easiest simply to let a piece of clickbait titled “If Colleges Were Rated Like Uber Drivers…” (June 24, 2021) pass without comment. But in this case, we need to speak back. Enough is enough, and with all due respect to Inside Higher Ed, I find it confounding that they published an op-ed as flimsy and degrading as this one.

Brandon Busteed’s piece declares that a generation of students is ready to “fire” their colleges and universities for having failed them this year. He cites as his supporting data a recent survey, conducted in a joint venture between Inside Higher Ed and College Pulse, then “presented” by his employer, Kaplan. (You can access this survey yourself simply by giving over your personal information for “marketing purposes.”) In question 18 of this very survey, “As of today, are you planning to re-enroll for the Fall 2021 academic year?”, 88% of students surveyed say they are planning to return; only 5% are not. This high rate of projected retention (for most schools; we don’t know where these students are enrolled) undercuts the fundamental premise of Busteed’s article. Game over. Right?

Not yet. The premise, more precisely, is that students now want to “fire” their colleges much like Uber does when its drivers’ ratings fall below 4.6 (which itself is not quite accurate, but that’s another matter). Even Busteed admits that his analogy is stretched thin, and no statistician worth their salt would take his metrics seriously. (To call it “apples-to-apples,” as he does, is laughably inaccurate.) I take this to be a catchy premise deployed in hopes of making a larger point: that colleges “failed” at delivering high-quality remote education, with “high-quality” defined in only one way — student ratings. If Busteed cannot see the many problems with this formulation, I invite him to spend more time studying best practices of pedagogical evaluation, of which student responses (not “ratings”) are only one component, and one that must be thoroughly contextualized.

So we need to ask exactly what the impulse behind this attack might be, and why it emerges from a large-scale “Student Voice” survey administered by an industry publication and a data-driven marketing firm, then rolled out to faculty and university admins by a for-profit ed services company. Busteed’s culminating point is that the coming academic year “is no time to take the foot off the pedal of improving pedagogy and the overall quality of the educational experience for students.” Well: isn’t that what all good colleges and universities are doing, all the time? It’s largely why my colleagues and I show up for work. Most of the insights in the survey he cites reiterate this same point. Indeed, surveys conducted internally at nearly every university this past year confirm the same overarching “discovery”: students who choose to attend college in person… prefer an in-person college experience in every way. This should surprise no one. We didn’t even need surveys for this: anyone who was around students this year knew it well.

So what’s important, instead, is that Busteed and his beloved survey are entirely missing the big picture, and that this survey was conceived, designed, and executed very poorly. Students who choose in-person college experiences — the sample population here — already value in-person learning and implicitly devalue remote learning to some unmeasured degree. This is why the University of Phoenix did not suddenly put hundreds upon hundreds of in-person colleges and universities out of business with its disruptive model of online learning. Online schools and asynchronous learning have been thriving, yes — but so have in-person campuses, for good reason: most students prefer them. If I ran a steakhouse, and a pandemic suddenly caused me to run out of meat, leaving me able to serve only salads for a long period of time, I should not expect my regular patrons to hand me five stars. Similarly, in-person students were not apt to rate remote instruction as a supreme “value” (in Busteed’s terms) no matter what bells and whistles any school brought out this year.

But perhaps most astoundingly, on the more local level, only one question of the nineteen in the survey addresses professors’ actual classroom instruction: “My professors were flexible and accommodating if I needed more time to complete an assignment.” On this count, only 17% of professors were rated “inflexible,” and the majority were rated “flexible”; but even here, deadline flexibility is a very narrow part of the overall quality of remote instruction. A better-composed survey would have asked how instructors — both faculty and graduate students — handled remote learning holistically, as the course evaluations at my university and many others did. Think of the work that went into migrating courses to new learning management systems: where are the questions about that? Or the questions about faculty outreach to students struggling with mental health issues, which skyrocketed this year? Instead, the survey gives us mostly rough numerical data points on generalities that anyone who works at a university already knows all too well. I could point to a dozen other potential problems with its formulation, but we don’t have access to its full underlying data.

This survey, in other words, has all the markings of being created and administered by people who are too far removed from actual classrooms to provide useful information to those who are in them. After all, when in-person experiences were made impossible for many students, and when the adaptations differed from how students began their college careers or from how they had envisioned college as high schoolers, it only stands to reason that they would “rate” those adaptations as less than ideal. Nor would one easily find a faculty member who spent their career fantasizing about teaching remotely. But students, faculty, and staff alike worked incredibly hard together, they collaborated in new environments, and they even found some innovations and new opportunities in remote learning spaces that will persist beyond the pandemic.

I am the chair of a large department (I see you, colleagues; thank you for all your hard work), and I spend a large part of my summer reading hundreds upon hundreds of student course evaluations. They are nothing like the simplistic Uber star ratings that Busteed cites, and for all the flaws and biases that we know inhere in them, they give a window onto some amazing work that took place during this past year. I can’t tell you how many students commented along the lines of “Zoom courses were tough, but Professor X was the only thing that saved this year for me.” I could produce dozens of examples easily. And on pure metrics — Busteed’s preferred value system — our faculty saw no dip in overall averages this year anyway. Amazing, right? What this means is that faculty did their jobs exceedingly well, and students understood the circumstances too. Were there frustrations, disappointments, and wishes for a different mode of learning? Of course. But we are professionals, experts in our craft who take enormous pride in our jobs and who care deeply about our students.

We take student feedback seriously (sometimes, true to stereotype, to the point that we can’t let it go), just as we do peer observations, technological experiments, our own trial and error, and much more. We want well-crafted and statistically valid surveys, and we want to hear from students through a number of channels. We’ll engage all of that, learn, adapt, and grow. We want to know where we failed and where we can improve, but we need to hear it from valid sources, and we need it conveyed through valid channels. I imagine that Inside Higher Ed, College Pulse, and Kaplan would agree with such baselines, but they have failed here, and Busteed’s op-ed does them no favors. The idea that we need comparative Uber ratings, a finger wagged at us from outside, to remind us of how consequential it is that we keep striving to do our jobs well this fall? I give that notion, and its cynical lack of respect for what we do, one star.

Gayle Rogers

grogers@pitt.edu

Professor and Chair of English

University of Pittsburgh


Gayle Rogers is the author of Speculation: A Cultural History from Aristotle to AI. He is professor and chair of English at the University of Pittsburgh.