Systems of classification and normalization abound on digital platforms. Indeed, much of Foucault’s account of disciplinary mechanisms is recognizable in, for example, Uber’s systems for including, excluding, promoting, or demoting drivers, or in Facebook’s, Google’s, YouTube’s, and TikTok’s choices about how to rank and promote content, ads, and so on. As in Foucault’s description, platforms create (material) systems of “examination” that hierarchize individuals and, through links to punishments (e.g., being removed from the platform) and rewards (e.g., having your content “promoted”), produce conformity in behavior. As others have pointed out, such systems encode normative judgments about which types of behavior are and are not desirable. Platforms often directly or indirectly attribute these value judgments to other platform constituents: YouTube or TikTok is just ranking your content by “how appealing it is to viewers”; Google is just ranking content by “how relevant” it is to searchers. Framings like these de-emphasize the platform’s role in algorithmic curation and in defining normative concepts like “relevance”.

Further complications arise when platform systems of classification interact with academic research that invokes race and gender categories. Consider, for example, a recent study of the “Gender Earnings Gap in the Gig Economy” by Uber- and Stanford-affiliated economists. Using proprietary data from Uber, the study reports a 7% premium in hourly wages for male over female drivers, but argues that this gap can be “entirely attributed to three factors: experience on the platform (learning-by-doing), preferences and constraints over where to work (driven largely by where drivers live and, to a lesser extent, safety), and preferences for driving speed”. Gender differences in driving speed are reported as a key factor explaining the gap.
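To make the statistical move concrete, here is a minimal sketch in Python on synthetic data. Everything in it is invented for illustration (the variable names, the parameter values, the $0.50-per-mph payoff to speed); it is not the paper’s data, code, or exact method. It just shows the basic logic behind a claim like “the gap is entirely attributed to these factors”: a raw gender coefficient that shrinks toward zero once covariates correlated with gender are added as controls.

```python
# Synthetic illustration of a regression-with-controls decomposition.
# All numbers below are made up; none come from the Uber study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

male = rng.integers(0, 2, size=n).astype(float)     # 1 = male driver (invented)
speed = 25 + 2.0 * male + rng.normal(0, 3, size=n)  # assume men drive ~2 mph faster
experience = rng.exponential(1000, size=n)          # lifetime trips, gender-neutral here

# Earnings depend on speed and experience, NOT directly on gender.
# The 0.5 $/hr-per-mph payoff to speed is hard-coded by assumption.
hourly = 20 + 0.5 * (speed - 25) + 0.002 * experience + rng.normal(0, 2, size=n)

raw = sm.OLS(hourly, sm.add_constant(male)).fit()
controls = sm.add_constant(np.column_stack([male, speed, experience]))
controlled = sm.OLS(hourly, controls).fit()

print(f"raw gender coefficient:        {raw.params[1]:+.2f} $/hr")   # ~ +1.0
print(f"controlled gender coefficient: {controlled.params[1]:+.2f} $/hr")  # ~ 0
```

Note what the sketch takes as given: the payoff to speed is baked into how `hourly` is generated, so once speed is controlled for, the gender gap is “explained away”. In the real market that payoff is not a fact of nature; it is set by Uber’s pricing and dispatch rules, which is precisely the point developed below.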

There is much that could be said about this study; through the current lens, however, what jumps out is how Uber’s and the researchers’ construction of difference recedes into the background, allowing differences in outcomes to be attributed to pre-existing differences in driver preferences that the researchers merely discover. In the paper’s frame, faster driving simply “has labor market value” and is more “productive”, and therefore earns higher pay; men simply prefer to drive faster. Elided is Uber’s role in functionally rewarding speed over, say, safe driving (as highlighted by Cobbe, quoted here), as is the possible role of Uber’s feedback system (ratings) in interacting with gendered social expectations about driving speed.¹ Likewise, gender differences in preferences over safety are mentioned, but the platform’s role in creating or addressing those safety concerns is suppressed.

More generally, the point is that the platform creates the whole market here, including its rules and systems of hierarchy. The platform defines what constitutes “productive” work by drivers. The research paper backgrounds this fact, attributing outcomes to neutral market forces. This framing benefits the platform: Uber cannot be held responsible for a gender wage gap caused by exogenous differences in preferences and neutral market forces.

I note that my critique here parallels much of what Safiya Noble argues in Algorithms of Oppression (which I also write about here). In my reading, a key thrust of Noble’s argument concerns how Google positions itself (or is understood) as a “neutral” curator of information on the Internet, and how this makes it harder to hold the platform accountable when representational harms arise from its curation processes. My Uber example extends this point, highlighting the role that academics and other experts can play in stabilizing the understanding of platforms as “neutral”.

Footnotes

  1. This is a testable hypothesis, for what it’s worth. But whether or not it is accurate is not really the point.