Technocratic Assumptions in Governing Digital Platforms
Earlier today, I listened to a talk at Harvard STS by Tim O’Reilly about his new book on digital platforms, antitrust, and, more broadly, the governance of the digital economy and its many algorithms.
The talk was an interesting benchmark for me of how my thinking about digital technology governance has evolved over the past few years.
Like me, O’Reilly comes from the world of Silicon Valley and is quite familiar with the details and logics of the digital systems under discussion. His focus was especially on large platforms like Facebook, Google, and Amazon.
He talked about how these systems (and their algorithms in particular) are designed primarily with particular “objective functions” in mind (clicks, watch time, whatever), and framed the problems of the digital tech world as problems of outdated or mis-targeted objective functions. If only we chose the right objective functions, the talk seemed to suggest, the digital world might be healthier.
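To make that framing concrete, here is a minimal sketch, in Python, of the kind of ranking step it imagines. Everything in it is hypothetical: the Item fields, the numbers, and the objective itself are my own illustration, not anything from the talk. The point is simply that the designer’s choice of a single scalar objective determines what the whole system pursues.

```python
# A minimal, hypothetical sketch of the "objective function" framing.
# None of this is real platform code; the Item fields and numbers are
# invented purely to illustrate the idea.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_clicks: float      # model's estimate of click probability
    predicted_watch_time: float  # model's estimate of watch time (seconds)

def objective(item: Item) -> float:
    # The whole "values" question is compressed into this one line:
    # swapping in item.predicted_watch_time (or any blend of signals)
    # changes what the entire system optimizes for.
    return item.predicted_clicks

def rank_feed(candidates: list[Item]) -> list[Item]:
    # "Relentless optimization": order everything by the single objective.
    return sorted(candidates, key=objective, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Item("a", predicted_clicks=0.12, predicted_watch_time=45.0),
        Item("b", predicted_clicks=0.31, predicted_watch_time=8.0),
        Item("c", predicted_clicks=0.07, predicted_watch_time=120.0),
    ])
    print([item.item_id for item in feed])  # prints ['b', 'a', 'c']
```

In this frame, “fixing” the system means editing objective(); the question I raise below is whether that framing is itself the problem.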
He also analogized this to the economy more broadly, noting that we similarly design economic policy “algorithms” (tax incentives and the like) to serve some objective function, which may increasingly be the wrong one, yielding rising inequality and so on. And so, the argument goes, we must update the objectives to put things right. O’Reilly invoked Al Roth and market design theory here, another pole of my intellectual influences.
In short, my read of O’Reilly’s take is that he does not take issue with the fundamental (largely technocratic) modes and mechanisms of governance in these digital spaces, including the central role of data and algorithms. He does, however, feel that they ought to be refined, retargeted, and so on.
When I had just come out of the world of building digital technologies, I had ideas that were in many ways consonant with Tim’s. I had been working in a role where I was myself a technocrat, concerned with the “right management” of a digital platform market. When I left that space for an academic path, intending to focus on broader questions in technology policy, I took with me many of the implicit governance assumptions that I had internalized as a data scientist.
In short, I felt that governance of digital platforms through data (experiments and the like) was largely appropriate on some level, though perhaps with a broader set of more socially oriented goals in mind. For example, during my first stint at the Berkman Klein Center, I remember reading and being excited about so-called “Regulation 2.0” ideas along these lines (see, for example, 1 and 2).[1]
Throughout my time in the tech world, though, I did have some serious seeds of anti-technocracy: in my prior work, I remember having conversations with various colleagues about the lack of democratic input by platform participants into the governance and design decisions that we were making, and so on.
Still, in listening to O’Reilly, I noticed how much my views have evolved. Here is the question that I typed out to put to Tim (though I did not submit it):
You talked about the importance of choosing / updating the right objective functions for the algorithmic systems that govern much of the web. But isn’t it possible that the problem is not that we have chosen the wrong objective functions (or failed to update them), but more fundamentally that we have deferred so much of the governance of digital spaces (and economies!) to these “algorithmic” tools of relentless optimization, designed by technocrats (technologists, economists), rather than to other modes of governance that, e.g., embed more democratic oversight in system design?
Another thread of concern that I had toyed with raising was the implicit logic of “unintended consequences” in O’Reilly’s frame: the idea that various objective functions had been selected in a well-intended way, but became distorted, e.g. by the narrow literalness of the computer’s understanding, in ways that “we” did not expect. Unintended for whom? As one of the commentators elegantly put it: who is this we, and what about those of us who may not feel included in it?
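To illustrate the “narrow literalness” point, here is a toy continuation of the sketch above. It is again entirely hypothetical (the hidden_regret field and all the numbers are my invention): the ranker does exactly what its objective says, and in doing so maximizes a harm the objective never measures.

```python
# Toy illustration of a well-intended proxy going wrong. Each item carries
# a hidden "regret" signal that the objective never sees; the ranker
# faithfully maximizes clicks and, incidentally, surfaces regret too.
# All numbers are invented for illustration.

items = [
    {"id": "calm_longread", "predicted_clicks": 0.08, "hidden_regret": 0.1},
    {"id": "outrage_bait",  "predicted_clicks": 0.40, "hidden_regret": 0.9},
    {"id": "useful_howto",  "predicted_clicks": 0.15, "hidden_regret": 0.2},
]

# The computer optimizes exactly what we wrote, and nothing more.
ranked = sorted(items, key=lambda i: i["predicted_clicks"], reverse=True)

print([i["id"] for i in ranked])
# ['outrage_bait', 'useful_howto', 'calm_longread']: clicks are maximized,
# but so is the regret that the objective never measured.
```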
Contra my initial perspective on these topics (which was more in line with O’Reilly’s), I notice here the seeds (or perhaps now saplings) of various intellectual influences that I have encountered since departing the corporate tech world:
- Jasanoffian science and technology studies (STS)
- Law & political economy (LPE) work on antitrust in the digital world (especially by Lina Khan & K. Sabeel Rahman)
- Projects like J. Nathan Matias’s Citizens & Technology Lab (previously CivilServant) and the platform cooperativism movement
- Interaction with the blockchain world
While these influences may seem disparate, they intertwine for me: each questions the implicit technocratic logics of corporate tech platform governance and, in its own way and setting, pushes to inject greater democratic input into the governance of digital spaces.
-
[1] I haven’t revisited these ideas in a while, so I am not sure what I would think about them these days. It would be interesting to explore them again.