In this post, I want to share some rambling reactions to K. Sabeel Rahman’s piece “The New Octopus” in Logic Magazine. I read the piece as a participant in the 2020 AMRI Academy, and found it thought-provoking and articulate (as is much of Rahman’s work). I don’t claim that any of the points or arguments offered here are particularly decisive – they are mainly just articulated ideas/questions/concerns that came up for me as I read the piece.

The main point I want to mull over in this post is the idea that excessive corporate power could be per se problematic, regardless of whether it is exercised. This point seems important – perhaps central – in Law & Political Economy (LPE) thinking more broadly, and especially in how it differs from more conventional economic approaches. Here is a key articulation of the idea from Rahman (Rahman is here characterizing earlier legal thinkers like Brandeis; emphasis mine):

But what’s notable about the intellectual ferment around scale is the degree to which these debates turned not on bigness, but rather on issues of power and contestability. How much and what kinds of power did these new corporate entities exercise? And how could these concentrations of power be minimized, contested, and held accountable?

Power, crucially, could manifest in “good” and “bad” forms. Just because a firm acted benevolently did not change the fact that it possessed power. The key, then, was not outcomes but potential: so long as the good effects of corporate power relied on the personal decisions of corporate leaders, rather than on institutionalized forms of accountability, it remained a threat.

In short, there is an idea here that the existence of corporate power per se could be a problem or “threat” worthy of regulatory response, regardless of whether or how that power is exerted, and regardless of the extent to which it comes with “bigness” in the sense of literal market concentration. On this view, the key policy question is how to create structures that ensure such private power is (in Rahman’s framing) contestable, or accountable, or “in check”.

While this position is (I think) clear to me, I am not sure I fully understand the argument for adopting it. If concentrated corporate power per se is a threat – even if it is not exerted – what exactly is it a threat to, and why exactly should we worry about it? The one-word answer seems to be that this power is a threat to “liberty”, or perhaps “democracy”. But these are broad terms that are hard to pin down. How are we to know when corporate power has become sufficient to threaten something as big as liberty? And what exactly is corporate power?

In the article, Rahman taxonomizes the key forms of corporate power he sees in the modern technology sector, in particular naming: (1) transmission power, (2) gatekeeping power, and (3) scoring power. While I agree that some of the examples cited here are concerning, I am not sure my reasons are the same as Rahman’s. And further, I am especially not sure that I see how they are examples of power that is per se problematic (i.e. power that would be problematic even if it were not exerted).

For example, Rahman cites Amazon’s ability to use its data to selectively enter and compete with successful third-party sellers as an example of problematic “transmission power”. I can understand how one could argue that this sort of behavior is concerning from a narrower economic perspective, e.g. by appealing to potential effects on innovation, but this does not seem to be Rahman’s argument.

The innovation argument is something like the following: Amazon’s entry (or even potential entry) into downstream product markets could harm innovation incentives for sellers and reduce product quality or variety in the long run [1]. But if Amazon never exerted this power (i.e. never entered third-party markets), and further had no incentive to do so, would we really still be concerned? I am not sure. Would our answer change if Amazon were (even) larger and more powerful than it is today, and an (even) larger share of commerce happened online? Maybe. I do understand the intuition – it’s threatening to have an economy or society that is too heavily dependent on the benevolence of a single actor, even if that actor is presently well-behaved. We recognize this in government and create checks and balances. With the election of Trump, we see why this is important. So too might we like to have checks on even “well-behaved” corporate power.

Roughly, I think this fear of unchecked private power is the type of moral intuition that lies behind this perspective in general. While I do understand it, it is also a bit strange when you peel the layers back. Is power that is never exerted really a relevant form of power? What if there is some other constraining force – like norms or economic incentives – that discourages the use of that power? Is regulatory policy or state action the only (acceptable) way to “check” concentrated private power?

In the tech setting, I think my moral intuitions are more clearly aligned with Rahman’s when considering social platforms like YouTube or Facebook. In these cases, it is more obvious to me how these platforms pose a threat to democracy. For example, if Facebook wanted to swing an election, they almost certainly could, through their control over the distribution of information and propaganda (or even through nudges for certain types of people to get out and vote). Even if there are incentives for Facebook not to do this, it is still disturbing to have our democratic processes broadly hinging on what Mark Zuckerberg (or anyone, really) thinks is best, without further checks.

Even short of deliberate election-throwing, the scale of the power that private individuals at Facebook have over key questions – how the public at large will consume and perceive information, and which information providers (including key social institutions like journalism) will survive or not – is highly concerning to me. For example, I can remember once seeing a presentation from a Facebook data scientist, in which he described some algorithmic changes they made to demote a certain type of content that was seen as not in the public’s best interest (so-called “borderline” content, or something like that). He showed A/B test results from this change, which included evidence about how the changes would affect the pageviews of a wide range of media entities, including the New York Times. There’s a lot I could say here, but, in short, I found it very disturbing that some random product team at Facebook would get to make a decision like this without oversight, given its broad social consequences. And for me, that intuition applies regardless of whether that particular product team consistently made decisions I thought were in the public interest.

Anyhow, if you are someone reading this post who knows more about these topics than I do, feel free to reach out and explain your perspective to me (fossj117 AT gmail DOT com). I don’t claim to be an expert – just someone trying to understand.

Footnotes

  1. I am not sure this is compelling even as a narrow economic argument, but that is not really relevant to the discussion here.