Today I continue with a few further reflections on reading Lessig. Many of the resonances from my previous post echo forward in my continued reading.

First, STS links continue to jump out strongly: Lessig foregrounds the idea that values inform technological design (of the Internet in particular), which in turn informs outcomes and what is possible in the world. We should therefore resist understandings that treat the Internet as necessarily designed in some certain way or having some certain properties (e.g. of being “free” and “irregulable”). Though Lessig doesn’t use this word, I take this point as essentially a warning against technological determinism – i.e. the idea that technologies develop according to their own internal, socially-independent logics and then exert a unidirectional influence on society (i.e. tech –> society). Questioning technological determinism is a core thesis in much STS thinking, and Lessig is on board (there’s a concise statement of this thesis in the first chapter of Jasanoff’s The Ethics of Invention, though also elsewhere).

Second, I am also struck further by my “first” and “second” resonances defined in my last post, especially as they relate to online “Identity” – a theme Lessig focuses on in my reading today. From various perspectives, I have been seeing more clearly the centrality of identity in digital space. Two key points from Lessig jump out to me on this theme:

  1. Identity is tightly related to regulability. What is not known or knowable by the state (or others) cannot be regulated. Identity also bears a relationship to (the possibility/efficiency of) economics and commerce, a theme that the economics of digitization literature strongly underscores.

  2. The Net (Lessig always calls it the Net) was designed in a way that made anonymity the default. Lessig cites the end-to-end principle and the minimalism of the TCP/IP protocol as key drivers here.

The key consequence of point (2) is that any identification scheme online must be built on top of the Net, at the application layer. Point (2) in combination with point (1) is part of what makes identification and privacy issues so central to the Internet and Internet policy.
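The protocol itself makes this point vividly: an IPv4 header carries addresses and routing bookkeeping, but no field that names a person or account. A minimal sketch, packing a bare 20-byte header with Python's standard library (the addresses are illustrative):

```python
import socket
import struct

# A minimal 20-byte IPv4 header (no options), packed field by field.
# Note what it contains: version, lengths, TTL, protocol, checksum, and
# source/destination *addresses* -- everything needed to move a packet,
# and nothing that identifies a user. (Addresses here are illustrative.)
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,                       # version 4, header length = 5 words
    0,                                  # type of service
    20,                                 # total length (header only)
    0,                                  # identification (fragmentation, not identity)
    0,                                  # flags + fragment offset
    64,                                 # time to live
    6,                                  # protocol (6 = TCP)
    0,                                  # checksum (left zero in this sketch)
    socket.inet_aton("10.0.0.1"),       # source address
    socket.inet_aton("93.184.216.34"),  # destination address
)

fields = struct.unpack("!BBHHHBBH4s4s", header)
print(len(header), socket.inet_ntoa(fields[8]))  # 20 10.0.0.1
```

Who sits behind "10.0.0.1" is simply not the network layer's concern, which is why identification has to be layered on above.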

Lessig goes on to outline the ways in which the Net could be (and has been) redesigned to diminish anonymity (i.e. make it easier to determine who is doing what and where online) and thereby be more regulable. In chapter 4 of Code 2.0, the emphasis is especially on the role of commerce as an independent driver of identification online (with consequent knock-on effects for regulation). He focuses mainly on technological / architectural changes that have this character. For example, Lessig discusses the HTTP “cookie”, developed (in his telling) for the pedestrian purpose of allowing commerce websites to remember basic facts about “state” – e.g. did this browser already buy product X or not? But the second-order consequence of the cookie (and other related technologies) is to make identity broadly more traceable online.
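The cookie mechanism Lessig describes is easy to make concrete. A minimal sketch using Python's standard library (the cookie name and value are my own illustration, not Lessig's):

```python
from http.cookies import SimpleCookie

# Server side: attach a small piece of "state" to the response so the
# browser will echo it back on later requests.
response_cookie = SimpleCookie()
response_cookie["cart"] = "product_x"
set_cookie_header = response_cookie.output()
print(set_cookie_header)  # Set-Cookie: cart=product_x

# Browser side: on the next request the browser sends the cookie back,
# letting the site "remember" this browser across visits.
request_cookie = SimpleCookie("cart=product_x")
remembered = request_cookie["cart"].value
print(remembered)  # product_x
```

The mechanism is pedestrian, exactly as Lessig says: a key-value pair echoed back and forth. The traceability comes from what sites choose to store in it.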

Of course, Lessig is right that commerce has played a major role in making the Net more “identifiable” / less anonymous (for lack of a better term). This is a key theme in the Tolentino essay I cited last time. However, interestingly, when compared with Lessig’s emphasis on commerce-driven infrastructural changes, it seems to me that a lot of the commerce-driven identification on the web has actually happened at the application level, and relies more on economic than technological forces to spur identification.

Facebook and other social media platforms, for example, have made the web more identifiable not (only) through technological tracking, but by convincing / coercing / compelling (depending on your persuasion) users to opt into sharing more personal information in the form of Facebook profiles/posts/etc. in exchange for a social product. Likewise, I think of platform-designed review systems. These systems also aid “identity” in Lessig’s sense of creating clarity about who is doing what and when. Users are again willing to opt in through an understanding that these systems are tools for building trust between strangers. Platform-designed identity-verification schemes (like Airbnb’s verified ID scheme) work the same way, if more explicitly.

Whatever you think about the use of “opt in” language here, it is beside the point. Whether user engagement in these systems is economically voluntary or economically compelled (e.g. because of monopolistic platform dominance), the point is that these are all economically-motivated systems that operate at the application level rather than at the technical/infrastructural level (as in Lessig’s examples).

That said, the distinction between degrees of “compulsion” is relevant in thinking about different modes of governance – a point that echoes in my blockchain conversations. Blockchain seems compelling to many folks in part because it has strict / inviolable technical properties that guarantee certain outcomes (e.g. your bitcoin cannot be inflated away except on the pre-determined schedule; no one can “steal” the bitcoin from your wallet unless they have your private key). Many of these outcomes are also ensured in non-blockchain-based financial systems, albeit through alternative modes of (less “compulsive”?) governance. For example, macroeconomic and institutional incentives make it so that the US government has no interest in inflating away the value of your dollars. That would be bad for the country and the government. Likewise, your bank has strong economic (and legal) incentives to protect your savings and not e.g. steal them. Obviously, if it did not do these things, no one would want to save money with it, and it would probably face legal trouble. Thinking about preferences and differences across varying modes of governance is something that I want to do more of.
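The “pre-determined schedule” claim is checkable arithmetic: Bitcoin’s block subsidy starts at 50 BTC and halves every 210,000 blocks, so total issuance is a geometric series converging to just under 21 million coins. A rough sketch (ignoring the satoshi-level integer rounding the real protocol performs):

```python
# Bitcoin's issuance schedule: the block subsidy starts at 50 BTC and
# halves every 210,000 blocks. Summing the series bounds total supply.
BLOCKS_PER_HALVING = 210_000
subsidy = 50.0
total = 0.0
while subsidy >= 1e-8:  # one satoshi; below this the subsidy is zero
    total += BLOCKS_PER_HALVING * subsidy
    subsidy /= 2
print(total)  # a little under 21,000,000
```

That cap is enforced by the protocol itself rather than by any institution’s incentives, which is exactly the contrast with the dollar case above.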