Recently, I have been reading and learning more about the Plurality Institute and their vision for what they call “plural technologies”, as outlined in their mission statement here and also in the emerging plurality book. One line that has been turning around in my head is this one (emphasis and parenthetical mine):

They (plural technologies) affirm that the purpose of technology is to amplify the power and value of human cognitive diversity, not resolve it into singularity.

I think that this line jumps out at me as someone who has long been thinking and wondering about quantitative technologies of knowledge creation – statistics, econometrics, “data”, experiments, and the like. What does it look like to work with these tools in a pluralistic, non-hierarchical, democratic sort of way? How might they be revised, either in substance or in process, to better align with these sorts of values?

The challenge is that, in some sense, these quantitative tools of knowledge creation are exactly about resolving complexity down into tractable (to the expert, to the state) singularities – constructing sharp categories of sameness and difference, taking averages or other “summary statistics”, and so on. Statistics as a field, of course, has its origins in public administration, and in many cases the compression performed by statistical analysis is still exactly for the sake of governance. For example, consider the centrality of the “average treatment effect” (ATE) in policy and medical evaluations, where it is then used to bludgeon all who might be opposed. Says the expert: “you cannot deny that my intervention works – just look at my RCT and my statistically-significant (average) treatment effect”.
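To make that compression concrete, here is a toy sketch of my own (the numbers and groups are invented for illustration, not drawn from any study): the ATE collapses everyone's responses into one number, and that single summary can read as a clear win even when an entire subgroup is harmed.

```python
def ate(effects):
    """Average treatment effect: one number summarizing many individual responses."""
    return sum(effects) / len(effects)

# Hypothetical individual treatment effects for two equally sized communities.
group_a = [3.0] * 50   # strongly helped by the intervention
group_b = [-1.0] * 50  # modestly harmed by it

overall = ate(group_a + group_b)
print(f"ATE across everyone: {overall:+.2f}")    # positive: "the intervention works"
print(f"ATE within group B:  {ate(group_b):+.2f}")  # negative: invisible in the headline number
```

The headline ATE here is +1.00, while group B's effect is −1.00; the average erases exactly the kind of difference a pluralistic analysis would want to surface.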

So what is the path forward? A few perspectives come to mind. One has to do with some of the modes of digital public reason outlined in this post. For example, I think of the CAT Lab, which does randomized experiments and “citizen science” in collaboration with digital communities. This is a project I have admired for many reasons for a long while. What makes it an example of “plural” social science in particular? I think some of the key features are its tight integration with–and elevation of the epistemic modes of–the communities that it conducts research with. For example, in experiments with Reddit communities, policy interventions and outcome measures are co-designed with community moderators. Results are presented in public, and shared directly with affected communities in comprehensible terms, where they can be the subject of deliberation. There is a flattening of the hierarchy of expertise here, and an integration with existing modes of quasi-democratic governance (i.e. community moderation), that I think resonates with the “plural technology” perspective.

Another reference that came to mind is this chapter from D’Ignazio & Klein’s Data Feminism. The chapter is explicitly about “embracing pluralism” in data science work. Here is how the authors summarize what this means at the outset:

Embracing pluralism in data science means valuing many perspectives and voices and doing so at all stages of the process—from collection to cleaning to analysis to communication. It also means attending to the ways in which data science methods can inadvertently work to suppress those voices in the service of clarity, cleanliness, and control. Many of our received ideas about data science work against pluralistic meaning-making processes, and one goal of data feminism is to change that.

And in the conclusion of the chapter:

The fifth principle of data feminism is to embrace pluralism in the whole process of working with data, from collection to analysis to communication to decision-making. As we describe in this chapter, data work carries a high risk of enacting what Gayatri Spivak has termed epistemic violence, particularly when the people doing the work are strangers in the dataset, when they are one or more steps removed from the local context of the data, and when they view themselves (or are viewed by society) as unicorns, rock stars, and wizards. Embracing pluralism is a feminist strategy for mitigating this risk. It allows both time and space for a range of participants to contribute their knowledge to a data project and to do so at all stages of that project. In contrast to an underspecified data for good model, embracing pluralism offers a way to work toward a model of data for co-liberation. This means transferring knowledge from experts to communities and explicitly cultivating community solidarity in data work, as we see in the case of the Anti-Eviction Mapping Project. Moreover, embracing pluralism is not incompatible with “bigness” or scale; the EJ Atlas shows how pluralistic processes can be used to assemble a global archive and support empirical work in the service of justice. A single data scientist wizard will never defeat the matrix of domination alone, no matter how powerful their spells might be. But a well-designed, data-driven, participatory process, one that centers the standpoints of those most marginalized, empowers project participants, and builds new relationships across lines of social difference—well, that might just have a chance.

In short, the ideas here resonate strongly with the themes discussed above in relation to CAT Lab. Some key ideas are the avoidance of so-called “epistemic violence”, and the inclusion of varied perspectives at all stages of the analysis process.

An interesting prompt for further consideration could be: what other tools and technologies might be helpful in supporting this kind of plural social scientific work? Are there ways, for example, to make it easier for researchers to undertake research in this mode? Of course, CAT Lab has built a set of tools along these lines – e.g. infrastructure for collaborative experimentation, especially on Reddit. What else could believers in plurality and data feminism build?