These are some open-ended thoughts on algorithmic curation on social content platforms.

Rise of The Algorithmic Feed [1]

“Following” or “subscribing” to someone on the internet used to mean something.

Back in the early days of Web 2.0 social media, feeds were not as algorithmically curated as they are today. “Following” a creator on a platform like YouTube meant that their posts would reliably show up in your feed.

For content consumers (readers, viewers), the “feed” was something closer to a chronological timeline of the posts or videos of the people you followed.

This world was good for creators – and especially established creators – in at least some ways. It meant that as a creator, you could build a stable community around your content. As you grew and gained more followers, you knew that there was a consistent audience who would consume – or at least have a chance to consume – the content that you shared. In economic terms, your following – along with the chronological feed – was a kind of “moat” that ensured consistent viewership and revenue, and all the psychological and financial security – and creative freedom – that came with that.

However, this world began to change sometime in the early 2010s (or perhaps even earlier). Content platforms generally make money from advertising to content consumers. Because of this, they care first and foremost about engaging content consumers – keeping them on the platform, getting them to consume more, and so on.

As machine learning technology improved, platforms got better at predicting which specific pieces of content would “engage” consumers. This meant that they could increase engagement by moving away from the chronological feed toward something more curated, rooted in algorithmic prediction.
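
To make the shift concrete, here is a minimal sketch (in Python, with a made-up predicted_engagement field standing in for whatever proxy a platform actually models) of the two ranking rules:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Post:
    creator: str
    posted_at: datetime
    predicted_engagement: float  # hypothetical model output, e.g. predicted watch time

def chronological_feed(posts: List[Post]) -> List[Post]:
    """Old-style feed: the posts you subscribed to, newest first."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def algorithmic_feed(posts: List[Post]) -> List[Post]:
    """Engagement-optimized feed: rank by the model's predicted engagement."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

The other change, of course, is the candidate pool: the chronological feed draws only from the creators you follow, while the algorithmic one can draw from the entire platform.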

Over time, this became integrated into platform product design. While many platforms retained a “Subscriptions” or “Following” feed to some extent (showing only content from creators you follow), these feeds were de-emphasized – relegated to sidebars or otherwise hidden away. Algorithmically-curated “For You” or “Recommended” feeds became more central, often promoted to “default” status. And users spent more time on them – after all, the machine learning “worked”: it really could predict what viewers would spend more time engaging with. The current state of the world is reflected in both YouTube and TikTok, where algorithmic feeds are central.

Consequences of The Algorithmic Feed

The rise of the algorithmic feed has had a wide range of consequences.

One consequence is that the act of “following” a creator began to mean less. Platforms began to treat “following” as a signal of a viewer’s interests, rather than a dispositive mandate about what type of content to deliver to them.
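
One toy way to picture this (with weights and field names entirely made up) is that the follow relationship stops being a filter on what can appear in your feed and becomes just one weighted signal in a relevance score:

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class Post:
    creator: str
    predicted_engagement: float  # hypothetical model output

@dataclass
class Viewer:
    follows: Set[str] = field(default_factory=set)

# Made-up weights for illustration: following nudges a post's score upward,
# but no longer determines whether it is shown at all.
WEIGHTS = {"predicted_engagement": 1.0, "follows_creator": 0.3}

def feed_score(post: Post, viewer: Viewer) -> float:
    follows = 1.0 if post.creator in viewer.follows else 0.0
    return (WEIGHTS["predicted_engagement"] * post.predicted_engagement
            + WEIGHTS["follows_creator"] * follows)
```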

For creators – and especially established creators – the shift has mostly brought difficulties. The weakened “following” relationship makes it harder to build a stable audience – and to earn the reliable income that comes with it.

From the perspective of an economist, what has happened is that the user-generated content market has become “more competitive” in a sense. In the algorithmically-curated world, every piece of content a creator produces must compete for every view with every other piece of content created by anyone, anywhere on the platform. The “moat” of a body of followers who will reliably be served your content is gone, and there is more uncertainty about the success that any particular piece of content will achieve.

It’s not all bad for creators. The main segment of creators that benefits from the algorithmic feed is new creators – those who don’t yet have a following. In the algorithmic world, if you make “engaging” content (or content that the algorithm thinks will be engaging), you will be delivered views, regardless of whether you already have a big following. Relatedly, viewers are more willing to “follow” creators, since the following relationship has less bearing on what their feed looks like; in this sense it is easier for creators to “grow” a following on TikTok, for example, though that following means less than it once did (per above).

What about the effect on content consumers? Competition is supposed to be good for consumers, right? And the algorithmic curation is being done in the interest of engaging them, right?

Well, in some ways the algorithmic feed is good for consumers. TikTok is hyper-adaptive to my interests – it really is great at finding and surfacing content that is interesting and relevant to me.

But it’s also complicated. As others have discussed, the engagement measures that platforms are able to optimize against (and the basis on which creators are then “competing”) are limited, and don’t fully reflect the interests of consumers. Consequently, the content that results from this competition is likewise limited in how well it serves consumers (more on this below).

And further, algorithmic prediction has, perhaps, become “too good”. Platforms like TikTok are addictive for viewers. Like cigarettes or sugary drinks, algorithmic content platforms increasingly exploit the behavioral limitations of consumers, including their limited self-control. Even if I know it would be better to put down my phone and go to bed, I can’t. I am indeed “engaged” in the sense of viewing content, but past a point it is to my detriment.

Finally, what about the consequences of all this for the type of content that is produced online? For our digital cultural production?

There are lots of answers here. According to Jack Conte & Patreon, the algorithmic feed forces creators to produce “content” rather than “art” – to focus their efforts on pleasing the algorithm and its invisible preferences (which might change at the whim of an engineering team in San Francisco). We all know some of the consequences of this: clickbait, the MrBeast-ification of YouTube, the all-important “hook” on TikTok, and so on. These are realities of the modern internet that result from how user behavior collectively interacts with algorithmic curation. But they are not necessarily features of our digital cultural products that many of us would choose in a vacuum.

I think one way to conceptualize this from a more economic perspective is to note that the algorithmic curation systems created by platforms are not manipulation-proof: there are relatively cheap (to creators) ways to move up the algorithmically-curated quality rankings without actually making the (far more costly) investment in improving quality. For example, I can just change the title of my video to make it more clickbait-y, or open with a scintillating hook that increases view time (even if it annoys viewers in the long run) and moves me up the platform rankings. Creators facing these incentives rationally invest in these tactics.
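
As a toy illustration (all numbers invented), suppose the platform ranks on a proxy like click-through rate times expected watch time per click:

```python
# Two hypothetical videos competing under a proxy-based ranking rule.
videos = {
    "careful, higher-quality video": {"ctr": 0.04, "watch_secs": 240},
    "clickbait title + hook":        {"ctr": 0.09, "watch_secs": 150},
}

def proxy_score(v):
    # The platform only "sees" the proxy, not long-run viewer satisfaction.
    return v["ctr"] * v["watch_secs"]

for name, v in sorted(videos.items(), key=lambda kv: proxy_score(kv[1]), reverse=True):
    print(f"{name}: score = {proxy_score(v):.1f}")
# clickbait title + hook: score = 13.5
# careful, higher-quality video: score = 9.6
```

The clickbait version wins the ranking even if viewers like it less afterwards, which is exactly why these cheap tactics are a rational investment.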

There are surely more subtle consequences as well. Among other things, the algorithmic fracturing of the viewership communities that once formed more stably around creators affects the type of content that creators produce. For example, on TikTok it is not uncommon to see a creator find a video formula that “works” on the platform and proceed to make video after video with minor variations on the formula, since each one continues to get views. This kind of repetition is more encouraged in an algorithmic world, where there is a much broader swath of potential viewers and less of a stable audience that will get bored with it. Likewise, the viewer-side costs of “clickbait” and other manipulation tactics seem to be less internalized by creators in an algorithmically-curated world, facilitating their proliferation.

There are potentially positive effects on content quality as well. In the algorithmic world, content production is “lower stakes” in a sense. If a piece of content turns out poorly, or is only interesting to a small segment of viewers, that is “okay” from the creator’s perspective: the algorithmic platform will take care of serving it to the viewers who are interested and hiding it from the rest. The creator runs less risk of “polluting” their followers’ feeds and potentially alienating them or causing them to unsubscribe. These lower stakes suggest that creators are free to produce a wider range of content. This could be good for the overall quality of content, since the quality of creative products is difficult to predict ex ante; encouraging creators to throw more spaghetti at the wall may have benefits.
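
A back-of-the-envelope version of this argument, as a toy expected-value calculation with invented numbers (not a claim about real payoffs):

```python
# Toy model: each experimental video is a "hit" with small probability;
# misses get few views. All numbers are invented for illustration.
hit_prob, hit_views, miss_views = 0.05, 1_000_000, 2_000

# Algorithmic world: a miss is quietly shown to almost no one, so it costs ~nothing.
ev_algorithmic = hit_prob * hit_views + (1 - hit_prob) * miss_views

# Follower-feed world: a miss also lands in subscribers' feeds and risks alienating
# them, modeled here as a flat per-miss penalty.
alienation_cost = 20_000
ev_follower_feed = hit_prob * hit_views + (1 - hit_prob) * (miss_views - alienation_cost)

print(ev_algorithmic, ev_follower_feed)  # 51900.0 32900.0
```

With the downside of a miss mostly removed, a riskier, more experimental video stays worthwhile in expectation; this is the sense in which the algorithmic feed lowers the stakes.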

Conclusion

Where does this all leave us? Are we collectively better off with the algorithmic feed? It’s hard to say. Per above, there are good things and bad things, and they sometimes cut in different directions for different stakeholders.

On at least some of the negative consequences outlined above, there are clear roles for policy. For example, just as we might regulate cigarettes or ban sugary drinks, we can likewise take action against algorithmically-amplified digital addiction.

However, when it comes to questions about the type and quality of content that the algorithmic feed induces, it is harder to say. If Jack Conte is right that creators strongly prefer tighter connections with their audience and the scope to make “art”, then perhaps Patreon’s new products will win out. But of course, the challenge is the viewers, who may be stuck on the ever-engaging TikTok feed.

I suppose that we will have to see. If you have thoughts, email me at fossj117 AT gmail DOT com.

Footnotes

  1. This post reflects my working theory (i.e. hypothesis) about a broad trend that we have seen in content creation platforms, and how it connects to a range of other phenomena in the space. There is a lot more work that could be done (and that I would like to do) to test/validate/clarify this theory and make it more precise. This includes both theoretical economic modeling, and empirical analysis (both quantitative and qualitative).