Content and Networks

Josh Dzieza wrote an interesting profile of The Awl recently. The piece turns on The Awl’s success despite its size and anachronistic approach: it is small and actively resists growth, and its content is quirky and topically disparate. Yet the site has generated a dedicated following of smart, media-savvy readers, and has served as the launchpad for a number of successful journalists.

The Awl appears to be an anomaly of the modern internet and, Dzieza notes, nothing highlights that point more than one of its own articles. “The Next Internet is TV” appeared in February of this year and laid out a bleak future for digital content. In it, John Herrman argues that in order to survive, digital publishers will have to optimize their content to gain favor with curation algorithms used by websites with large networks (e.g. Facebook). Herrman shows that publishers have already made concessions. Take, for instance, Buzzfeed [1] and The Daily Beast. The strategies of both sites — from subject matter to copy to titles — pay close attention to social’s algorithmic distribution channels.

The fear for Herrman is that publishers will eventually fall into a pattern of adapting rather than innovating, creating content that is algorithmically pleasing but intellectually derivative. Herrman lays out the dynamic behind his angst:

“The publication industry is now in a competition with social platform companies, much in the way programming companies were once in competition with cable operators.”

This analogy is useful, but not without fault. It is strong with respect to news, ‘trending’ topics, and the like, but weak with respect to long-form and niche content. This is because the programming/cable battle was a war for access, while the publication/social-platform battle is a war for visibility. Content sits along a spectrum of time-sensitivity, but treating it as binary (Fast and Slow) makes for a useful thought experiment. Fast content is interesting to an almost-universal audience for only a short period of time, so its relevance is confined to a small window and visibility becomes exceedingly important. But such content — often news, though sometimes questionably so — has been commoditized, and requires little non-technical skill to create. Slow content, on the other hand, is more often differentiated by quality, and its relevance is less vulnerable to algorithmic ranking or temporal viewership. [2]
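To make the Fast/Slow distinction concrete, here is a minimal sketch of a time-decayed ranking score of the sort feed algorithms are often described as using. The scoring form and the half-life numbers are illustrative assumptions, not anything taken from Herrman’s article:

    def ranking_score(quality: float, age_hours: float, half_life_hours: float) -> float:
        # Illustrative feed score: intrinsic quality times an exponential time decay.
        decay = 0.5 ** (age_hours / half_life_hours)
        return quality * decay

    # "Fast" content: broad appeal, but assumed to lose half its relevance every 6 hours.
    breaking_news = ranking_score(quality=0.9, age_hours=24, half_life_hours=6)     # ~0.06

    # "Slow" content: narrower appeal, but relevance barely decays over a month.
    longform_essay = ranking_score(quality=0.7, age_hours=24, half_life_hours=720)  # ~0.68

    print(breaking_news, longform_essay)

After a single day the breaking-news item has lost nearly all of its score while the essay has barely moved, which is why visibility at the moment of publication matters so much more for Fast content than for Slow.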

Perhaps most importantly, the algorithmic overlords have a clear incentive to make sure that Slow content finds its readers. In the early days of Google, internet search was rather unsophisticated and results were vulnerable to manipulation. Content was sometimes [3] designed specifically to game search results rather than provide useful information. Google understood that poor search results were undesirable to its users and made improvements to its indexing and search algorithms. The same dynamic that pushed Google to seek better content will force networks like Facebook to do the same. [4]

The above does not mean that the media business is free of challenges: it is among the industries most deeply affected by the socially connected information-technology revolution. Adapting to life on the web will probably lead to some uncomfortable content and distribution arrangements, and there is a real risk of algorithms displaying content that merely matches a user’s existing biases. But these concerns run up against the platforms’ own interest in connecting their users with a diverse set of content, and there is no obvious reason to conclude that they will necessarily lead to a degradation in quality.

1. Felix Salmon published an incredible interview with Buzzfeed founder Jonah Peretti in June of last year. Before reading it, this blog had dismissed Buzzfeed as an outlet that sprayed the Web with derivative listicles, hoping some would stick. In fact, Buzzfeed’s strategy is a thoughtful, experimental, and iterative process. Peretti has tremendous insight into how content is created and shared in the age of social, and the Salmon interview is probably the best representation of that.

2. The Monday Note recently noted how difficult it is for algorithms to find meaning outside of numbers. Notably, automated translation services are awful, and Apple’s music service will make use of human “DJs” in order to encourage new music discovery among users.

3. Often?

4. Luckily, algorithms are pretty bad at finding meaning (see footnote 2!). It should be noted that Google’s feats of web crawling have not happened without human input: users constantly give feedback via clicks, bounces, etc. Each individual action may not contain much information (a data point being a click or no click, given some search input and results), but aggregated across hundreds of millions of users and billions upon billions of actions, that feedback adds up to real information (a rough sketch of this kind of aggregation follows these notes).

There also remains a certain celebrity to the human curation of big-ticket journalistic outlets. The reason may be that human curation is agnostic to individual preferences, even if it is sensitive to aggregate readership views. Ironically, then, human curation may be the biggest threat to toothless content.
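To illustrate the aggregation point in note 4, here is a minimal sketch of how individually uninformative click/no-click events can be pooled into a per-result click-through rate that works as a relevance signal. The log format, queries, and URLs are hypothetical, not a description of any real system:

    from collections import defaultdict

    # Hypothetical search log: (query, result_url, clicked) tuples.
    log = [
        ("awl profile", "theawl.com/about", True),
        ("awl profile", "theawl.com/about", False),
        ("awl profile", "example.com/spam", False),
        ("awl profile", "theawl.com/about", True),
        ("awl profile", "example.com/spam", False),
    ]

    # Aggregate clicks and impressions per (query, result) pair.
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    for query, url, clicked in log:
        impressions[(query, url)] += 1
        clicks[(query, url)] += int(clicked)

    # A single row says almost nothing; the aggregated rate is a usable signal.
    for key in impressions:
        ctr = clicks[key] / impressions[key]
        print(key, round(ctr, 2))

With five rows the rates are still noisy, but multiplied across hundreds of millions of users the same arithmetic separates pages readers actually want from pages built to game the rankings.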