Deep Bubbles: How Personalized Media Fragments Reality

As someone interested in healthy political discourse, I’m glad to see that social media “filter bubbles” and “echo chambers” have become a focus of empirical research.  In a recent blog post, Cristian Vaccari summarizes this research and presents some findings of his own.  Drawing on online surveys of internet users in Germany, Italy and the UK, he calculates how often people agree and disagree with political statements encountered online, offline and via the mass media.  He finds that social media users encounter 1) more opposing than supportive opinions online, and 2) more opposing opinions online than in face-to-face political conversations.  From this, he concludes that ideological filter bubbles aren’t as prevalent as media accounts have indicated.

Now, I don’t dispute Vaccari’s findings: I suspect he successfully measured what he set out to measure.  I’d like to argue, though, that such findings alone don’t indicate that media personalization isn’t a problem.  Instead, they suggest that we need a more nuanced conception of what the filter bubble effect might involve.  In particular, we need to realize that filter bubbles aren’t just (or even primarily) about the political opinions one encounters.  Instead, they’re about the fragmentation of the background knowledge out of which we form our political opinions.

I’ll use an example to explain what I mean.  In an effort to escape my particular left-leaning bubble, over the past few months I’ve been reading the Fox News website.  Of course, in doing so, I encounter political opinions with which I disagree.  For example, there might be an op-ed piece arguing for increased military spending.  This policy position directly opposes my own view.  It is thus the type of encounter Vaccari set out to measure.

Now, while such direct opposing claims exist on the Fox News website, the overwhelming majority of information on the site does not consist of such claims.  Instead, it consists of simple “factual” reporting.  And this is where things get strange.  In short, the world presented by Fox News, I’ve found, is completely different from the one presented by CNN or the New York Times.  Whereas the latter will be covering perceived turmoil in the Trump White House, for example, the former will feature three articles in a row about crimes committed by illegal aliens.  Crucially, neither set of stories is “fake news” or “political opinion” of the kind Vaccari attended to.  Instead, they are the raw materials out of which we form our political opinions.  Exposed to talk of White House turmoil (background), you may conclude that Trump should not be president (opinion).  Exposed to talk of illegal immigrants and crime (background), you may conclude that the US needs a tougher immigration policy (opinion).

So different news outlets put different spins on the day’s news.  Of course.  But how does this relate to social media?  The connection, I would suggest, lies in the logic which animates both Fox News and platforms like Facebook and Twitter.  In short, Fox News is perhaps the greatest manifestation at the mass-media level of what we can call the “consumer impulse.”  Fox is particularly adept at providing its viewers with the kind of content that will drive repeated engagement—at “giving customers what they want,” in other words.  And what they want, like most humans, is information which confirms (or at least does not contradict) their existing beliefs and inclinations.  Hence, a non-stop stream of stories about treacherous immigrants and “good guys with guns.”

Moving from mass media to social media, we can see how a similar dynamic exists when information is shared within social networks.  If you are the type of person who often reads, likes or comments upon articles about crimes committed by illegal immigrants, social media platforms—in line with your wishes as a consumer—will show you more information about that topic.  They will also suggest that you connect with people who share similar engagement patterns.  Information will be exchanged about your topic of shared interest.  The beliefs and inclinations you started with will thus be confirmed and intensified.  Crucially, this often occurs without the exchange of overt political messages.  It’s simply like-minded people sharing information about “how the world is.”
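The feedback loop described above can be sketched as a toy simulation.  To be clear, this is purely an illustration under assumed parameters—a hypothetical two-topic feed, not any platform’s actual recommendation algorithm; the function names and numbers are mine, not drawn from any real system.

```python
import random

def simulate_feed(preference, rounds=50, seed=0):
    """Toy model of engagement-driven filtering (illustrative only).

    The user has a fixed 'preference': the probability of engaging
    with an item on topic A (vs. topic B).  The feed starts balanced,
    then reweights toward whichever topic has earned more engagement.
    Returns the feed's share of topic A at each round.
    """
    rng = random.Random(seed)
    weights = {"A": 1.0, "B": 1.0}  # the feed starts out balanced
    shares = []
    for _ in range(rounds):
        total = weights["A"] + weights["B"]
        shares.append(weights["A"] / total)
        # The feed serves a topic in proportion to current weights.
        topic = "A" if rng.random() < weights["A"] / total else "B"
        # The user engages more readily with their preferred topic.
        engage_prob = preference if topic == "A" else 1 - preference
        if rng.random() < engage_prob:
            weights[topic] += 1.0  # engagement teaches the feed
    return shares

# A user only mildly inclined toward topic A still tends to see
# the feed drift away from the balanced starting point over time.
shares = simulate_feed(preference=0.7)
```

The point of the sketch is that no step requires an overt opinion: the user merely engages with stories they find salient, and the weighting mechanism does the rest.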

Let’s look at an example of the type of information exchange I’m talking about.  Consider the following:

[Embedded image: “Facebook on the Budget”]

Though from an obviously politically invested source, this message, on its face, contains very little that could be classified as a political opinion (ICE likely wouldn’t call their database “invasive,” but may very well agree with the facts of the case).  If I like, share or comment upon this post, I will likely be exposed to more content which emphasizes the perils of the surveillance state.  If my followers “like” my shared content, they will, in turn, also be exposed to additional anti-surveillance content.  At the same time, I will be encouraged to share more such content (because I want social approval in the form of likes).  The end result is that all parties in a network—sorted by their original consumer preference (e.g., an inclination to worry about surveillance)—will be exposed to an increasingly intense stream of information about the dangers of surveillance.  This will, in turn, lead to the formation and/or sedimentation of anti-surveillance beliefs.

Keep in mind that there’s no “fake news” or overt political opinion involved in the above process.  There really are, out in the world, problems associated with state surveillance.  Because of the consumer impulse, though, the narrative which dominates within any network will tend to highlight only one part of a much more complex story.  This is because those who prefer other narratives (that state surveillance is necessary, say) will, through the same process of sharing and liking, create their own networks around the same topic (or, as is perhaps more common, ignore the topic altogether).  As a result of this segregation process, certain problems and issues will loom large in the imaginations of certain segments of the population, and barely register in others.  The result is widely divergent ideas about the state of the world and, in turn, about which political opinions are valid.

Two additional features of this sorting process are of note.  First is the importance of repetition.  Exposure to one or even a few articles about a topic doesn’t necessarily shape your opinion; it is the relentless drumbeat of similar missives that does.  Social media is particularly insidious because of its ability to deliver many different messages, over a long period of time, tilted in one direction.  Second, the exposure process is highly automated.  In a high-stimulation environment like a Facebook feed, consumers can’t consciously register most of what they see.  As such, unless we’re particularly committed to a topic, articles and comments about surveillance or criminal immigrants structure our thinking without our even knowing.  Certain views suddenly just appear obvious or “commonsense.”

To summarize, the filter bubble concept must be understood to include not just the expression of overt political opinions, but the background information out of which opinions are formed.  Many different, yet not necessarily contradictory, narratives are in circulation.  Media personalization, which reaches its zenith in the form of social media, allows us to choose the narratives we like.  It then works to reinforce our choices.  Any understanding of “filter bubbles” or “echo chambers” must take this dynamic into account.