Deep Bubbles: How Personalized Media Fragments Reality

As someone interested in healthy political discourse, I’m glad to see that social media “filter bubbles” and “echo chambers” have become a focus of empirical research.  In a recent blog post, Cristian Vaccari summarizes this research and presents some findings of his own.  Drawing on online surveys of internet users in Germany, Italy and the UK, he calculates how often people agree and disagree with political statements encountered online, offline and via the mass media.  He finds that social media users 1) encounter more opposing than supportive opinions online, and 2) encounter more opposing opinions online than in face-to-face political conversations.  From this, he concludes that ideological filter bubbles aren’t as prevalent as media accounts have indicated.

Now, I don’t dispute Vaccari’s findings: I suspect he successfully measured what he set out to measure.  I’d like to argue, though, that such findings alone don’t indicate that media personalization isn’t a problem.  Instead, they suggest that we need a more nuanced conception of what the filter bubble effect might involve.  In particular, we need to realize that filter bubbles aren’t just (or even primarily) about the political opinions one encounters.  Instead, they’re about the fragmentation of the background knowledge out of which we form our political opinions.

I’ll use an example to explain what I mean.  In an effort to escape my particular left-leaning bubble, over the past few months I’ve been reading the Fox News website.  Of course, in doing so, I encounter political opinions with which I disagree.  For example, there might be an op-ed piece arguing for increased military spending.  This policy position is in direct opposition to what I believe should occur.  It is thus the type of encounter Vaccari set out to measure.

Now, while such direct opposing claims exist on the Fox News website, the overwhelming majority of information on the site does not consist of such claims.  Instead, it consists of simple “factual” reporting.  And this is where things get strange.  In short, the world presented by Fox News, I’ve found, is completely different from the one presented by CNN or the New York Times.  Whereas the latter will be covering perceived turmoil in the Trump White House, for example, the former will feature three articles in a row about crimes committed by illegal aliens.  Crucially, neither set of stories is “fake news” nor “political opinions” of the kind Vaccari attended to.  Instead, they are the raw materials out of which we form our political opinions.  Exposed to talk of White House turmoil (background), you may conclude that Trump should not be president (opinion).  Exposed to talk of illegal immigrants and crime (background), you may conclude that the US needs a tougher immigrant policy (opinion).

So different news outlets put different spins on the day’s news.  Of course.  But how does this relate to social media?  The connection, I would suggest, lies in the logic which animates both Fox News and Facebook, Twitter, etc.  In short, Fox News is perhaps the greatest mass-media manifestation of what we can call the “consumer impulse.”  Fox is particularly adept at providing its viewers with the kind of content that will drive repeated engagement, at “giving customers what they want,” in other words.  And what they want, like most humans, is information which confirms (or at least does not contradict) their existing beliefs and inclinations.  Hence, a non-stop stream of stories about treacherous immigrants and “good guys with guns.”

Moving from mass media to social media, we can see how a similar dynamic exists when information is shared within social networks.  If you are the type of person who often reads, likes or comments upon articles about crimes committed by illegal immigrants, social media platforms—in line with your wishes as a consumer—will show you more information about that topic.  They will also suggest that you connect with people who share similar engagement patterns.  Information will be exchanged about your topic of shared interest.  The beliefs and inclinations you started with will thus be confirmed and intensified.  Crucially, this often occurs without the exchange of overt political messages.  It’s simply like-minded people sharing information about “how the world is.”
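The feedback loop described above can be sketched as a toy simulation.  To be clear, everything here is a hypothetical assumption for illustration—the topic list, the engagement probabilities, and the weighting scheme are made up, and no real platform’s ranking algorithm is this simple—but the sketch shows how a slight initial inclination, fed back through engagement-weighted recommendation, comes to dominate a feed:

```python
import random

# Hypothetical topic pool (an illustrative assumption, not real platform data).
TOPICS = ["immigration", "surveillance", "economy", "sports"]

def run_feed(engagement_bias, rounds=1000, seed=42):
    """Toy engagement-driven feed: items are served in proportion to past
    engagement, and engaging with a topic raises its future weight."""
    rng = random.Random(seed)
    weights = {t: 1.0 for t in TOPICS}  # start with a uniform feed
    for _ in range(rounds):
        # The feed shows topics in proportion to accumulated engagement.
        topic = rng.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
        # The user engages more readily with topics matching prior inclinations
        # (default 10% engagement; biased topics engage more often).
        if rng.random() < engagement_bias.get(topic, 0.1):
            weights[topic] += 1.0  # engagement feeds back into future serving
    total = sum(weights.values())
    return {t: round(w / total, 2) for t, w in weights.items()}

# A mild initial interest in one topic (60% vs. 10% engagement probability)...
shares = run_feed({"immigration": 0.6})
# ...ends up dominating the user's exposure, without any overt opinion exchanged.
```

The point of the sketch is that no editorial decision is needed anywhere in the loop: a purely “consumer-serving” rule, iterated, is enough to tilt the background information a user sees.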

Let’s look at an example of the type of information exchange I’m talking about.  Consider the following:

[Image: Facebook on the Budget]

Though from an obviously politically-invested source, this message, on its face, contains very little that could be classified as a political opinion (ICE likely wouldn’t call their database “invasive,” but may very well agree with the facts of the case).  If I like, share or comment upon this post, I will likely be exposed to more content which emphasizes the perils of the surveillance state.  If my followers “like” my shared content, they will, in turn, also be exposed to additional anti-surveillance content.  At the same time, I will be encouraged to share more such content (because I want social approval in the form of likes).  The end result is that all parties in a network—sorted by their original consumer preference (e.g., an inclination to worry about surveillance)—will be exposed to an increasingly intense stream of information about the dangers of surveillance.  This will, in turn, lead to the formation and/or sedimentation of anti-surveillance beliefs.

Keep in mind that there’s no “fake news” or overt political opinions involved in the above process.  There really are, out in the world, problems associated with state surveillance.  Because of the consumer impulse, though, the narrative which dominates within any network will tend to highlight only one part of a much more complex story.  This is because those who prefer other narratives (that state surveillance is necessary, say) will, through the same process of sharing and liking, create their own networks around the same topic (or, as is perhaps more common, ignore the topic altogether).  As a result of this segregation process, certain problems and issues will loom large in the imaginations of certain segments of the population, and barely register in others.  The result is widely divergent ideas about the state of the world and, in turn, about which political opinions are valid.
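The structural side of this segregation—platforms suggesting connections to people with similar engagement patterns—can also be sketched in miniature.  Again, everything here is a hypothetical assumption for illustration (six users, hand-picked ties, and a made-up suggestion rule, not any platform’s actual friend-recommendation logic), but it shows how repeatedly swapping cross-stance ties for same-stance ones erodes the connections across which a second narrative could travel:

```python
import random

def rewire(stances, ties, rounds=200, seed=1):
    """Toy homophily model: each round, pick a tie between users with
    different stances and, if possible, replace it with a tie to a
    same-stance user the first endpoint isn't connected to yet."""
    rng = random.Random(seed)
    ties = set(ties)
    users = list(range(len(stances)))
    for _ in range(rounds):
        cross = [t for t in ties if stances[t[0]] != stances[t[1]]]
        if not cross:
            break  # network fully segregated
        a, b = rng.choice(cross)  # a cross-stance tie the platform "improves"
        # Candidate replacements: same stance as a, not already tied to a.
        same = [u for u in users if u not in (a, b)
                and stances[u] == stances[a]
                and tuple(sorted((a, u))) not in ties]
        if same:
            ties.remove((a, b))
            ties.add(tuple(sorted((a, rng.choice(same)))))
    return ties

# Six hypothetical users: three worried about surveillance (stance 0),
# three not (stance 1), starting with several cross-stance ties.
stances = [0, 0, 0, 1, 1, 1]
initial = {(0, 3), (1, 4), (2, 5), (0, 1), (3, 4)}
final = rewire(stances, initial)
cross_ties = [t for t in final if stances[t[0]] != stances[t[1]]]
```

After rewiring, almost no cross-stance ties survive; each cluster is left exchanging information—“how the world is”—mostly with itself.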

Two additional features of this sorting process are of note.  First is the importance of repetition.  Exposure to one or even a few articles about a topic doesn’t necessarily shape your opinion.  Instead, it is the relentless drumbeat of similar missives that does.  Social media is particularly insidious because of its ability to deliver many different messages, over a long period of time, tilted in one direction.  Second, the exposure process is highly automated.  In a high-stimulation environment like a Facebook page, consumers can’t consciously register most of what they see.  As such, unless we’re particularly committed to a topic, articles and comments about surveillance or criminal immigrants structure our thinking without our even knowing.  Certain views suddenly just appear obvious or “commonsense.”

To summarize, the filter bubble concept must be understood to include not just the expression of overt political opinions, but the background information out of which opinions are formed.  Many different, yet not necessarily contradictory narratives are in circulation.  Media personalization, which reaches its zenith in the form of social media, allows us to choose the narratives we like.  It then works to reinforce our choices.  Any understanding of “filter bubbles” or “echo chambers” must take this dynamic into account.

(Digital) Media Literacy: A Cognitive Approach

This semester I’m teaching a course on digital media literacy and as such, have been reading up on some of the foundational texts in the field.  One book I’ve found particularly informative is W. James Potter’s Theory of Media Literacy: A Cognitive Approach.  Here, Potter, author of a noted media literacy textbook, lays out the theoretical foundation for his views.  Though he deals primarily with “old media” (TV, radio, etc.), I think his ideas are quite relevant to digital culture.

Potter’s approach is shaped by cognitive psychology.  He starts from the assumption (reasonable in my opinion) that humans, by nature, seek to conserve mental energy.  This means that most of our interactions with media are automatic, unconscious and habituated.  In info-rich environments, he writes, “our minds stay on automatic pilot,” unconsciously screening out most stimuli (10).  Crucially, though, Potter believes that unconscious exposure can still be influential.  Even when we’re not actively paying attention, messages still get through.  “Over time,” he writes, “images, sounds, and ideas build up patterns in our subconscious and profoundly shape the way we think” (10).

Basically, Potter sees the human mind as a porous entity.  The discursive environment in which we move shapes us whether we like it or not.  To me, this idea rings true.  It explains, for example, the millions of dollars paid to get brand names on sports stadiums.  Per Potter, it’s not about conscious messaging.  PNC, for example, doesn’t want people actively thinking about financial services when they go to PNC Park.  Instead, they want mindless, habituated exposure to their trademark.  They want to enter the world of consumers via the side door, so to speak, so as not to deal with the trouble of actually proving their services are of value (which they would have to do if their claims were to be consciously considered).

So Potter’s cognitive approach explains the behavior of advertisers.  How might it relate to new media?  Potter writes that media businesses “do not want our attention as much as they want our exposure” (14).  Again, active attention would open media messages up to unwanted scrutiny.  Instead, media-producing businesses want consumers to engage with content mindlessly and habitually.  Does the same dynamic apply in regard to media in which content is user-created?  This is a difficult question.  Certainly, social media platforms like Facebook or Twitter want to get consumers in a pattern of habitual use (and thus exposure to ads).  Likewise, they don’t want consumers thinking too much about those ads, the interface itself or the possible (side)effects of their product.  At the same time, it seems that social media requires a slightly more active consumer.  If a user simply scrolls through her feed and doesn’t create content or engage with other users, the platform is deprived of data, hence profit.

Despite the above, it seems that at the cognitive level, consumer behavior in an old-media vs. new-media environment would be much the same.  We browse our feeds in default mode, automatically filtering out most content.  That excess content is still there, though, shaping how we understand the world.

Potter is mainly concerned with consumers mindlessly succumbing to the wishes of advertisers and media outlets.  He argues that we are “being trained to tune down our powers of concentration” so as to accept secondhand meanings rather than create our own (14).  In regard to new media, we can assume a similar process, but perhaps a more diverse array of influencers.  Certainly Facebook and its advertisers are trying to give you meanings, but so are content generators (your uncle and Russian bots, for example).  How do we make sense of this jumble?  Returning to the idea of mental conservation, we can assume that those meanings that require the least amount of energy to process might be the ones that get through.  This idea would help explain the simplification of discourse common in online environments.  If Potter is right, though, other disparate meanings would still be impacting us (to the extent that they exist in our discursive space).

It’s pure speculation on my part, but perhaps in a world of decentralized content creation, we should think in terms of form rather than meaning.  In other words, rather than focusing on how the circulation of specific meanings may be impacting our lifeworld (Hillary good vs. Hillary bad), it may be more productive to consider the forms those meanings take.  If Potter is right, advertisers, your uncle and the bots will be using similar strategies to get you to buy their messages (e.g., radically simplified discourse).  “Media literacy” would thus become the process of recognizing these strategies and the ways in which, apart from the content pushed, they might shape how we think and act.  It seems to me that Potter’s cognitive approach can help us perform this sort of analysis.