
Facebook ‘filter bubble’ study raises more questions than it answers

By Mathew Ingram
May 7, 2015, 10:10 PM ET

Facebook has been criticized for some time for its role in creating a “filter bubble” among its users, a term that comes from the book of the same name by Eli Pariser (who went on to help create the viral-content site Upworthy). Critics say the social network shapes users’ perception of the world through its algorithmically filtered newsfeed. Facebook, however, has released a study that it says proves this isn’t true—if there is a filter bubble, the company says, it exists because users choose to see certain things, not because of Facebook’s algorithmic filters.

But is this really what the study proves? There’s considerable debate about that among social scientists knowledgeable in the field, who note that the conclusions Facebook wants us to draw—by saying, for example, that the study “establishes that … individual choices matter more than algorithms”—aren’t necessarily supported by the evidence actually provided in the paper.

For one thing, these researchers point out that the study looked at only a tiny fraction of Facebook’s users: less than 4% of the overall user base, in fact (a number that doesn’t appear in the study itself, only in an appendix). That’s because the study group was drawn solely from users who explicitly state their political affiliation. Needless to say, extrapolating from that group to the entire 1.2-billion-user Facebook universe is a huge leap.

Sociologist Nathan Jurgenson points out that while the study claims to prove conclusively that individual choices affect what users see more than the algorithm does, its own data doesn’t back this up. That appears to be true for conservative users, but for users who identified themselves as liberals, Facebook’s data shows that exposure to differing ideological views is reduced more by the algorithm (8%) than by a user’s personal choices.

But even that’s not the biggest problem, Jurgenson and others say. The biggest issue is that the study treats individuals choosing to limit their exposure to certain topics as though it were completely separate from the Facebook algorithm doing so, as if the two were disconnected and could be compared on some kind of equal basis. In reality, says Jurgenson, the algorithm amplifies those personal choices, because they are what the algorithmic filtering is ultimately based on:

“Individual users choosing news they agree with and Facebook’s algorithm providing what those individuals already agree with is not either-or but additive. That people seek that which they agree with is a pretty well-established social-psychological trend… what’s important is the finding that [the newsfeed] algorithm exacerbates and furthers this filter bubble.”

Sociologist and social-media expert Zeynep Tufekci points out in a post on Medium that trying to separate and compare these two things is the worst “apples to oranges comparison I’ve seen recently,” since the two things Facebook is pretending are unrelated in fact have significant cumulative effects and are tied directly to each other. In other words, Facebook’s algorithmic filter magnifies our existing tendency to avoid news or opinions we don’t agree with.

“Comparing the individual choice to algorithmic suppression is like asking about the amount of trans fatty acids in french fries, a newly-added ingredient to the menu, and being told that hamburgers, which have long been on the menu, also have trans-fatty acids.”

Christian Sandvig, an associate professor at the University of Michigan, calls the Facebook research the “not our fault” study, since it is clearly designed to absolve the social network of blame for users not being exposed to contrary news and opinion. Beyond the framing of the research—which suggests that being exposed to differing opinions isn’t necessarily a positive thing for society—the conclusion that user choice is the real problem just doesn’t ring true, says Sandvig (who has written a paper about the biased nature of Facebook’s algorithm).

“The tobacco industry might once have funded a study that says that smoking is less dangerous than coal mining, but here we have a study about coal miners smoking…. there is no scenario in which user choices vs. the algorithm can be traded off, because they happen together. Users select from what the algorithm already filtered for them. It is a sequence.”
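
Sandvig’s “sequence” point can be made concrete with a bit of arithmetic. Below is a minimal sketch in Python; the numbers are hypothetical (the 8% algorithmic reduction echoes the figure cited above, while the personal-choice rate is purely illustrative). Because users can only choose from what the algorithm has already shown them, the two reductions compound rather than trade off:

```python
# A minimal sketch, with hypothetical rates, of why "user choice vs. the
# algorithm" is a false trade-off: the filters apply in sequence, so their
# effects multiply.

cross_cutting_share = 0.40  # assume 40% of posts from friends are cross-cutting

algorithm_keep_rate = 0.92  # algorithm surfaces 92% of them (the ~8% cut cited above)
user_click_rate = 0.94      # user engages with 94% of those shown (illustrative)

# Users select from what the algorithm already filtered for them,
# so the reductions compound multiplicatively:
after_algorithm = cross_cutting_share * algorithm_keep_rate
after_user_choice = after_algorithm * user_click_rate

print(f"after algorithm:    {after_algorithm:.3f}")     # 0.368
print(f"after user choice:  {after_user_choice:.3f}")   # 0.346

combined_reduction = 1 - algorithm_keep_rate * user_click_rate
print(f"combined reduction: {combined_reduction:.1%}")  # ~13.5%, more than either filter alone
```

On these illustrative numbers, neither filter “matters more” in isolation; the narrowing of what a user sees is the product of both.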

Jurgenson also talks about this, and about how Facebook’s attempt to argue that its algorithm is somehow unbiased or neutral—and that the big problem is what users decide to click on and share—is disingenuous. The whole reason some (including Tufekci, who has written about this before) are so concerned about algorithmic filtering is that users’ behavior is ultimately shaped by that filtering, which is in turn built on that behavior. The two processes are symbiotic, so arguing that one is worse than the other makes no sense.

In other words, not only does the study fail to prove what it claims, but the argument Facebook is making in defense of its algorithm isn’t supported by the evidence—and can’t be, given the study’s design. And as Eli Pariser points out in his piece on Medium about the research, the study also can’t be reproduced (a crucial element of any scientific research), because the only people allowed access to the necessary data are researchers who work for Facebook.

Correction, May 8, 2015: An earlier version of this post misidentified Christian Sandvig. He is an associate professor at the University of Michigan.
