A thread on filter bubbles, confirmation bias, design against misinformation, and social media content policy. Or: how can people really think that the U.S. election was rigged, and is it social media's fault? đź§µ
If you are reading this tweet, it is possible that you literally don't know a single person who voted for Donald Trump. Meanwhile, I know a couple of people who likely literally don't know a single person who DIDN'T vote for Donald Trump, besides me.
It's not like this is new - 30 years ago the same might have been true just because all your friends lived in your local community - but the internet makes us FEEL like we KNOW so many more people, and that we have a broader view of the world.
"I see thousands of people posting on Facebook every day and not a single one of them voted for Joe Biden." From there it might be easy to conclude that the election results can't be real, because... seriously, who are these people who voted for Biden?! You've never seen them!
And the fact that so many people say they get their news from social media like Facebook isn't an ALGORITHM problem. It's because PEOPLE are choosing to have their news curated for them by the people and groups they choose to follow.
And this isn't a conservative vs liberal thing. Again, many people were also completely shocked by how close the election was because they see thousands of people posting on Facebook who all voted for Biden. Filter bubbles all the way down.
But when we're talking about the incredible power of confirmation bias - people believe things that reinforce what they already think, because who wants to be wrong??? - it's even stronger when everyone around you ALSO believes it. You can't all be wrong!
So do you really think that Twitter labeling a tweet as false information will make someone say "oh, well if Twitter says that, it must be false"? Because meanwhile, everyone you follow is telling you that you're right. Not that you're wrong - or worse, that you're stupid.
I think that this kind of labeling can absolutely make a difference for smaller things that you may not have known were true or false, but if you've made up your mind about the election, or climate change, or whatever, I find it unlikely Twitter's label will make you think twice.
So this brings me to a couple of opinions:
(1) Sure, social media is a big part of what's happening here, though I think that the much harder problems are about people rather than algorithms. How do you get people to believe things that show they were wrong?
(2) When content, including misinformation, is dangerous enough, a label is insufficient. If you're not going to be able to change people's minds, then the only option is to reduce the spread of that content such that it doesn't become further confirmation of an idea.
Anyway, this is just my current stream-of-consciousness thoughts on the matter, also influenced by my own interactions with some very conservative acquaintances who believe the election was fraudulent.
And it's a tangle of issues that create a perfect storm: some are about social media and some aren't.

And h/t @Greene_DM for an optimistic look at filter bubbles specifically. My thoughts are not intended to reflect empirical research. :) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2758126
My concerns are actually more about the people that you surround yourself with than news sources. I've just heard "everyone I know voted for Trump so there's no way he didn't win" a LOT lately.
You can follow @cfiesler.