14 reasons why #DeleteFacebook

Over the course of August 2020, I shared why I was leaving Facebook on Facebook. My intent was to set a deadline for exchanging contact information and to raise awareness of the harm Facebook inflicts.

While I am unaware of anyone being convinced by my campaign, I hope more people join me in deleting Facebook. You are free to share and remix these images.

I captioned each image post with:

I’m deleting my Facebook account on September 1, but I’m not unfriending you. Let’s exchange contact info here: https://www.jeremiahlee.com/posts/delete-facebook/


I am deleting my Facebook account on September 1, 2020. I will post throughout August explaining my reasons. I believe Facebook is harmful to society. People who work at Facebook deny or refuse to understand the problem and therefore cannot fix it. The appropriate response to something systemically harmful is to stop participating in the harm. By using Facebook myself, I give you a reason to keep using Facebook. I want you to have one less reason to use Facebook. I hope your other friends also leave Facebook, until you no longer go there because there is nothing left to see.

Facebook doesn’t sell ads. Facebook sells views of ads. If no one views the ads, Facebook doesn’t make any money. Facebook needs us and our attention. Using Facebook in any capacity literally funds the spread of hate and disinformation. It’s on us—not advertisers—to defund Facebook. Companies only advertise on Facebook because we’re on it. It’s time for me to no longer be on it.

Regardless of how you feel about whether Facebook should fact-check disinformation or remove hate speech, you cannot deny that Facebook is an effective method for spreading disinformation and hate speech. The more hateful the content, the more likely people are to spend time debating it. Facebook’s acceptable-content policy is determined by our threshold for what is too offensive to engage with and by advertisers’ threshold for being seen in proximity to that content. Hateful, but not too hateful, content is in Facebook’s business interest. The business model is inherently misaligned with what is good for humanity.

You might say, “But I don’t see hate speech or disinformation on Facebook—and I don’t post it either!” If true, you are fortunate. That hasn’t been my experience. Nearly every time I open Facebook, I encounter posts that can be disproven easily. Most often, people do not remove the post after someone points out the falsehood or mischaracterization. These are smart, loving people who share lies. They aren’t liars, but they have become okay with sharing lies because the lies align with their worldview and Facebook has designed an experience that makes them feel good when friends “like” the lies. I do not believe Facebook intentionally designed this, but Facebook hasn’t done anything to address the unintended consequence. Facebook employs smart and creative people who could design an experience that values truth and encourages respectful discourse—but it hasn’t, and Mark Zuckerberg refuses to acknowledge this as a problem worth addressing.

Facebook brings out the worst in us. You have likely received a comment on a post of yours that was rude, made you feel bad, or made you feel defensive. People say things online they would never say to your face. If we’re honest, many of us have also been the person writing the unnecessary comment at some point. Facebook lowers the immediate consequences of behaving poorly. Worse, our empathy and compassion sink even lower when we encounter a post we disagree with from someone we don’t even know. Facebook allows us to say things without having to acknowledge the humanity of the other person. This is the result of poor product design. There are many mechanisms Facebook could introduce to create a more civil society—but it optimizes for engagement because engagement means more ad views. I think Facebook is aware that not all engagement is pleasant. It just doesn’t care enough to risk a reduction in revenue.

In a randomized experiment, researchers found that people who deactivated Facebook for 4 weeks were less politically polarized and reported greater subjective well-being: higher life satisfaction and lower levels of depression and anxiety. The change equaled about 25–40% of the beneficial effect typically reported for psychotherapy. Facebook is literally making society more polarized, more anxious, and less happy. Products that cause harm, whether intentional or not, are recalled to stop the harm. Facebook has been shown to cause people harm, but no government consumer protection action has been taken to force Facebook to fix its flawed design and add safety features. To stop Facebook’s harm, we must stop using Facebook. Even if Facebook has not harmed you, your use of Facebook gives other people a reason to use Facebook, and they might be harmed by it. It’s time for us to take responsibility for our own and each other’s well-being. It’s time for us to stay connected some way other than Facebook.

Source

Social psychologist Leon Festinger observed that people are naturally inclined to engage in social comparison. To answer a question like “Am I doing better or worse than average?” you need to check out other people like you. Facebook is a quick way to engage in social comparison. A 2012 study found that Facebook users tend to think other people lead happier lives than their own, leading them to feel that life is less fair. Disinformation isn’t only about facts and elections. It can also be about the false perspectives we develop when we fill in incomplete information with idealized assumptions. Facebook is not responsible for humans’ innate social comparison behavior, but it is responsible for designing a product that amplifies both its frequency and its harm.

Source

Facebook is literally profiting from harming democracy. Mark Zuckerberg explicitly stated that Facebook will not take any action against ads containing misleading or false information if they are run by a political group or candidate. If Facebook believes it cannot be a fair arbiter of truth, it should prohibit all political ads, as Twitter does. Facebook is aware of the harm its product caused in the last election and is fully prepared to profit from disinformation campaigns again—this time with eyes and pockets wide open. “Facebook promised—but never delivered—reforms after Russian troll farms used the platform to spread disinformation on an industrial scale to help elect Donald Trump in 2016. Today, domestic actors exploit Facebook’s targeting tools and advertising policies to spread election disinformation, and the social media giant continues to profit from content designed to sway the 2020 election by discouraging people from voting,” said Yosef Getachew, Common Cause Media & Democracy program director.

Source

“Time and again, Facebook brazenly reveals its danger as an out-of-control, largely unchecked monopoly. The fact it is allowing demonstrably false political ads designed to suppress the vote is the latest example of the threat the company poses,” said Mark Stanley, director of communications for Demand Progress. “People everywhere are angry that Facebook is profiting by allowing candidates to place demonstrably false political ads that sabotage our democracy by spreading disinformation about vote-by-mail during a pandemic,” said Trent Lange, president of the California Clean Money Action Fund. “[Facebook] is being weaponized to spread hate and violence, harm vulnerable communities, and undermine our democracy…” said Vanita Gupta, chief executive of the Leadership Conference on Civil and Human Rights. If Facebook won’t change, then we must change. The only way to get Facebook’s attention is to stop using it, because that’s the only way Facebook loses revenue.

Source

When you use Facebook, you fund the spread of Holocaust deniers’ lies. This isn’t hyperbole. Facebook’s stated content policy is not to remove disinformation. Facebook makes money when you open the app and see an advertisement. Even if you never see the Holocaust denial, it sits on servers that Facebook can only keep running because of the revenue you generate for it. “When a user follows public pages containing Holocaust denial content, Facebook actively promotes further Holocaust denial content to that user.” —2020 investigation by the Institute for Strategic Dialogue, a UK-based counter-extremism organization

Source

According to a BuzzFeed News report, Facebook fired an employee who collected evidence of right-wing pages receiving preferential treatment. To stay in Facebook’s news tab, a news source must not publish misinformation more than twice in 90 days. However, Breitbart received special treatment that helped it avoid running afoul of this policy: misinformation strikes against Breitbart were removed by Joel Kaplan, Facebook’s vice president of global public policy, who served in President George W. Bush’s administration and publicly supported Brett Kavanaugh’s Supreme Court nomination. Similar interventions were found exclusively for right-wing outlets and figures, including Turning Point USA founder Charlie Kirk, Trump supporters Diamond and Silk, and the conservative video production nonprofit Prager University. It is impossible to be impartial, and it is clear which side Facebook’s executives have picked. Now it’s time to defund Facebook by deleting our accounts.

Source

In 2014, a multi-day riot in Myanmar that left 2 people dead and 20 injured started with a false post on Facebook that was reshared many times. When Internet connectivity in the country increased significantly that year, monks and ultra-nationalist organizations that used to distribute disinformation pamphlets moved their efforts to Facebook. Rumors began spreading at an alarming rate. At the time, Facebook had only one person responsible for reviewing content written in Burmese. Despite people reporting content for violating Facebook’s policies, hateful posts weren’t being removed. This specific riot would not have happened without Facebook. Facebook did not have a booming ad business in Myanmar, but it was able to operate there because people in the rest of the world funded its operation. How many steps removed do you need to be to not feel like you have blood on your hands as a Facebook user who funded Facebook’s irresponsible expansion into unprofitable markets?

Source

An Indian politician posted on Facebook that Muslim immigrants should be shot. Facebook employees concluded the politician should be banned for violating the company’s hate-speech rules and for off-platform activities that could lead to real-world violence. Facebook banned Alex Jones, Louis Farrakhan, and white supremacist organizations with the same justification: enabling these people to spread their hate on Facebook would have grave consequences in the physical world that Facebook would be complicit in. But the employees were overruled by Facebook’s public-policy director for India, who argued that banning the politician would hurt Facebook’s ad business in India and could prompt India to ban Facebook. Facebook had to pick a side, and it did. Facebook had people trying to do the right thing in this case, but its executives explicitly chose profit over people’s lives. If a business model cannot withstand ethical consideration, the business model is bad. Facebook will only change its behavior when its revenue is negatively affected. It’s time to delete Facebook.

Source

If I can’t convince you to delete Facebook, why do you think you will be able to convince anyone on Facebook of anything that actually matters? You’re just funding Facebook’s toxic behavior.

Photo of Mark Zuckerberg © Anthony Quintano. Used under a Creative Commons Attribution 2.0 Generic license.