If you've received one of these recently, you might be an activist. Screenshot from Facebook

On Thursday morning, Facebook removed a post that Seattle-based black activist and artist Natasha Marin had composed over a year ago. Then they banned her from the site for three days.

According to Marin, the social media platform had thrown her in "Facebook jail" for violating the network's Community Standards by posting a screenshot of a racist message she had received on her Reparations project page. This project, which is now a well-attended Facebook group where people of color can make requests for willing white people to fulfill, draws so many racist comments from other Facebook users that Marin has constructed a "Shrine Of Asshats" page to display them. Needless to say, black and brown people reposting hate speech as a way of calling attention to the amount of vitriolic slime they have to swim through on a regular basis isn't the same as a white person using hate speech to hurt people, but FB banned her anyway.

The ban is a particularly harsh punishment for Marin, who uses Facebook to promote the classes she teaches to pay the bills. She also runs her Reparations project on the platform and manages the Seattle People of Color Salon, a social network for Seattle POC.

After I e-mailed Facebook to ask why Marin had been banned, the company reviewed their decision, recognized that they made a "mistake" in taking down Marin's post, rescinded the ban, and apologized.

However, their apology included a warning. While Facebook admitted it was a mistake to take down that particular post, some of the racist comments Marin screen-grabbed and reposted contained the e-mail addresses of the commenters. Since sharing others' personal information without their consent violates FB's Community Standards, Marin was warned that if she posted personal information again (something she'd done on a few other occasions) she'd be temporarily banned again.

Sorrynotsorry. Screenshot from Facebook

Facebook also recently temporarily banned Ijeoma Oluo for posting screenshots of racist comments people were hurling at her, and later apologized to her as well. According to Devin Coldewey over at TechCrunch, there have been several other similar instances:

We talked with another activist recently, Leslie Mac, who like Oluo spoke out on racism using the platform, and like Oluo was suspended from it. It happened to Shaun King, too, after he posted a racist email he received.

A spokesperson at Facebook acknowledged this pattern, but told me the company is not intentionally targeting black activists.

Though publishing hate speech violates FB's Community Standards, they do recognize that people such as Marin use that speech in an empowering way to draw attention to racism. Facebook allows those uses, but they also make mistakes in that arena.

To help me understand how those mistakes are made, the spokesperson walked me through the banning process.

What happens is this: Someone on Facebook posts a bit of hate speech because they're fucking terrible, or someone reposts that hate speech to show how terrible that fucking person is. Then some other user reports that post (or in some cases that whole page) to Facebook.

Once the offending speech is reported, a member of Facebook's content review team takes a look. There are currently 4,500 people employed on this team, but the spokesperson told me the company plans to hire 3,000 more people soon. This global team works 24/7 in 40 different languages to sift through millions of reports each week. They're trained up on implicit bias when they're brought on and retrained throughout their tenure. When someone makes a mistake, as they did in Marin's case, Facebook explains to that specific reviewer where they made the mistake and trains them in that area again. Some mistakes are incorporated into training decks to be used for future employees. The spokesperson also told me Facebook audits implicit bias trainings to make sure the trainers aren't making mistakes themselves—language and the way people use it changes all the time, and they strive to keep up with the trends.

This team of 4,500 content reviewers includes among its ranks, according to the spokesperson, people who are considered experts in hate speech, child safety, and terrorism. When a user flags a post or a page, it goes to the person or group most familiar with the language and culture of the offending item.

Reviewers make mistakes when they don't fully understand the context of the post, which can be difficult if the user doesn't include a caption specifying why she's sharing the post, or if the reviewers can't properly stalk the user. Depending on the user's privacy settings, reviewers might not be able to tell if the user is a black or brown person who's been the victim of racism or a white person who's being a fucking stupid idiot racist baby.

A few nuances here: Reviewers prioritize posts that have been reported tons of times, and they tend to put posts the team has already ignored tons of times on the back burner.

If reviewers determine that a post violates the Community Standards, then they give the user a warning, remove the post, or ban the user for a certain number of days depending on the severity of the situation. Users who have been warned a lot are more likely to get banned. If someone’s profile or page has been removed, they can appeal the decision.

In Marin's case, the reviewer messed up when they removed the post. But when they re-reviewed her other posts, they determined that Marin had shared the e-mail addresses of people who wrote racist comments, so they temporarily disabled her account. Marin told me she did not have the option to appeal that decision.

The spokesperson told me that Facebook doesn't like when users dox each other because it makes everyone more vulnerable to abuse. If Marin had wanted to repost a racist comment and blur out that commenter's e-mail, that would have been fine.

According to the spokesperson, Facebook knows the apparent pattern of disabling the accounts of black activists doesn't look good, but insists their actions are not intentional. Moreover, the content review team feels terrible when those mistakes happen.

In addition to hiring more reviewers, Facebook is looking at how its policies can better address this problem. The spokesperson says the company is having more conversations than ever with marginalized communities who feel they're not being treated fairly, but admits that Facebook just isn't there yet in terms of addressing those communities' concerns.

So, no. According to a Facebook spokesperson, there are no racist bots banning black activists because of some robotic misunderstanding of cultural context. There are just human beings on a content review team deciding that some posts by black activists violate their rules on hate speech, and then being terribly wrong about that.

But when they're not terribly wrong, they really let those racists have it by banning them for three whole days. That'll teach 'em.