How Facebook decides if a post is hate speech and whether it will be deleted

Facebook has released a statement detailing how it identifies a post as hate speech and whether or not it should be deleted.

According to the statement, the company deletes over 66,000 “hate speech” posts per week, or around 288,000 posts a month. It has also committed to adding 3,000 moderators over the next year to deal with the increasing number of hate posts, bringing its total number of moderators to 7,500.
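For readers checking the arithmetic, the weekly and monthly figures are broadly consistent, and the moderator totals imply a current team of roughly 4,500. The short sketch below is only a rough cross-check using the numbers as reported, not anything taken from Facebook's statement:

```python
# Rough cross-check of the figures quoted above, using only the numbers
# as reported in the article (not Facebook's own methodology).
weekly_deletions = 66_000        # posts deleted per week (reported minimum)
weeks_per_month = 52 / 12        # average weeks in a month

monthly_estimate = weekly_deletions * weeks_per_month
print(f"Implied monthly deletions: ~{monthly_estimate:,.0f}")  # ~286,000, close to the 288,000 cited

new_moderators = 3_000
total_moderators = 7_500
print(f"Implied current moderators: {total_moderators - new_moderators:,}")  # 4,500
```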

Defining Hate Speech

“Our current definition of hate speech is anything that directly attacks people based on what are known as their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease,” said Richard Allan, Facebook’s VP of Public Policy for EMEA.

“There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of it vary significantly.”

Citing countries like Germany, he noted that posting hate speech online can lead to arrest, while in the US “even the most vile kinds of speech are legally protected under the US Constitution.”

Enforcement

According to Allan, Facebook removes around 288,000 posts marked as hate speech globally each month.

This includes posts that may have been reported for hate speech but deleted for other reasons, although it doesn’t include posts reported for other reasons but deleted for hate speech.

However, he again noted that, as with defining hate speech, deciding whether or not to delete a post is not necessarily a simple task.

“Sometimes, it’s obvious that something is hate speech and should be removed – because it includes the direct incitement of violence against protected characteristics, or degrades or dehumanizes people,” he said.

“But sometimes, there isn’t a clear consensus — because the words themselves are ambiguous, the intent behind them is unknown or the context around them is unclear. Language also continues to evolve, and a word that was not a slur yesterday may become one today.”

Allan noted that this discretionary enforcement also depends on factors such as the context of a post and the intent behind it. This can range from someone using a perceived slur in a self-deprecating way to someone using otherwise innocuous words in a way that clearly incites.

Mistakes

“When we remove something you posted and believe is a reasonable political view, it can feel like censorship.”

“We know how strongly people feel when we make such mistakes, and we’re constantly working to improve our processes and explain things more fully,” said Allan.

“Our mistakes have caused a great deal of concern in a number of communities, including among groups who feel we act — or fail to act — out of bias.”

“We are deeply committed to addressing and confronting bias anywhere it may exist. At the same time, we work to fix our mistakes quickly when they happen.”


Read: Google fined record €2.4 billion for manipulating EU search results
