Facebook blocks posts ‘supportive’ of separate Kashmir state, Pakistan’s claim to J&K: Report

Srinagar: Posts on “Azad Kashmir”, those defaming deities, and depictions of the Indian Tricolour on any clothing below the waist — these are among 20 “locally illegal” markers that Facebook has for content from India.

While the company insists it does not proactively block “locally illegal” content for viewers from specific countries (a practice known as geo-IP blocking) unless its legal team deems a local agency’s request “valid”, these unreported India-specific guidelines show that the company still directs its moderators to flag such content for further review.
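For readers unfamiliar with the mechanism: geo-IP blocking generally works by resolving a viewer’s IP address to a country and withholding restricted content only for that jurisdiction, while the post remains visible everywhere else. The sketch below is a minimal, hypothetical illustration of that general technique; the GEO_RESTRICTIONS table, the ip_to_country() stub, and the is_visible() function are assumptions made for the example and do not represent Facebook’s actual systems.

```python
# Illustrative sketch of geo-IP blocking: content flagged as "locally
# illegal" stays up globally but is hidden from viewers whose request
# IP resolves to the restricting country. Generic example only, not
# Facebook's implementation.

# Hypothetical table: content ID -> countries where it is geo-blocked.
GEO_RESTRICTIONS = {
    "post_12345": {"IN"},  # blocked for viewers resolved to India
}

def ip_to_country(ip_address: str) -> str:
    """Resolve an IP address to an ISO country code.

    Stub for illustration; real systems consult a geolocation
    database rather than inspecting address prefixes.
    """
    return "IN" if ip_address.startswith("103.") else "US"

def is_visible(content_id: str, viewer_ip: str) -> bool:
    """Return False if the content is geo-blocked for the viewer's country."""
    blocked_countries = GEO_RESTRICTIONS.get(content_id, set())
    return ip_to_country(viewer_ip) not in blocked_countries

# The same post is served to one viewer but withheld from another,
# based purely on where their IP address resolves.
print(is_visible("post_12345", "8.8.8.8"))      # True  (resolves to US)
print(is_visible("post_12345", "103.27.8.44"))  # False (resolves to IN)
```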

For instance, the company has stated multiple times, publicly and in meetings with the press, that its global policy does not treat speech attacking a religion or belief as hate speech, even in India. The new findings, however, show that the company still proactively tracks such content in India.

One of the slides in the Facebook document asks: “What is locally illegal content?” The next slide shows a diagram with three phrases: “Content doesn’t violate Facebook policy”, “Respecting local laws when the government actively pursues enforcement”, and “Facebook risks getting blocked in a country, or it’s a legal risk”.

In the section “Operational Guidelines”, the document gives moderators examples of content to flag: maps of Kashmir and Aksai Chin, posts comparing deities divisively or depicting Muhammad, and images replacing the wheel on the Tricolour with Gandhi.

Under the ‘national border’ section, posts “supportive” of a separate Kashmir state, of Pakistan’s claim to Kashmir and Siachen, or of China’s claim to Aksai Chin, Arunachal Pradesh, Assam, Nagaland, or Tripura are to be flagged.

At least three slides in the section on “religious extremism” repeat the phrase “humor is not allowed”.

It goes on to say that images of Muhammad may violate Section 295 of the Indian Penal Code (IPC), which prohibits outraging religious feelings by insulting religious beliefs. “Legal is more risk-averse when it comes to litigation over religious imagery… If there is a violating image but the caption or context clearly condemns this defamation, such an image may still be considered offensive and liable to be GEO IP Blocked,” reads one of the guidelines.

The religion category highlights “Defamation of deities”, “Negative remarks or mocking images about religious gods & prophets”, “Comparing deities” and “Calling for new states based on religious community” as among the red flags that moderators have to look out for in posts.

The third and final category, national symbols, covers burning, stamping on, or writing on the flag, as well as depicting only a portion of the flag.

The documents revealing Facebook’s content moderation arrive on the heels of recently publicised draft amendments to the IT Act, one of the main laws governing online content, which would place more liability on companies to proactively take down unlawful content on their platforms.

Facebook’s transparency reports show that content takedown requests from Indian government agencies have decreased significantly over time. While the country asked the platform to take down around 30,000 pieces of content in 2015, that figure fell to around 3,000 in 2017. The reports state that most of the content the government wanted taken down was ‘anti-religion’ and ‘anti-state’ posts.

In 2013, India topped the list of countries with the most content takedown requests; it now stands at number seven.

At the same time, globally, the platform has proactively removed an increasing amount of content it considers ‘hate speech’, rising from 1.6 million pieces in the last quarter of 2017 to 3 million in the third quarter of 2018.

Increasingly, these takedowns happen before users report the content. For the third quarter of 2018, more than 50 percent of hate speech takedowns worldwide came from “proactive detection”, meaning the platform flagged the content before a user reported it to the company.

A Facebook report states: “The amount of content we flagged increased from around 24% in Q4 2017 because we improved our detection technology and processes to find and flag more content before users reported it.”
