Facebook Inc will remove or restrict users’ access to content if it becomes necessary to reduce adverse legal or regulatory impact on itself, the social media company said on Tuesday.
Its updated Terms of Service, effective from October 1, have, however, raised alarm bells among internet and free speech advocates, who called the change troubling, vague, and contrary to the Supreme Court’s judgment on online speech and intermediary liability.
“This global update provides more flexibility for us to change our services, including in Australia, to continue to operate and support our users in response to potential regulation or legal action,” a Facebook spokesperson said in an email response.
A Facebook source said the update is not related to content moderation, and that the company made the change to enable it to stop publishers and people in Australia from sharing local and international news content on Facebook and Instagram. Facebook is currently entangled in a regulatory battle in Australia over a proposed law that would require it to pay news publishers.
However, technology lawyers were worried. They said the update gives Facebook censorship rights and curtails users’ free speech. The change applies to Facebook globally, and Instagram’s terms will reflect similar changes.
“The Shreya Singhal judgment (2015) clearly states that intermediaries can take down content only with a judicial order or a notice from a government agency. Proactive censorship is not permissible, and intermediaries should not determine the potential legality of content,” Vrinda Bhandari, a Delhi-based lawyer specializing in digital rights and privacy, said.
This will lead to private censorship, where intermediaries function with minimal accountability and transparency, Bhandari said. The update also runs contrary to Facebook CEO Mark Zuckerberg’s public statements supporting free speech on the platform.
“We also can remove or restrict access to your content, services or information if we determine that doing so is reasonably necessary to avoid or mitigate adverse legal or regulatory impacts to Facebook,” the smartphone notification sent to users on Tuesday read.
The updated terms will allow Facebook to moderate content to maintain its business objectives in a changing regulatory environment.
“This development is incredibly concerning because earlier Facebook had maintained that it is not actively policing content on the platform. By becoming a censor on issues that can impact Facebook or are about Facebook’s negative impact, will reduce the scope of the conversation,” said Apar Gupta, executive director of Internet Freedom Foundation, a digital rights organization.
The changes to its terms of service come despite growing criticism of Facebook’s content moderation policy, which has allegedly been influenced by its employees’ political leanings and the company’s business considerations, both globally and in India. Facebook is currently under the spotlight in India for not pulling down allegedly hateful and anti-Muslim content posted by BJP legislators on its platform.
Facebook has opposed changes to India’s intermediary liability rules, which would change the way social media companies monitor and take down content at the request of law enforcement agencies. It has also pushed back against demands to trace messages on its instant messaging platform WhatsApp to their originators.