This is the second part of a two-part series focusing on internet freedom and internet censorship on a global scale. Click here to read part one.
Netsweeper and its activities are only one aspect of a wider conversation about internet freedom. Many internet activists believe that ethically dubious algorithms, social media policies, and the dismantling of net neutrality protections have been threatening free speech domestically. Tech companies profess a commitment to the open sharing of ideas and to democratic principles, but heightened press scrutiny in recent years has revealed that these ideals aren't always reflected in company practices.
Ramesh Srinivasan, co-author of the book “After the Internet,” says that social media platforms claim a commitment to the open sharing of democratic viewpoints, but their algorithms determine which viewpoints are visible and which are invisible. In some cases, a series of clicks can take users down an algorithmic rabbit hole that ultimately brings fringe content into the mainstream.
Many figures on the alt-right have been banned from social media platforms and have alleged other forms of censorship, including shadow banning. From their perspective, they're being actively discriminated against, even though, from other people's perspective, they're espousing discriminatory rhetoric. Although freedom of speech is enshrined in law, it's easy to forget that Twitter, Facebook, and other private platforms make their own rules.
Although the ethics of rule-making aren't always clear-cut, Srinivasan argued that tech companies should consider the societal context when determining which types of speech to prohibit or discourage. “If it is the most vulnerable people in the society that are being targeted by that hate speech then, in a sense, that hate speech counts that much more because there’s a harm that it can influence,” he said.
Speaking at the European Parliament, Facebook CEO Mark Zuckerberg stated, “We have never and will not make decisions about what content is allowed, or how we do ranking, on the basis of a political orientation.”
Facebook is currently developing AI systems to proactively review new content. Zuckerberg said the company is moving from “reactive management,” in which members of the community flag objectionable posts, to a model in which AI systems proactively evaluate and flag content for tens of thousands of human reviewers.
“I think brands/content creators desire and often pay for the internet to be unimpeded,” Eric Black, a technology consultant, told DMN. “However, the world is digitally mutating so fast that it’s simply too difficult to keep up.
“A tsunami of data, policies and people… how could one possibly understand the full reach of their deployed content and context?”