I have mixed feelings about online censorship. In general, I am a huge proponent of intellectual diversity and being educated on opposing viewpoints. I believe it is impossible to hold a well-informed opinion if you are not well-informed about the opposing viewpoint; you cannot learn to think for yourself and form your own opinions if you are never exposed to different opinions.
At the same time, it is naive to think that everyone who puts their opinions out into the public does so earnestly, out of a genuine desire to articulate their beliefs. People who spread hate speech do so out of maliciousness; their intent is to cause distress and sow discord. Hate speech does not promote intellectual diversity; instead of promoting growth and constructive discussion, it builds walls and tears people down.
However, although I have mixed feelings about online censorship in general, I hold better-defined positions on more specific questions within it.
Is it ethical for companies to remove dissenting opinions for governments? Is it ethical for companies to remove information that does not promote or share their interests or political beliefs?
To me, these two questions are asking the same thing–is it ethical for companies to remove non-harmful content if they or someone else disagrees with it? While I disagree with this censorship of different beliefs and believe that it stunts the development of society, I would not go so far as to say it is unethical; it depends on the purpose and mission of the company. For news organizations, the mission is to inform the public about current events. If a staff writer pens an article with a different ideology than that of the organization, but one that still reports only true facts, I do think it would be unethical to fire that writer.
However, there are many other companies, including social media sites, that are not news organizations and do not proclaim a mission to inform the public. If a pro-life Christian forum removed a post advocating the benefits of contraceptives or abortion, I don’t believe many would call the forum moderators unethical, whether or not they agree with their beliefs. Likewise, if a feminist forum removed a post that denied the existence of rape culture, it is hard to see how anyone could claim that is unethical. In neither of those cases did the forums claim to be unbiased–and even if they don’t explicitly state their bias, it’s still pretty clear that it exists.
This brings me to my main point–it is not unethical for companies to remove opinions that they, or someone else like the government, disagree with, but it is unethical for companies to assert themselves as an unbiased source and then go on to censor dissenting viewpoints. While Google censors search results in China in order to be allowed to provide services in that country, it does not pretend otherwise. If Facebook is censoring user posts, that in and of itself is not unethical. However, as Reem Suleiman said, “if Facebook is making decisions about how news reaches the public, then it needs to be transparent about how those decisions are made.” Facebook cannot pretend to be “a place where people feel free to express themselves within reason” while censoring people’s expression.
Is it ethical for companies to remove information broadcast by terrorist organizations, or about terrorist organizations?
I believe it is ethical for companies to remove information broadcast by terrorist organizations. If the information is broadcast by terrorists, then they want the public to hear that message, and to spread it is to aid them. However, I won’t go quite so far as to say that it is unethical for companies not to remove such information, as this goes into the territory of hearing opposing viewpoints in order to better understand why you hold your own beliefs.
However, information about terrorist organizations is more of a grey area, and I think it is a question better answered by individual companies themselves, according to their company missions. Facebook firmly states “There is no place for terrorists on Facebook,” while Twitter is slightly more lax, in order to avoid censorship. I don’t think one option is more ethical than the other, given all the complexities and nuances of freedom of speech and freedom of information.
Is it ethical for companies to remove discriminatory, provocative, hateful content generated by its users?
There is no question for me on this; it is always ethical for companies to remove hate speech generated by their users. As I already mentioned above, hate speech is generated not out of a desire to create open discussion and the sharing of beliefs, but out of malicious intent and a desire to cause distress.
That being said, actually distinguishing hate speech from unsavory opinion is a difficult task. Discriminatory or provocative content might not necessarily be hate speech; perhaps the person proclaiming these opinions is truly ignorant. Removing this content when it is not clearly hate speech may eliminate an opportunity to educate that person, and lead to the creation of echo chambers. The person whose content is removed will instead move to other sites filled with people sharing the same opinions, and the site that removed the content will also become more homogeneous in its content.
Content needs to be reviewed carefully and thoughtfully to determine whether it is hate speech. Tech companies are doing their best to practice this, but it is undeniably a difficult task.