Diving into Content Moderation and Free Speech

Boy, are we glad that we don’t have to live in the Stone Age. Can you imagine a world without the internet? The means of communication back then were highly unreliable; a letter could take months to reach the other person, if it survived the journey at all. Thank Vinton Cerf and Bob Kahn for the invention of the internet, or we’d still be penning letters to contact people. All in all, communication has been immensely revolutionized. However, with the advent of social media in our lives, the tension between free speech and content moderation comes to light.

Sure, social media has allowed us to stay in touch with the rest of the world; however, it hasn’t come without adverse side effects. It’s worth pointing out that, in recent years, people have become more vocal and borderline extremist over the slightest of inconveniences.

Let’s dive deeper into this.

Social media giants

You’re probably aware of the four giants of the social media world: Twitter, Facebook, Instagram and YouTube. Currently, Facebook has more than 2.85 billion monthly active users; it’s practically become a religion in itself. Long gone are the times when parents taught their children not to talk to strangers. Ironically, through social media, you can establish a sort of kinship with strangers that you may never have found in the people around you.

However, given the amount of personal detail we share on these platforms while trusting them blindly, every day brings a fresh newsflash about why we shouldn’t. For example, The New York Times published a piece reporting that Facebook’s top executives ignored signs of Russian interference in 2016 and went so far as to hide them from the public eye. It’d be like tobacco companies turning a blind eye to the fact that smoking kills people through lung cancer.

Cultivating cult-like behaviors

Besides connecting people, the internet has also given birth to severe issues like doxxing, cyberstalking and swatting, all of which involve threatening a person, often by exposing their private information for the whole world to misuse. For example, a man from North Carolina was arrested in a Washington, D.C. pizza shop after brandishing a gun and claiming that he was taking the fight from the internet to the real world. The frequency of situations like these has set off alarms, because people are now being threatened within what should be the safety of their own homes.

Social media has even spurred violence in countries like Sri Lanka and India, to the point that governments had to ban various platforms because they had become vehicles for hate speech and ethnic incitement. In southern India, a 32-year-old man was lynched by a mob after a chain of WhatsApp messages spread rumors that he was involved in a child kidnapping.

Dabbling in the gray area

It is no news that these social media platforms have been shaping our lives and public events as of late. They have allowed people to spread false news, and when the tech companies behind them are flagged and confronted about removing the content, they always manage to find a way out under the guise of “free speech”. By invoking the First Amendment and the Communications Decency Act (CDA), these companies manage to deflect most confrontations.

Section 230 of the CDA establishes that websites relying on user-generated content cannot be treated as the publishers of that content, and that was all any of these social media platforms needed to defend themselves over the tweets or posts shared on their websites. This law makes it nearly impossible to sue them over user posts, because they’re not legally accountable for them. They’re just bulletin boards for people to pin their stuff on.

How content moderation works

On the other side of the story, though, CDA Section 230 allows social media platforms to moderate the content that gets displayed, which is pretty much like saying these platforms are newsstands whose only powers are to take things down or rearrange them. In short, they’re curating content. However, you’d be surprised at the amount of data that gets uploaded and needs to be moderated. YouTube alone receives 400 hours of video uploads every single minute.
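To put that figure in perspective, here’s a rough back-of-envelope sketch in Python. The 400-hours-per-minute upload rate is the figure cited above; the review speed and shift length are invented assumptions, there only to show the scale of the problem:

```python
# Back-of-envelope arithmetic on YouTube's moderation workload.
# The upload rate is the figure cited in the text; everything else
# is an illustrative assumption, not platform data.

UPLOAD_HOURS_PER_MINUTE = 400        # cited above
MINUTES_PER_DAY = 60 * 24

daily_upload_hours = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
print(f"Video uploaded per day: {daily_upload_hours:,} hours")
# -> Video uploaded per day: 576,000 hours

# Assumption: one moderator watches footage at real-time speed for a
# full 8-hour shift, with no breaks and no second opinions.
REVIEW_HOURS_PER_SHIFT = 8

moderators_needed = daily_upload_hours // REVIEW_HOURS_PER_SHIFT
print(f"Moderators needed for YouTube alone: {moderators_needed:,}")
# -> Moderators needed for YouTube alone: 72,000
```

Even under those generous assumptions, reviewing a single day of YouTube uploads would keep roughly 72,000 full-time reviewers busy, which is why platforms lean so heavily on automated filtering and escalate only a fraction of content to humans.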

CDA Section 230 also contains a “Good Samaritan” clause, which gives tech companies the right to take down content they find objectionable, and you can’t sue them over it because they’re presumed to be acting in good faith. One example of this is the Palestine issue: posts mentioning it on Facebook or Instagram were reportedly screened and, in many cases, removed from the platforms. You probably assumed that content moderation happens automatically, but in fact an estimated 150,000 human content moderators are working worldwide.

The good, the bad and the ugly

One of the most concerning aspects is that these content moderators don’t necessarily distinguish between good content and bad; rather, the platforms study the engagement metrics each piece of content generates, because, understandably, the most controversial content is likely to garner the most attention. For instance, a video of a kid getting beaten was shared more than 44,000 times. It might be heinous, but you can’t deny that this kind of content enrages people and gets them talking, and engagement is exactly what these platforms thrive on.
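To make that incentive concrete, here’s a deliberately simplified, hypothetical ranking sketch in Python. The Post class, the scoring weights and the numbers are all invented for illustration; no platform has published its real formula:

```python
# Hypothetical sketch of engagement-driven ranking. None of this is an
# actual platform's algorithm; it only illustrates the incentive.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int

    def engagement_score(self) -> int:
        # Assumed weights: shares and comments signal stronger
        # engagement than likes, so they count for more.
        return self.likes + 3 * self.comments + 5 * self.shares

posts = [
    Post("Cute cat video", likes=900, shares=40, comments=120),
    Post("Calm policy explainer", likes=300, shares=10, comments=60),
    Post("Outrage-bait clip", likes=500, shares=2_000, comments=1_500),
]

# A feed that simply maximizes engagement surfaces the most
# inflammatory item first, regardless of its actual quality.
for post in sorted(posts, key=lambda p: p.engagement_score(), reverse=True):
    print(f"{post.engagement_score():>7}  {post.title}")
```

Run it and the outrage-bait clip tops the feed by an order of magnitude, even though nothing in the score says anything about whether the content is accurate, healthy or safe.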

The fact that Facebook wants to establish its own “Supreme Court” to decide what counts as acceptable content is questionable and alarming. So far, the company has been exceptional at covering up incidents, from carrying out blackouts on major issues like Palestine and Israel to influencing elections. Imagine what it could do with a Supreme Court of its own.

The clutches of social media clearly need to be curtailed, and new laws need to be enacted to pull these platforms’ focus away from raw engagement and give them a long-overdue reality check.

 
