With online hate speech on the rise, here’s what some countries are doing to fight back.
Countries are battling online hate speech, even as the service providers and private companies that host it struggle to respond.
Social media is one of the most powerful vehicles for communicating ideas online, but few safeguards exist for moderating and preventing online hate speech.
Basically, people are free to disrespect each other without consequence, but should there be rules and penalties when those negative posts result in direct acts of violence or discrimination?
That’s a question that nations are now attempting to answer. In the wake of the mosque attacks in Christchurch, New Zealand that were live-streamed on Facebook, governments are trying to educate lawmakers about the power of online hate and develop ways to better govern the digital domain.
THE UNITED KINGDOM
In April, the United Kingdom published a 100-page white paper exploring possible ways that the UK government could impose new regulations against online hate.
The Secretary of State for Digital, Culture, Media & Sport even opened up the issue for public debate. The UK wants to create an independent regulatory body and give that entity the power to regulate online posts that might be deemed anti-social or outright criminal.
AUSTRIA
Austrian authorities have recently developed a pilot program called “Dialogue Instead of Hate”. Without criminalizing online hate speech, the program aims to promote responsible posting and better moral judgment.
When Austrian authorities believe that a person is guilty of incitement to hatred and violence, that person can be enrolled in a treatment program that encourages them to exercise moral restraint while online.
CANADA
Canada has taken proactive measures to get a better grasp of online hate speech. On April 11th, religious and civic leaders were invited to join the Canadian government’s House of Commons Standing Committee on Justice and Human Rights to figure out what to do about the rise in online hate speech.
The committee recognized the difficulties of policing posts and comments made online while highlighting the need to hold people accountable when their online activity has dire consequences in the real world.
One of the biggest takeaways from that meeting was the importance of creating a moral framework for online interactions.
THE UNITED STATES
The US government is reluctant to take any direct action against online hate speech. Where other nations have succeeded in at least opening a dialogue, the United States has fallen back on the same old rhetoric – freedom of speech trumps everything (pun intended).
After all, freedom of speech is a basic constitutional right in the US, even if that speech is unpopular or offensive. Americans are protected against censorship by the First Amendment.
That puts the power in the hands of private companies to self-regulate. As private platforms, Facebook and Twitter are free to control what type of content is posted on their sites.
Yet even that would anger some American politicians. Brendan Carr, a Republican commissioner at the Federal Communications Commission, took to Twitter to rip Facebook’s call for governments to regulate online hate speech.
Facebook says it’s taking heat for the mistakes it makes in moderating content. So it calls for the government to police your speech for it.
Outsourcing censorship to the government is not just a bad idea, it would violate the First Amendment.
I’m a no. https://t.co/1q5k1OxS42
— Brendan Carr (@BrendanCarrFCC) March 31, 2019
The lack of government response doesn’t reflect the sentiment of the American public.
A survey conducted by the Anti-Defamation League in December 2018 found that 58% of Americans believe that online hate and harassment are making hate crimes more common.
In the same survey, 80% of Americans wanted their government to take more legislative action and 67% wanted private companies to make it easier to report hate crimes online.
THE PRIVATE SECTOR
Facebook and Twitter are the largest forums for online hate speech, simply because of their popularity.
How do you police and monitor the moral fabric of such an astronomical number of users?
Facebook executives have promised to develop new ways of monitoring posts that could be considered violent and incendiary, but the difficulty lies in distinguishing posts that are harmless and genuine from those created for the sole purpose of spreading hatred.
Mark Zuckerberg wants to stem the tide of online hate speech on his social media platforms, but even he admits that there are limits to what he can do.
At the end of March, Zuckerberg penned an op-ed for the Washington Post that placed responsibility for regulating online hate speech mainly on governments.
He wrote, “I believe we need a more active role for governments and regulators.” Facebook has the sticky issues of privacy and censorship to consider, which, together with its bottom line, makes the task an unenviable one.
Governments don’t have that problem; they have no advertising partners to consider when tackling online hate speech. Yet the waters here are still muddy.
Most people who make hateful comments or posts online do so under the veil of anonymity or just out of spontaneous bursts of emotion.
In just a few seconds, a hateful diatribe is posted, shared and kicked around by A.I. bots until that message has been magnified a million-fold.
Somehow, the moral restraint that keeps people from shouting a racial slur in a crowded coffee shop doesn’t seem to hold true online. The moral filters are weaker when there is a screen in between.
Most people would agree that they wouldn’t make these types of comments in the real world during a face-to-face encounter with another human being.
In times past, these kinds of hateful remarks were whispered behind the hand or discussed in private. Yet the internet makes everything you post available to millions of people, instantly exposing vile rhetoric and amplifying it like never before.
Nations of the world seem to support the idea that morality needs an upgrade for the digital domain. Yet, Stephen Glicksman, Ph.D. doesn’t agree.
He’s a developmental psychologist and Director of Clinical Innovation with Makor Disability Services in New York.
According to Dr. Glicksman, “The whole point of moral rules is that they are universal and generalizable across contexts (they are true everywhere), not alterable by consensus (they don't change because we decide to change them), and not contingent on the presence of a rule (we don't need a formal rule to tell us we can't do something that's immoral). That's what makes them moral rules.”
He goes on to say that “there is no difference between moral rules online and moral rules in the real world.
The anonymity of the internet might make it easier for people to behave immorally, much the same way that the economic system and racial theories of the South in the 19th century made slavery common. But that doesn't mean that slavery or cyber-bullying is any more or less immoral depending on when or where or how you engage in it.”
THE END OF FREE EXPRESSION ONLINE?
Remarkably, some people feel that any regulation or censorship of online hate speech would mean the end of free expression online. Nations like Austria, the UK and Canada are listening to those concerns, but they’ve clearly decided to move forward with new frameworks for regulating online morality.
The United States isn’t leading from behind on this issue; the US is just behind. Lawmakers in the US have put the responsibility entirely on the shoulders of the private sector.
Some nations are treading lightly, but others are taking more drastic measures.
In the wake of the Easter terrorist attacks in Sri Lanka, the government imposed an all-encompassing blackout of Facebook to block any attempt at misusing the social sharing platform to promote violence. India has also taken draconian measures against WhatsApp to curb the rise of fake news during its election.
There is a dark path ahead in policing online morality. There are general and universal rules of morality that should extend to online platforms, but even people with moderate viewpoints can unwittingly pass along something that could be viewed as the promotion of violence and hatred.
So, finding a single person to blame will be tricky. Perhaps the best way to tackle the issue of online hate speech is to remove some of the anonymity, so that people have to own up to their words.