Tech leaders aren’t taking responsibility for the violence they’re creating, and so our online divisions will continue to grow.
Hate speech, intolerance, political divide — our societal problems are mirrored and multiplied online. Social networking sites like Twitter, YouTube, and Facebook are the face of these issues, but the problems won't disappear even if those sites do.
People in all nations and across political divides are demanding answers. Most of the solutions are inadequate to the problem. Let me explain why this is so difficult and what will really make things better.
What governments want
Why not pass laws requiring these companies to clean up their acts? It's been done, and it's not great. Well-intentioned hate speech laws like Germany's NetzDG force companies to act as censors with little oversight and no meaningful way to appeal decisions. Russia, China, Vietnam, and Pakistan have passed laws that give the government broad powers to punish people who spread fake news, distribute "homosexual propaganda," or "disrupt the social order" — blanket laws that let them shut down any speech they dislike. History shows that narrow laws work reasonably well — like protecting health information or video rental histories — but broad laws are ripe for abuse. If you're looking for governments to save you, look elsewhere.
What companies want
Companies like Twitter and Facebook have made their own proposals about what to do, but those proposals are self-serving and would increase their market dominance. For example, Facebook has proposed standards that it already meets — and that other companies would struggle to match. Twitter wants more transparency and competition, but if you're already a Twitter user, little transparency or competition would convince you to leave. Changes like these would actually reduce competition — entrenching the monopolies these companies hold over our social spaces and taxing small competitors with huge moderation burdens.
Hire more people
A popular refrain is that these companies avoid enforcement because it would cost them a lot of money. Cost matters less than you think. These companies want to build trust with their users, and that means investing in solid enforcement. Hiring another 500 engineers or 10,000 reviewers costs pennies compared to the user goodwill and healthy environment they create. And there are limits on the benefit of that next engineer or next moderator…
Why haven't tech companies been able to solve these problems with better technology? If it were easy, they would have solved it already. Some hate content is pretty easy to catch — like the n word in videos or swastikas in images. But is the n word part of a comedy act or a racial slur? Is the swastika part of a white supremacist video or a history lesson? Even worse, these are easily obfuscated — a bleep over the n word or visual static on a swastika can cause a computer to miss it entirely. And that's just the tip of the iceberg of content that computers can't catch and can't make decisions about.
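To make the obfuscation problem concrete, here's a minimal toy sketch of a blocklist filter. It uses a made-up stand-in term, "badword", and no real platform works this simply; the point is that the naive approach catches the exact term, misses a trivially obfuscated variant, and can't distinguish a slur from a comedy act:

```python
import re

# Toy blocklist. "badword" is a hypothetical stand-in slur for illustration.
BLOCKLIST = {"badword"}

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocklisted term."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(token in BLOCKLIST for token in tokens)

# Exact match: caught.
print(naive_filter("he shouted badword at the crowd"))        # True
# Trivial obfuscation ("b4dw0rd") slips right past the filter.
print(naive_filter("he shouted b4dw0rd at the crowd"))        # False
# Flagged, even though the context (a comedy act) may be benign.
print(naive_filter("the comedian's act reclaims badword"))    # True
```

Real systems use machine-learned classifiers rather than keyword lists, but the same two failure modes carry over: adversaries can obfuscate, and the classifier can't judge context.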
If tech isn't the solution, how about hiring more moderators? Two problems. First, if you want humans to review every piece of content, you'll need to hire millions of people — that's not feasible, and consider the privacy implications: everything you share would be seen by a stranger. Second, people are fallible. People make mistakes, and different people will make different decisions about the same piece of content. A naïve example is your beach bikini video — totally fine to a moderator in the US but not to a moderator in a Muslim country. There are infinite examples like that. You can't depend on people to make perfect decisions every time — or even the same decisions you'd make if you were reviewing it yourself.
Managers can’t move the company
These companies are full of managers and others who could make decisions and push their companies into healthier territory, right? Nope. Zuck has made it clear that while there's bad behavior on his site, he believes little of it crosses the line. Facebook employees are unhappy — and speaking out about it. The impact? Not much. The managers and other employees can be as angry as they want. They're just the crew on the SS Facebook. Someone else is at the helm.
What about the users?
If people are really concerned about hate speech on social media sites, their behavior isn't showing it. Twitter and Facebook active-user counts keep growing. YouTube has 2 billion monthly users. Even as some conservatives abandon Twitter for Parler, Parler has already revisited its content policies based on what its users are sharing. Users are vocal that these companies need to change, yet everyone seems content to scroll through their friends' status updates and silly dog videos.
Principles and leadership
If Mark Z really wanted to remove hate content from Facebook, he could order his company to do it, and they would. If Jack wanted to lower the heat that Twitter generates, Twitter could do it. There would be costs in lost users and revenue, but they'd get over it.
But they won’t because that’s not what their CEOs want.
The best way I can explain this is by talking about wearing a mask in the US. People take their cues from the people they respect, people “above” them. If Donald Trump stressed the importance of wearing a mask, and if religious leaders insisted their parishioners wear masks, more people would wear masks. If scientists insisted that masks are useless and shouldn’t be worn, a whole lot of people would stop wearing them.
Companies work in a similar way. People at the top set the agenda, and the rest of the company follows. If Zuck says he wants Facebook to be a hate-free site, people in his company will make the decisions to make that happen.
But he’s not doing that.
As long as Zuck and Dorsey and other tech leaders keep steering their ships in the same direction, things won’t improve.
Without a change in leadership — a change in the principles that underlie a company's decisions — those companies will keep on doing what they're doing.
Nothing will get better until the people at the top change.
So what might change them?
The “punch a bunch of people” button
Social media companies express their missions as enabling conversation, community, and voice. But not all voice is equal; some of it is violence.
And when we talk about hate speech, we’re really talking about violence between people. Let me explain.
If someone punched you in the face, you’d sue them. There might be an arrest and a trial. Jail time. Fines. As a society, we disincentivize violence with punishment.
We don’t do that for verbal attacks. We tell people to grow a thicker skin. Ignore them. As a society, we consider this part of growing up.
Here's the catch: your body doesn't know the difference between a verbal attack and a physical attack.
If someone throws a punch at you, your body responds instinctively — pumping you full of cortisol and adrenaline, putting you into “fight or flight or freeze” mode.
The same thing happens when your identity is threatened — like if you’re pro-life and read a pro-choice screed, or if you’re pro-mask trying to convince an anti-masker to wear one. Your brain responds unconsciously — increasing your stress level, getting you ready for that fight.
Facebook, Twitter, and YouTube — if those apps really punched you in the face, there would be lawsuits and regulations. People would stop using them. And they’d introduce tech changes to stop throwing punches.
In reality, these apps let you throw virtual punches at people — in tweets, images, and videos. Your body doesn't know the difference between being punched in the face and reading a status update you disagree with. But our laws — and hell, our society — treat them as different. As long as we consider these attacks to be different, no solution will be adequate.
Which is why we have to depend on people of principle. People who recognize the violence they're enabling. People who commit to creating healthier communities and countries. People who lead their companies to solve those problems.
The change has to come from the inside, not the outside. And it has to come from the top because their users, employees, and even governments can’t make these ships get on the right course.
We need the leaders of these tech companies to commit to creating non-violent spaces online.
No community, no society can survive so much violence.
Look, these issues are never going away. Technology enables hate speech in new ways, but hate speech is not unique to technology. There’s no “solution” that will get rid of hate speech altogether. Hell, there’s no solution that can get rid of just the stuff you hate and keep the stuff you like.
But not all is hopeless. The leaders of these companies can take a stand — and they can start by realizing that they’re responsible for the punches thrown on their platforms. Today, these leaders aren’t taking responsibility for their role in the violence that’s disrupting our society.
If you care about these issues, go work for those companies and help them stop online abuse. These are difficult problems, and those companies need passionate people to address them. You can put a dent in some of the problems out there.
And if you found all this interesting, I recommend that you learn more. For the political angle, try Why We’re Polarized by Ezra Klein. For the psychological angle, The Righteous Mind by Jonathan Haidt is your book. If you want the comic point-of-view, Red State, Blue State by Colin Quinn is worth your hour and five minutes.
Damn, this is a tough problem.