Brittan Heller doesn't know quite what caused it.
Maybe she turned a man down for a date too quickly, bruising his pride. Maybe she just bothered him in some way. Whatever it was, it unleashed waves of attacks from a fellow Yale law student a decade ago.
Back then Facebook didn't have the reach it currently has. So Heller's tormentor raised an online mob on AutoAdmit.com, a message board for law students and lawyers. Soon, posts appeared accusing her of using drugs and of trading sexual favors for admission to the elite school.
That sucked her into a larger maelstrom raging on the message board. Other female students at Yale were being accused of sleeping with professors to get better grades. Behind pseudonyms, some posters said they hoped the women would be raped.
Often, this is where the story ends. The women, harassed and degraded, close their accounts or drop out of school, anything to put distance between themselves and the anonymous hatred.
Heller, now a lawyer for the Anti-Defamation League, and her peers chose to fight, suing AutoAdmit to reveal the names of their harassers. They eventually settled. The terms of the settlement are confidential, Heller says, but the experience set her on the path toward a career fighting hate speech.
"My work would be a success if no one ever needed me," Heller says. But so far, it's the opposite. "We're in a growth industry."
Hate is everywhere these days. It's hurled at people of different skin colors, religions and sexual orientations. It isn't limited by political view; it's not hard to find hateful words and acts on the left and the right. And it takes place everywhere: airports, shopping malls and, of course, on the internet.
Hate groups have taken up residence online. The hateful meet up with like-minded gangs on sites like Reddit, Voat and 4chan, terrorizing people they don't like or agree with. Because much of the internet is public, the medium magnifies the hateful messages as it distributes them.
The ADL, a civil rights group, found that about 1,600 online accounts were responsible for 68 percent of the roughly 19,000 anti-Semitic tweets targeting Jewish journalists between August 2015 and July 2016. During the same period, 2.6 million anti-Jewish tweets may have been viewed as many as 10 billion times, the ADL says.
It would be bad enough if digital hate stayed locked up online. But it doesn't. It feeds real-world violence. In May, a University of Maryland student who reportedly belonged to a Facebook page where white supremacists shared memes was arrested in the stabbing death of a black Army lieutenant. A few days later, a man who had reportedly posted Nazi imagery and white nationalist ideology to his Facebook page went on a stabbing spree in Portland, Oregon, after threatening two women, one of whom was wearing a Muslim headdress. Two Good Samaritans were killed. The man who opened fire on a Republican representatives' baseball practice was reportedly a member of Facebook groups with names such as "The Road to Hell Is Paved with Republicans" and "Terminate the Republican Party."
And that doesn't count the garden variety taunts people get because of how they look, or the bomb threats or vandalized cemeteries.
The legal response has varied from place to place. In the US, where freedom of speech includes the expression of hate, activists are pushing lawmakers to draw a line at harassment, and treat it the same whether it's in real life or over the internet.
In other countries, like Germany, where hate speech that includes inciting or threatening violence is already outlawed, the government is working with social networks like Facebook and Twitter to ensure enforcement. Last month, Germany passed a law that could fine social media companies more than $50 million if they fail to remove or block criminally offensive comments within 24 hours.
So far, tech has proved ineffective at curbing online hate speech, and that's not just because of the internet's reach and anonymity. Take today's tools that automatically flag derogatory words or phrases. Humans get around them through simple code words and symbols, like a digital secret handshake. So instead of the slur "kike" for Jew, they write "skype." The smear "spics" for Hispanics becomes "yahoos," "skittles" stands for Muslims (a reference to Donald Trump Jr.'s infamous comparison of the candy to Syrian refugees) and "google" stands for the N-word.
Now tech companies, activists and educators are devising new approaches and tools that, for instance, hide toxic comments, identify who we are and verify the content we see, or make us stop and think before we post. They're also experimenting with virtual reality, potentially putting us in the shoes of a victim.
Their goal: to encourage civility, empathy and understanding. "It's not impossible," says Caroline Sinders, a Wikimedia product analyst and online harassment researcher. "It's fixable."
What form that fix will take is anyone's guess. This problem, after all, has existed since before the internet was even a thing. And right now most efforts to curb online hate are in their early stages. Some may show promise, but none appears to be the answer.
"It's going to be a combination of different approaches," says Randi Lee Harper, a coder who founded the Online Abuse Prevention Initiative after being targeted by online hate mobs.
Would you take back awful comments if you could? That's the notion behind Hate Free, an app that scans emails and status updates for hate speech, creating an extra step that asks people to think before pressing Send.
Another idea: Use AI to stop the vitriol before it gets published. Alphabet's Jigsaw group is working on just such an approach with its Perspective software. Available for free to websites and blogs, the program evaluates a comment's potential impact on a conversation, scores the post's toxicity level and decides whether to allow it to be published. The New York Times is an early adopter.
The Washington Post, meanwhile, is now using computer programs to moderate comments. The software was trained on years of records kept by the Post's human moderators. But it handles only "rote work," the newspaper said. Stickier comments are still judged by humans.
"This technology not only helps foster healthier comment sections, but will make it easier for journalists to find and interact with the highest quality commenters," said Greg Barber, director of digital news projects at The Post.
Computers can't do the whole job, because they're notoriously bad at understanding nuance, a problem heightened by the coded language of hate speech. Wouldn't it help if we all acted as community police? That's the aim of Civil, a Portland startup whose software helps teams manage comments sections on media and consumer websites.
It's called Civil Comments, and it works by forcing you to rate three people's posts for civility before you can submit yours. Wired called it the "online equivalent of taking 10 deep breaths before picking a fight." AI and other computer techniques then score the ratings to make sure no one can cheat the system.
"This is a human problem and the solution has to be human in large part," says Christa Mrgan (not a typo), Civil's co-founder and vice president of design.
We know you
Chris Ciabarra sees internet-spawned hate as an extreme problem requiring an extreme response.
That's why his Austin, Texas, startup Authenticated Reality plans to create "The New Internet." Think of it as a completely new web browser that verifies the people who use it and the content it serves up. No more fake news. No anonymous postings.
Everything you do on the browser is tied to a profile that's been verified by a driver's license or passport. You can surf any website you want, of course. But you'll find more-reliable websites as well, all verified through his company and tied to actual people. The service attempts to end the culture of anonymity that's made the worst parts of the internet possible.
The company's bold idea is even baked into its website's name: TheNewInternet.com.
"It's the Wild West. That's the problem," says Ciabarra, Authenticated Reality's co-founder and chief technology officer. With Authenticated Reality, "you're putting your reputation on the line."
Ciabarra says he'd be happy if 1 percent of the internet joins his service, but he thinks 90 percent of us would want to. Still in testing, the service is free for now. It will eventually cost about $20 a year.
The big guns
Facebook and Twitter will have to play a much larger role for any real change to occur.
Many people associate Twitter with anonymous, hate-spewing trolls. Former Breitbart editor Milo Yiannopoulos, for example, used Twitter to attack comedian Leslie Jones for appearing in the "Ghostbusters" remake, delighting other people who considered the all-female production a bow to political correctness. (Yiannopoulos had agreed to an interview with CNET but canceled.) And it's where he attacked feminist Anita Sarkeesian and developers Zoe Quinn and Brianna Wu for complaining about the video-game industry's treatment of women, inflaming a controversy that came to be known as #GamerGate.
As a result of these and other high-profile harassment campaigns, Twitter is trying to shield victims by hiding offensive tweets and making it easier to report attacks. The company also shut down 376,890 accounts in the last six months of 2016 as part of its effort to fight "violent extremism."
Facebook pays more than 7,500 people to monitor what people post, including violent videos and graphic imagery. The company is also investing in counter-speech, effectively highlighting positive commentary to drown out the negative. Some people, for example, may have seen anti-Muslim posts in their news feed surrounded by posts and news stories about Muslims raising money to clean up desecrated Jewish cemeteries. Facebook also programmed its computers so people can't create groups with hateful terms in their name anymore.
Advocates applaud Facebook's efforts but say they're long overdue. Twitter and Facebook declined to make any executives available for comment.
In 2007, Heller, the Yale law student, filed a federal lawsuit demanding AutoAdmit identify her tormentors. That eventually put her face to face with some of them. She was astonished to learn that most of them had never met her or even gone to her school. They were men and women; professionals and blue-collar workers; young and old.
What they had in common was an empathy gap.
"Hate comes from everywhere on the spectrum, it's not exclusively owned by one party," Heller said. "The theme with all of them was that they said, 'I didn't realize what I wrote affected a real person.'"
After law school, Heller investigated and prosecuted cybercrime and human rights violations at the US Department of Justice and the International Criminal Court in The Hague. Last September, she joined the ADL as an on-the-ground liaison to the tech industry.
In that role, she works with tech companies on virtual reality. Her hope is that VR's immersive experience can present the world through other people's eyes. The technology, she thinks, might help close the empathy gap.
Heller also heads the ADL's Silicon Valley command center, which tracks, analyzes and fights cyberhate aimed at African-Americans, Muslims, Jews and the LGBTQ community. And she's helping Twitter curb the awfulness that flourishes on it.
In the spirit of the Valley, the ADL funded a hackathon called Innovate Against Hate, which accepted submissions through March and will pick a winner later this year. The person or group with the most creative concept for curbing abuse will win a $35,000 first prize.
"The internet's not inherently good, it's not inherently bad," she says. "It reflects the intentions of the people who use it."
With any luck, those intentions can be changed.