When free speech becomes weaponized
Hate in America (News21) — https://hateinamerica.news21.com/blog/2018/06/27/when-free-speech-becomes-weaponized/
Wed, 27 Jun 2018

PHOENIX — More than 16 million people, many of them foreign travelers, passed through the Tom Bradley International Terminal at Los Angeles International Airport in 2017. On most days, they deal only with immigration and customs inspectors. But one day last June, they encountered a small, vocal group of protesters holding signs decrying Islam.

They weren’t aiming their message at lawmakers or activists. Instead, they targeted foreigners — particularly Muslims — who had just finished long intercontinental flights from far-flung places.

Was this demonstration a legitimate act of political speech? Was it hate speech?

Many people struggle with the boundary between offensive protest speech and hate speech.

“It is a matter of what the target perceives it to be,” said Phyllis B. Gerstenfeld, a hate crimes expert at California State University, Stanislaus. “One way of thinking about it is: What is the primary intent of the speaker? Is it to effect change, or is it to harm someone psychologically or verbally?”

Jared Taylor of Oakton, Virginia, who founded a self-described “white advocacy” group that is best known for its “American Renaissance” magazine and website, said that threshold is too low.

“What is hate speech? It’s defined basically as anything that upsets someone,” he said. “Certain facts will offend people. Certain opinions will offend people. If you run your life strictly on the basis of who you might offend, how far are you going to get?”

In 1969, the U.S. Supreme Court weighed in. In Brandenburg v. Ohio, the justices devised the “imminent lawless action” test: speech loses protection only when it is directed to inciting imminent lawless action and is likely to produce it.

“Incitement” is in the eye of the beholder, especially in today’s heated political climate, free speech experts say.

Some social justice activists and scholars consider upsetting words a form of violence or incitement toward violence, especially with how fast word can spread on the internet, said Benjamin Krueger, who teaches political rhetoric and discourse at Northern Arizona University.

Che Rose of Washington, D.C., is one of those social justice advocates. An expert in online and alt-right “gamer” culture, he said offensive speech might not be immediately harmful, but it makes people think violence toward certain groups is acceptable.

“If people know it’s OK to say something, then they think it’s OK to do it,” Rose said. “They’re pushing norms in a bad direction.”

Taylor disagreed. He said the entire concept of free speech is to protect the right to be controversial, and to offend people.

“We can’t simply decide ‘OK, we have everything figured out, and anything that deviates from what we’ve got figured out is wrong is shut out,’” Taylor said. “That’s the end of progress. That’s the end of any kind of free debate, that’s the end of democracy, that’s the end of the United States, as far as I’m concerned.”

Equating words with violence creates “philosophical problems,” Krueger said, explaining that laws regulating words are content-based restrictions, and are rife with legal complications.

So, those protesters at the airport? Some experts say their message was hateful; others say it was a way to draw attention to a controversial cause. Either way, under the law it is protected.

But Rose said people who preach hate will be judged through the lens of history.

“When they make the movies about this decade, you’re the bad guy,” he said of the perpetrators of hate.

Can artificial intelligence recognize hate speech?
Hate in America (News21) — https://hateinamerica.news21.com/blog/2018/06/19/can-artificial-intelligence-recognize-hate-speech/
Tue, 19 Jun 2018
BERKELEY, Calif. — A group of researchers is fighting online hate speech by teaching computers to recognize it on social media platforms.

The Online Hate Index project out of the D-Lab at the University of California, Berkeley, in partnership with the Anti-Defamation League, aims to identify hate speech, study its impact, and eventually design a plan to counteract hateful content.

Using artificial intelligence, teams of social scientists and data analysts are coding programs that can search through thousands of posts for malicious content, said Claudia Von Vacano, executive director of digital humanities at Berkeley. Even in its early stages, the program correctly identifies about 85 percent of hate speech.
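The article doesn't describe the model itself, but the basic idea of training a text classifier on labeled posts and measuring its accuracy can be sketched in a few lines. Everything below — the posts, labels, function names, and scoring rule — is invented for illustration; the Online Hate Index's actual system is far more sophisticated.

```python
# Hypothetical sketch: a minimal bag-of-words scorer for flagged vs. benign posts.
# All training data and thresholds here are made up for the illustration.

from collections import Counter

# Tiny labeled corpus (1 = flagged as hateful, 0 = benign), invented for the sketch.
TRAIN = [
    ("group x should be banned from this country", 1),
    ("people like them deserve what they get", 1),
    ("great game last night everyone", 0),
    ("looking forward to the weekend", 0),
]

def train(corpus):
    """Count how often each word appears in flagged vs. benign posts."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in corpus:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Positive score means the post uses more 'flagged' vocabulary than benign."""
    return sum(counts[1][w] - counts[0][w] for w in text.lower().split())

def accuracy(counts, test_set):
    """Fraction of posts the scorer labels correctly; a real project would
    report this on held-out data, not the training set."""
    correct = sum((score(counts, t) > 0) == bool(y) for t, y in test_set)
    return correct / len(test_set)

counts = train(TRAIN)
```

A figure like the article's "about 85 percent" would come from running `accuracy` over a large annotated test set rather than a toy corpus like this one.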

The software is used in connection with a problem-solving lab of experts, helping companies navigate the line between protected free speech and content that dangerously targets marginalized groups, Von Vacano said.

The Online Hate Index was started in 2012 by Brittan Heller, director of technology and society at the Anti-Defamation League, and Von Vacano.

It began by targeting hate speech on Reddit, the popular web forum. The project then attracted interest from companies such as Google, Twitter, and Facebook, which formed partnerships with the ADL and the D-Lab and plan to use the Online Hate Index on their platforms, Von Vacano said.

Daniel Kelly, assistant director of policy and programs for the Anti-Defamation League, explained that the ADL began working to fight online hate in 2014, when it released guidelines for companies hoping to limit damage done by extremists online. The Online Hate Index is an innovative project that is designed to target aspects of online hate that have been overlooked by similar studies, he said.

“What we are doing is using machine learning and social science to understand hate speech in a new way,” Kelly said. “We are taking it from the perspective of targets of hate online.”

Kelly said the project aims to be transparent by lifting the “black veil” on data and analytics from social media companies. Many companies keep private the data and statistics behind their terms of service and user policies. One of his main concerns is that the ADL and D-Lab don’t know whether those policies incorporate the perspectives of the marginalized groups affected by them.

Both the D-Lab and the ADL recruited research team members with diverse backgrounds, including varying ethnicities, genders, and academic fields, said Von Vacano, who is also in charge of recruitment for the Online Hate Index project.

“Our linguist, for example, is delving deeper into issues of threat,” Von Vacano said.

One of the largest challenges the teams faced was rating the intensity of statements made by Reddit users, Von Vacano said, because hate speech is not clearly defined. To solve this problem, the ADL and D-Lab use a scale to characterize posts. At the first degree, a biased post might merely hint at hateful opinions. Next, hateful content may dehumanize a whole class of people. The most extreme examples of online hate are direct threats to individuals, including doxing, in which people with malicious intent publish information, such as a home address or phone number, that puts someone in harm’s way and leaves them vulnerable to unwanted attention or visitors.
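The three-tier scale described above can be pictured as a simple severity rating. The tiers below follow the article's description, but the trigger phrases are invented stand-ins, not the ADL/D-Lab's actual rubric:

```python
# Hypothetical sketch of the intensity scale: tier definitions follow the
# article; the example phrases are made up for illustration only.

def severity(post: str) -> int:
    """Rate a post on a rough intensity scale:
    1 = hints at biased opinions, 2 = dehumanizes a class of people,
    3 = direct threat to an individual (e.g., doxing)."""
    text = post.lower()
    # Tier 3: publishing personal details that put a target in harm's way.
    if any(p in text for p in ("home address", "phone number", "lives at")):
        return 3
    # Tier 2: language that dehumanizes an entire group.
    if any(p in text for p in ("subhuman", "vermin", "they are animals")):
        return 2
    # Tier 1: everything milder, down to posts that merely hint at bias.
    return 1
```

A real annotation effort would rely on trained human raters and linguistic features rather than fixed phrase lists, which is part of why the teams found the task so hard.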

“Going into the project, we kind of naïvely thought that we could ingest large amounts of text and, at the other end, say on a binary level ‘this is hate… this is not hate,’” Von Vacano said. “At this point, we have a much more sophisticated understanding of hate speech as a linguistic phenomenon, and we are really dissecting hate speech as a construct with multiple components.”

In February 2018, the first stage of the project was completed, and more information can be found on the ADL’s website. Phase two is scheduled to be released in July.
