We need HUMAN INTELLIGENCE to do so
Nineteen of these individuals had either killed or tried to kill someone. These were people who were brought up to believe that misogyny, bigotry and hate were holy and commendable. They genuinely believed in the rightness of their actions.
The people I interviewed included adherents of Wahhabism, members of Hizb ut-Tahrir, Jehovah’s Witnesses, Hasidic Jews, Scientologists, and members of the Fundamentalist Church of Jesus Christ of Latter-Day Saints. What they had in common was that each of them changed their attitudes – and did so independently.
I wondered how people who grew up in such highly conformist settings realized they were wrong. Most of them were sealed off from the outside world: no internet, no TV, no secular high school or college education. Most also lost their families and friends when they rebelled, and each was, in some way, punished by their community.
Alongside the interviews, I researched more than 1,000 people in similar situations, recruited from personal acquaintances, social media, podcasts, videos, magazine and journal essays, ezines and blogs, forums and chats, academic journals, and published autobiographies and memoirs – all first-hand accounts. I thought that if we could pinpoint why and how indoctrinated individuals choose to stamp out their hatreds, maybe we could launch more effective deradicalization programs.
Facebook’s plan to kill hate speech with AI…
Facebook wants to exterminate hate speech with a battalion of artificial intelligence tools. Supposedly, these systems can detect typical trigger words, like Jew, white, hate, women and black. They also look out for words in all caps, and for certain sentence structures that researchers say may indicate hateful comments. In 2017, the Anti-Defamation League (ADL) collaborated with the UC Berkeley D-Lab to analyze online hate speech in order to find ways to combat it.
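To see why this surface-level approach is fragile, here is a minimal sketch of the kind of heuristics described above – a list of trigger words plus an all-caps check. This is not Facebook’s actual system; the word list, weights and function names are invented for illustration.

```python
# Illustrative trigger-word scorer. NOT a real moderation system:
# the word list and the scoring formula are invented for this sketch.
TRIGGER_WORDS = {"jew", "white", "hate", "women", "black"}

def toxicity_score(comment: str) -> float:
    """Crude 0..1 score: fraction of words that are trigger words
    or all-caps 'shouting' (ignoring short words like 'I')."""
    words = comment.split()
    if not words:
        return 0.0
    triggers = sum(1 for w in words if w.strip(".,!?").lower() in TRIGGER_WORDS)
    shouting = sum(1 for w in words if len(w) > 2 and w.isupper())
    return min(1.0, (triggers + shouting) / len(words))

print(toxicity_score("I HATE those people"))    # 0.5 — flagged: trigger word + caps
print(toxicity_score("My black cat is asleep")) # 0.2 — false positive: neutral "black"
```

The second call shows the core problem: a perfectly innocent sentence scores as mildly toxic simply because it contains a listed word, with no understanding of context.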
Brittan Heller is the director of the ADL Silicon Valley Center for Technology and Society. She wrote: “The results of the study give me a lot of hope.”
Problems with AI “hate” tools
Problems emerged when other research teams, including one at Google, ran tests and found that the AI struggled to tell hate speech from harmless banter. Their software gave the comment “you’re pretty smart for a girl” an 18 percent toxicity score, while giving “I love Fuhrer” only two percent. A follow-up team at McGill University in Montreal, Canada, trained their AI to distinguish offensive from inoffensive speech by feeding it hand-picked examples of both. There were some improvements, but their machine-learning software still missed clearly offensive speech, such as “Black people are terrible.”
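The blind spot is easy to reproduce. Below is a toy word-frequency classifier in the spirit of the train-on-examples approach described above – far simpler than any real research system, with invented stand-in training sentences – that misses an offensive comment simply because its words never appeared in the offensive training data.

```python
# Toy classifier trained on hand-picked examples. The training
# sentences are invented stand-ins, not any team's real data.
from collections import Counter

offensive = ["you people are scum", "go back where you came from"]
inoffensive = ["have a great day", "you are pretty smart"]

def word_counts(sentences):
    c = Counter()
    for s in sentences:
        c.update(s.lower().split())
    return c

bad, good = word_counts(offensive), word_counts(inoffensive)

def offensive_score(comment: str) -> float:
    """Average, per word, of how much more often the word appeared
    in offensive training examples than in inoffensive ones."""
    words = comment.lower().split()
    return sum(bad[w] - good[w] for w in words) / max(1, len(words))

print(offensive_score("you are scum"))              # > 0 — caught: seen in training
print(offensive_score("those humans are terrible")) # 0.0 — missed: unseen wording
```

The second comment is plainly hostile, yet scores zero because none of its words were in the offensive examples – the same failure mode as missing “Black people are terrible.”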
This inability to discriminate may be why, all too often, some Facebook users are banned over misconstrued speech while sophisticated hate speech slips through.
“It’s important,” George Dvorsky wrote in Gizmodo, “to recognize that hate speech can be disguised… As a current example, crypto-fascists use rhetoric, metaphor, and tricks of language to make their content seem less… fascist. It has been alleged, for example, that the Smurfs are an example of crypto-fascism. This may be an exaggeration, but how in the hell is an AI expected to pick up on this sort of subtlety when even humans can’t agree?”
Then again, some marginalized groups and individuals reclaim derogatory terms that are often used against them (like “kike” or “queer”). For AI to succeed in eliminating hate speech, it would have to recognize these subtleties and sort out where certain types of speech are permissible and where they aren’t. In other words, “artificial” intelligence would have to be “human” to make such complex and nuanced judgments.
Sara Wachter-Boettcher, author of “Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech”, agrees.
“Since scientists,” she told Gizmodo, “are nowhere close to ready to answer these kinds of questions… the technical piece is largely irrelevant, because it won’t solve the problem.”
To eradicate hate speech, you have to strike at its core.
Results of my research
After spending more than six years analyzing past and aspiring terrorists, I noticed a recurring pattern: an unexpected situation or occurrence collided with the aggressor’s faith and forced them to change their minds. Examples included unexpected kindness from the demonized other; encounters with “outsiders” whose behavior and appearance contradicted group teachings; sudden encounters with Biblical (or other source) texts that disproved group indoctrination; and incidents that discredited group teachings.
For instance, one ex-Evangelical recounted how moved he was to see the minister of a church he and his group had thrown stones at carry his own food to feed homeless people under a bridge. It suddenly occurred to him: this was Christian love, not what was practiced by his own group, who strapped placards to their shoulders and blasted that church as “gay-lovers”.
Another told me how he changed his mind in jail after meditating and reading the Koran in English. Accustomed to reading it in Arabic, he suddenly saw discrepancies between the text and his group’s teachings that bothered him.
A third mentioned how his missionary activities brought him into contact with “infidels,” and how he suddenly realized that the same God supposedly gives different commands to different groups, commands that result in members harming themselves and others.
Faith is the shutters over the mind. In each of these cases, it was a smash against reality that dented the shutters and splintered faith.
As one woman told me, the most effective method for getting her Muslim friends to think was to subtly and persistently point them to literal Koranic or Hadith texts, and to real-life events, that invalidated their opinions. No amount of trolling, argument or dispute, she said, would have had the same effect.
What works, then, is showing the fallacy of hate speech (and, by extension, of prejudiced opinions and behavior) through consistent, decent, thoughtful actions.
Never in a hundred years could Facebook’s “artificial” intelligence achieve these results.