
    Online hate speech can be 'quarantined', says study

    Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining, researchers say

    London

    The spread of hate speech and abuse on social media platforms could be tackled using the same 'quarantine' approach deployed to combat malicious software, according to researchers.

    An engineer and a linguist from the University of Cambridge have published a proposal in the journal Ethics and Information Technology that harnesses cybersecurity techniques to give control to those targeted, without resorting to censorship.

    "Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining," said co-author and linguist Dr Stefanie Ullman. 

    "In fact, a lot of hate speech is actually generated by software such as Twitter bots."

    Definitions of hate speech vary by nation, law and platform, and simply blocking keywords is ineffectual: graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats.

    As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended "psychological harm" is inflicted, with armies of moderators required to judge every case.

    To address this, Cambridge language and machine learning experts are using databases of threats and violent insults to build algorithms that can provide a score for the likelihood of an online message containing forms of hate speech.
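
    The paper itself contains no code, but the scoring idea can be illustrated. The sketch below is a toy, not the Cambridge team's system: a hand-weighted phrase database and a hypothetical hate_score function stand in for the trained machine-learning models the researchers describe.

```python
# Toy illustration only -- not the Cambridge system. A real scorer would be
# a model trained on large annotated corpora of threats and violent insults;
# here a small hand-weighted phrase list stands in for it.

# Hypothetical phrase database with illustrative weights in [0, 1].
THREAT_PHRASES = {
    "you deserve to": 0.6,
    "go back to": 0.5,
    "people like you should": 0.7,
}

def hate_score(message: str) -> float:
    """Score the likelihood (0..1) that a message contains hate speech."""
    text = message.lower()
    score = 0.0
    for phrase, weight in THREAT_PHRASES.items():
        if phrase in text:
            # Accumulate evidence while keeping the total below 1.0,
            # in the manner of combining independent probabilities.
            score += weight * (1.0 - score)
    return score

if __name__ == "__main__":
    print(hate_score("Lovely weather today"))                # 0.0
    print(hate_score("People like you should stay offline")) # 0.7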

    As these algorithms get refined, potential hate speech could be identified and "quarantined". 

    Users would receive a warning alert with a "Hate O'Meter" (the hate speech severity score), the sender's name, and an option to view the content or delete it unseen.
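
    The quarantine step described above can be sketched in the same spirit. This reuses the toy hate_score function from the earlier example; the 0.5 threshold and the prompt wording are assumptions, not details from the paper.

```python
# A sketch of the quarantine flow: messages scoring above a threshold are
# held back with a warning, and the recipient decides what happens next.
# Assumes the toy hate_score function defined in the previous sketch.

QUARANTINE_THRESHOLD = 0.5  # Assumed cut-off; in practice this could be tunable.

def deliver(message: str, sender: str) -> None:
    score = hate_score(message)
    if score < QUARANTINE_THRESHOLD:
        print(message)  # Low score: deliver as normal.
        return
    # High score: hold the message, show the warning, and leave the choice
    # to the recipient rather than censoring outright.
    print(f"Hate O'Meter: {score:.0%} | Sender: {sender}")
    choice = input("View the message (v) or delete unseen (d)? ")
    if choice.strip().lower() == "v":
        print(message)
    else:
        print("Deleted unseen.")
```

    The key design point, as the researchers frame it, is that the filter only gates delivery; the final decision always rests with the person targeted.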

    This approach is akin to spam and malware filters, and researchers believe it could dramatically reduce the amount of hate speech people are forced to experience. 

    They are aiming to have a prototype ready in early 2020.

    "Companies like Facebook, Twitter and Google generally respond reactively to hate speech. This may be okay for those who don't encounter it often. For others it's too little, too late," said study co-author and engineer Dr Marcus Tomalin.

    "Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation," he said.

    Former US Secretary of State Hillary Clinton recently told a UK audience that hate speech posed a "threat to democracies", in the wake of many women MPs citing online abuse as part of the reason they will no longer stand for election.

    In a Georgetown University address, Facebook CEO Mark Zuckerberg spoke of "broad disagreements over what qualifies as hate" and argued: "we should err on the side of greater expression".

    The researchers say their proposal is not a magic bullet, but it does sit between the "extreme libertarian and authoritarian approaches" of either entirely permitting or prohibiting certain language online.

     "Our system will flag when you should be careful, but it's always your call. It doesn't stop people posting or viewing what they like, but it gives much needed control to those being inundated with hate," the researchers wrote.

     The project has also begun to look at "counter-speech" -- the ways people respond to hate speech. 

     The researchers intend to feed into debates around how virtual assistants such as 'Siri' should respond to threats and intimidation.
