Facebook tightens policies to prevent suicide, self-harm
The social media giant has been working on suicide prevention measures for several years now, and in 2017 it introduced its artificial intelligence (AI)-based suicide prevention tools.
By: migrator
Update: 2019-09-11 10:41 GMT
San Francisco
"Earlier this year, we began hosting regular consultations with experts from around the world to discuss some of the more difficult topics associated with suicide and self-injury. These include how we deal with suicide notes, the risks of sad content online and newsworthy depiction of suicide," Antigone Davis, Global Head of Safety, Facebook, wrote in a blog post on Tuesday.
"...We've made several changes to improve how we handle this content. We tightened our policy around self-harm to no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm, even when someone is seeking support or expressing themselves to aid their recovery," Davis added.
Facebook-owned Instagram started hiding self-harm images behind "sensitivity screens" this year.
The photo-sharing platform also prevents self-harm content from appearing in its "Explore" tab, and it has taken steps to prohibit content that may promote eating disorders.