Facebook Likely to Flag Inappropriate Livestreams

It is well-known that Facebook has long been vigilant about keeping your News Feed free of inappropriate content. That’s relatively simple when you’re talking about material that can be reviewed in full after it’s posted, but what happens if something goes wrong during a livestream?
A new initiative is reportedly in the works to build up the social network’s flagging system for offensive content in a particularly difficult area: Facebook Live.
The social networking giant has previously relied in part on a system in which users report offensive material, which Facebook employees then check against the company's community standards.
But at a recent roundtable at Facebook HQ in Menlo Park, Joaquin Candela, the company’s director of applied machine learning, said that they’re testing artificial intelligence that can detect offensive content.
The new flagging protocol is an algorithm that detects nudity, violence, or other content that violates the company's policies.
A similar algorithm was tested back in June to screen videos posted in support of extremist groups; going forward, it will be applied to Facebook Live broadcasts to keep violent events and amateur erotica off the network.
According to Candela, the AI system is still being honed, and it will likely act as an alert for human reviewers rather than as judge, jury, and executioner of explicit streams.
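To make the "alert, not executioner" design concrete, here is a minimal sketch of such a pipeline. Everything in it is an illustrative assumption, not Facebook's actual system: the `Clip` type stands in for a segment of a livestream, `violation_score` stands in for the output of a trained classifier, and the `0.8` threshold is arbitrary. The key point the sketch shows is that high-scoring content is queued for human review rather than removed automatically.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Clip:
    """A segment of a livestream. The score stands in for a real
    classifier's estimated probability of a policy violation."""
    clip_id: str
    violation_score: float

@dataclass
class ReviewQueue:
    """Flagged clips wait here for a human moderator's decision."""
    pending: List[str] = field(default_factory=list)

    def enqueue(self, clip_id: str) -> None:
        self.pending.append(clip_id)

def flag_stream(clips: List[Clip], queue: ReviewQueue,
                threshold: float = 0.8) -> List[str]:
    """Alert-only policy: flag suspicious clips for humans, never
    delete them automatically. Returns the flagged clip ids."""
    flagged = []
    for clip in clips:
        if clip.violation_score >= threshold:
            queue.enqueue(clip.clip_id)   # alert a human reviewer
            flagged.append(clip.clip_id)  # nothing is removed here
    return flagged
```

The design choice worth noticing is that `flag_stream` has no code path that deletes anything: the model's only power is to route content to the review queue, which matches the role Candela describes.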
As helpful as an AI flagging system might be, there are still major questions about what should and shouldn't be considered inappropriate. Facebook came under fire back in September after it removed a famous photograph from the Vietnam War, and that decision was made under the old system, with a human moderator making the call.
Yann LeCun, Facebook’s director of AI research, declined to give a specific comment but did address censorship in broader terms.

Note that this AI flagging system is only being tested for now; it is not yet in use on the Facebook you scroll through every day. Still, it is likely coming soon, once the company has determined that an AI can be trusted with our most sensitive content.
