
June 16, 2017

Facebook to Use AI to Remove Extremist Content From its Platform


Facebook is turning to artificial intelligence to help eradicate terrorist content from its social network.

The social media firm has been under fire, particularly in France, Germany and the U.K., for failing to remove extremist content from its site. The three countries, which have suffered civilian casualties at the hands of Islamist extremists, have demanded that Facebook and other social networks, such as Twitter, step up their efforts to rid their platforms of extremist content. Facebook has even been threatened with fines if it does not fix the ongoing problem.

The company’s announcement that it will use AI as part of its process for finding and removing such content may help appease those critics.

“We want to find terrorist content immediately, before people in our community have seen it,” Facebook director of global policy management Monika Bickert and counter-terrorism policy manager Brian Fishman say in a blog post. “Already, the majority of accounts we remove for terrorism we find ourselves. But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook. Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook.”

For now, Facebook is focusing its energies on dealing with content about ISIS, Al Qaeda and their affiliates, although its efforts will “expand to other terrorist organizations in due course.”

Aside from its technical solutions, Facebook is also relying on help from terrorism and safety specialists.

“At Facebook, more than 150 people are exclusively or primarily focused on countering terrorism as their core responsibility,” the blog post reads. “This includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers. Within this specialist team alone, we speak nearly 30 languages.”

But Facebook says it also needs the aid of its nearly two billion users in the form of reports and reviews. In other words, the social network needs its users to flag anything inappropriate that they see.

The social networking firm is also partnering with other tech companies to battle hate speech and is working with government bodies that offer expertise Facebook simply does not have.

“We want Facebook to be a hostile place for terrorists. The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it’s too late,” the blog post reads. “We are absolutely committed to keeping terrorism off our platform, and we’ll continue to share more about this work as it develops in the future.”

Facebook’s lengthy blog post, with in-depth details on its plan to combat terrorist content, can be read in its entirety here.



Jennifer Cowan is the Managing Editor for SiteProNews.
