The company recently started experimenting with AI-based language understanding to analyze text that may be advocating terrorism.
Bickert and Fishman emphasized that the company's position on the issue is simple: terrorism has no place on Facebook.
The company says it has expanded its use of artificial intelligence to identify possible terrorist postings and even block or remove them without human intervention. In rare cases, when it uncovers evidence of imminent harm, it promptly informs the authorities.
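Facebook has not published details of this pipeline, but a minimal sketch of how an automated text classifier with tiered enforcement might work is shown below, using scikit-learn. The tiny training set, the thresholds, and the `triage` routing are illustrative assumptions, not the company's actual system.

```python
# Minimal sketch of an automated text classifier for policy-violating posts.
# The toy training set, threshold values, and routing logic are illustrative
# assumptions; Facebook's real models and pipeline are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples: 1 = violating, 0 = benign.
train_texts = [
    "join our fight, attack the unbelievers",   # violating (made up)
    "support the martyrs, send money for arms", # violating (made up)
    "great recipe for chocolate cake",          # benign
    "watch the football highlights tonight",    # benign
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression()
model.fit(vectorizer.fit_transform(train_texts), train_labels)

def score_post(text: str) -> float:
    """Return the model's probability that a post violates policy."""
    return model.predict_proba(vectorizer.transform([text]))[0][1]

def triage(text: str) -> str:
    """Route a post by score: auto-remove, human review, or allow."""
    p = score_post(text)
    if p > 0.9:
        return "remove"        # high confidence: act without human review
    if p > 0.5:
        return "human_review"  # uncertain: escalate to the review team
    return "allow"
```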
"We want to answer those questions head on".
In February, Mark Zuckerberg shared a 6,000-word manifesto on how Facebook is using AI to disrupt terrorist activity. "This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site," the company explains. These signals will be fed into a machine-learning system that will, over time, learn to detect similar posts.
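The re-upload blocking Facebook describes is, in essence, content fingerprinting: hash every removed video, then check new uploads against the list of banned hashes. Production systems rely on perceptual hashes that survive re-encoding and cropping; the sketch below is an assumption rather than Facebook's implementation, and it uses exact SHA-256 digests for simplicity, so it only catches byte-identical files.

```python
# Sketch of re-upload prevention via content hashing. Real systems use
# perceptual hashes robust to re-encoding and cropping; this simplified
# version uses exact SHA-256 digests, so only identical files match.
import hashlib

banned_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Compute a stable fingerprint for an uploaded file."""
    return hashlib.sha256(data).hexdigest()

def register_removed_media(data: bytes) -> None:
    """Record a removed propaganda video so re-uploads can be blocked."""
    banned_hashes.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Reject any upload whose fingerprint matches removed content."""
    return fingerprint(data) not in banned_hashes
```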
Because terrorists often operate in clusters, Facebook is also employing algorithms to identify related material across Pages, groups, posts, and profiles that have already been linked to terror activity.
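One plausible reading of "algorithms to identify related material" is a fan-out over the graph of links between flagged entities and their neighbors. The sketch below is an assumption along those lines: a breadth-first walk from already-flagged seeds that queues nearby Pages, groups, and profiles for review. The graph shape and hop limit are invented.

```python
# Sketch of cluster fan-out: starting from entities already flagged for
# terror activity, walk the graph of links (admins, members, shares) and
# surface nearby entities for review. Graph and hop limit are assumptions.
from collections import deque

def related_entities(graph: dict[str, set[str]],
                     seeds: set[str],
                     max_hops: int = 2) -> set[str]:
    """Breadth-first walk from flagged seeds, up to max_hops links away."""
    found = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(node, set()):
            if neighbor not in found:
                found.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return found - seeds  # new candidates, excluding the original seeds

# Hypothetical graph: a flagged page links to a group and two profiles.
graph = {
    "page:propaganda": {"group:recruiting", "profile:a"},
    "group:recruiting": {"profile:a", "profile:b"},
}
print(related_entities(graph, {"page:propaganda"}))
```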
The blog post also highlighted Facebook's efforts to fund and train anti-extremist groups to produce counternarratives, or online content created to undercut terrorist propaganda and dissuade people from joining terrorist groups.
"We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account", it said. In a post Thursday, Facebook detailed several initiatives to help combat terrorism efforts on its network.
Facebook is also working to apply these techniques across all its platforms, including WhatsApp and Instagram. Other questions Schrage said Facebook would address include the removal of controversial posts and images, fake news, and the effect of social media on democracy.
Britain's interior ministry welcomed Facebook's efforts but said technology companies needed to go further. "Figuring out what supports terrorism and what does not isn't always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context," the company acknowledged. Last month, Prime Minister Theresa May of Britain announced that she would challenge internet companies - including Facebook - to do more to monitor and remove extremist content.
Facebook said it has 150 people on its counterterrorism team, constantly fine-tuning tactics for taking down terrorist content.
Government officials have accused Facebook, Google parent Alphabet Inc., and others of complacency over the proliferation of inappropriate content - in particular, posts or videos deemed extremist propaganda or communication - on their social networks. This new emphasis from Zuckerberg follows the uproar over Facebook's role in the proliferation of false news during the U.S. election campaign a year ago, as well as the spread of extreme content, such as videos of murder, posted to Facebook.