YouTube’s AI tech flags 83% of extremist videos taken down
October 19, 2017 04:21 pm
YouTube has revealed that more than 80 per cent of violent and extremist videos taken down in September were flagged up by its new spam-fighting artificial intelligence tools.
The Google-owned company began applying machine learning algorithms to its videos in June, so that it could quickly spot hateful content and flag it to human reviewers.
“This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user,” Kent Walker, Google’s senior vice-president and general counsel, wrote in the FT in June.
“We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove such content.”
Extremism-identifying technology is part of a broad push by major US tech groups to address criticism from US and European governments that they should be doing more to tackle violent propaganda. Last month, Twitter announced it had taken down nearly 300,000 terrorist accounts in the first six months of this year, almost all of which were spotted by its algorithms.
Facebook said in June it had invested in automated technology to improve detection and takedown rates of extremist accounts.

According to YouTube’s latest figures, algorithms were responsible for flagging more than 83 per cent of the extremist videos taken down in September, up 8 percentage points on the previous month.
The total number of videos being flagged has also risen since machine learning was adopted, a YouTube spokesperson said, adding that human reviewers still looked at every piece of content flagged by AI before it was removed.

The algorithms work by crawling YouTube for various signals, including tags, titles, images and colour schemes, pulling in content they judge potentially problematic.
This content is then escalated to human reviewers, who weigh nuance and apply their judgment to determine whether it is intended to glorify violence or merely to document it.

A spate of terrorist attacks in London and Manchester this year has increased the pressure on tech companies to show they are cracking down on the proliferation of terrorist content.
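The two-stage process described here, algorithmic flagging on metadata signals followed by mandatory human review, can be sketched as a minimal pipeline. Every name, signal and threshold below is a hypothetical illustration; YouTube’s actual classifiers are not public:

```python
# Minimal sketch of a "algorithm flags, human reviews" moderation pipeline.
# All signals, scores and thresholds are invented for illustration.

def classifier_score(video):
    """Score a video on simple metadata signals (tags, title).
    Returns a value in [0, 1]; higher means more likely problematic."""
    suspicious_tags = {"extremist", "violence"}  # hypothetical tag list
    score = 0.0
    if suspicious_tags & set(video.get("tags", [])):
        score += 0.6
    if "attack" in video.get("title", "").lower():
        score += 0.3
    return min(score, 1.0)

def moderate(videos, threshold=0.5):
    """Stage 1: flag videos whose score meets the threshold.
    Stage 2: every flagged video goes to a human review queue;
    nothing is removed automatically."""
    review_queue = [v for v in videos if classifier_score(v) >= threshold]
    return review_queue

videos = [
    {"title": "News report: attack in city centre", "tags": ["news"]},
    {"title": "Cat compilation", "tags": ["pets"]},
    {"title": "Join us", "tags": ["extremist"]},
]
flagged = moderate(videos)  # only the last video crosses the threshold
```

Note that the news report scores below the threshold here, echoing the context problem the article raises: the same footage can be reporting or glorification, which is why humans, not the classifier, make the final removal decision.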
Andrew Parker, head of MI5, told journalists in London on Tuesday that technology groups had an ethical responsibility to work with the authorities to tackle extremist content online. He highlighted the way terrorists could access dangerous information on the web, such as instructions for building a home-made bomb, as well as their use of encrypted apps.
“I am calling attention to the fact that technology continues to accelerate so the response to it and the way those partnerships work need to keep advancing if we are to continue to be able to tackle the problem,” he said.
“I believe there is a responsibility on the companies that offer those services to help governments to be able to stop the worst excesses of human criminal behaviour, particularly terrorism.”
- The Financial Times