Years after MusicTechPolicy started calling out YouTube for posting jihadi videos and war porn, Reuters reports in an exclusive story that Google and Facebook claim to be using an algorithm to take down terror videos promoting jihad.
Why did they wait so long? Because Google and Facebook allowed the problem to balloon with full knowledge that terror groups were using their platforms to recruit and grow their organizations all over the world. They did it for the same reason they have problems with illegal drug advertising and human trafficking, among other things: they make money from advertising on these videos right up until they are embarrassed into taking them down.
As Chris Castle says, “The fastest way to get a jihadi video removed from YouTube is to blog about it on MusicTechPolicy.” Of course the real issue is the same for all the illegal videos: how do they get onto YouTube and Facebook in the first place?
Some of the web’s biggest destinations for watching videos have quietly started using automation to remove extremist content from their sites, according to two people familiar with the process.
The move is a major step forward for internet companies that are eager to eradicate violent propaganda from their sites and are under pressure to do so from governments around the world as attacks by extremists proliferate, from Syria to Belgium and the United States.
YouTube and Facebook are among the sites deploying systems to block or rapidly take down Islamic State videos and other similar material, the sources said.
The technology was originally developed to identify and remove copyright-protected content on video sites. It looks for “hashes,” a type of unique digital fingerprint that internet companies automatically assign to specific videos, allowing all content with matching fingerprints to be removed rapidly.
Such a system would catch attempts to repost content already identified as unacceptable, but would not automatically block videos that have not been seen before.
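The hash-matching approach described above can be sketched in a few lines. This is a hypothetical illustration, not any platform's real system: `fingerprint`, `UploadFilter`, and the use of SHA-256 are all assumptions for the example, and production systems use robust perceptual fingerprints that survive re-encoding and trimming rather than a plain cryptographic hash.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Compute a digital fingerprint ("hash") for a video.

    SHA-256 stands in for the proprietary fingerprinting these
    companies actually use; it only illustrates the workflow.
    """
    return hashlib.sha256(video_bytes).hexdigest()

class UploadFilter:
    """Blocks re-uploads whose fingerprint matches known bad content."""

    def __init__(self) -> None:
        self.blocked_hashes: set[str] = set()

    def flag(self, video_bytes: bytes) -> None:
        # Once a video has been identified as unacceptable, record its
        # fingerprint so matching re-uploads can be removed rapidly.
        self.blocked_hashes.add(fingerprint(video_bytes))

    def allow_upload(self, video_bytes: bytes) -> bool:
        # A previously seen video is blocked; a never-before-seen video
        # passes through, exactly the limitation noted above.
        return fingerprint(video_bytes) not in self.blocked_hashes
```

As the example makes plain, the filter is only as good as its blocklist: a genuinely new extremist video sails through until a human flags it.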
Extremist content exists on a spectrum, Hughes said, and different web companies draw the line in different places.
Most have relied until now mainly on users to flag content that violates their terms of service, and many still do. Flagged material is then individually reviewed by human editors who delete postings found to be in violation.
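The flag-then-review workflow just described can be sketched as a simple queue. All names here (`ReviewQueue`, `user_flag`, `human_review`) are illustrative assumptions, not any platform's actual moderation API.

```python
from dataclasses import dataclass, field

@dataclass
class FlaggedPost:
    post_id: str
    reason: str

@dataclass
class ReviewQueue:
    """Hypothetical sketch of user flagging followed by human review."""
    queue: list = field(default_factory=list)
    deleted: set = field(default_factory=set)

    def user_flag(self, post_id: str, reason: str) -> None:
        # Users flag content they believe violates the terms of service.
        self.queue.append(FlaggedPost(post_id, reason))

    def human_review(self, violates_terms) -> None:
        # A human editor reviews each flagged post individually and
        # deletes only those found to be in violation.
        while self.queue:
            post = self.queue.pop(0)
            if violates_terms(post):
                self.deleted.add(post.post_id)
```

The bottleneck is obvious from the structure: nothing is removed until a user flags it and an editor gets to it, which is why hash-based automation matters for repeat uploads.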