We’re told that Article 13 will ‘break the internet’, and one way it would supposedly do so is by forcing YouTube and similar sites to shut down, block EU users from uploading to or accessing the site, or impose some other draconian measure. This would happen, the argument goes, because there is far too much content being uploaded – e.g. 400 hours of video uploaded to YouTube every minute, according to Google’s own data – and Article 13 makes content-sharing sites liable for all of it. Opponents claim this is insurmountable because:
- It’s too much content for a human moderating team to manually check every upload.
- It’s too complex a task for a computer to automatically check without causing many false positives (i.e. videos judged to infringe when a human assessment would rule otherwise).
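The scale claim in the first bullet is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses the 400-hours-per-minute figure quoted above; the 8-hour reviewer shift is an illustrative assumption of mine, not a figure from the text.

```python
# Back-of-envelope check of the "too much for humans" claim.
HOURS_UPLOADED_PER_MINUTE = 400  # figure quoted from Google's own data

minutes_per_day = 24 * 60
hours_uploaded_per_day = HOURS_UPLOADED_PER_MINUTE * minutes_per_day

# Assume one reviewer watches at 1x speed for an 8-hour shift
# (an illustrative assumption, not a number from the article).
reviewers_needed = hours_uploaded_per_day / 8

print(f"{hours_uploaded_per_day:,} hours uploaded per day")
print(f"~{reviewers_needed:,.0f} full-time reviewers to watch it all")
```

Even with generous assumptions, that is tens of thousands of full-time reviewers just to watch everything once, which is why nobody seriously proposes purely manual checking.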
These two points are true, at least for a massive general-purpose site such as YouTube, and perhaps even for smaller ones such as Tumblr. However, what Article 13’s opponents miss is that it is not a choice between one or the other: there are other tools, processes, and policies available to augment content checking by humans and computers. For example, why is Wikipedia (another top-10 website, like YouTube) not rife with copyright infringement, even though you don’t need an account to edit it? Why do we not keep hearing tales of Bandcamp being full of other people’s music, even though it is explicitly designed to let users distribute music to the public? It turns out that keeping your platform mostly free of infringement is not as impossible as some would have you believe.
Two weeks ago, Dr. James Heilman discovered something strange. The Canadian emergency room physician and avid Wikipedia contributor noticed that DrugBank, an online database for drug information, was copying text directly from Wikipedia. Although Heilman considers Wikipedia’s medical content to be of surprisingly good quality, he was concerned—because he didn’t just find DrugBank copying and citing Wikipedia; he had also found several examples of Wikipedia likewise copying and citing DrugBank.
When you ask Amazon’s Alexa, “What is Wikipedia?” it’ll tell you this: “Wikipedia is a multilingual, web-based, free encyclopedia based on a model of openly editable content.” Alexa took this line directly from Wikipedia’s entry on Wikipedia, as it does with many of its answers. Perhaps what it should have said was this: “Wikipedia is the source from which I take much of my information, without credit, contribution, or compensation.”
That’s about to change. Or is it? Amazon recently donated $1 million to the Wikimedia Endowment, a fund that keeps Wikipedia running, as “part of Amazon’s and CEO Jeff Bezos’ growing work in philanthropy,” according to CNET. It’s being framed as a “gift,” one that—as Amazon puts it—recognizes their shared vision to “make it easier to share knowledge globally.” Amazon also noted the ability for users to easily donate to Wikimedia through the Alexa Donations feature, with the voice command “Alexa, donate to Wikipedia.”…
But it’s not just the fact that this donation is, in the scheme of things, paltry. It’s that this “endowment” is dwarfed by what Amazon and its ilk get out of Wikipedia—figuratively and literally.
[An excellent post by songwriter and BASCA chair Crispin Hunt on the remarkable disinformation campaign being waged by legacy tech companies against safe harbor reform in Europe.]
A recent article by Rhett Jones, which appeared in Gizmodo, perfectly encapsulated the feverish disinformation campaign around Article 13 being undertaken by US tech companies and their minions. So I thought it would be worth taking a few minutes to help Parliamentarians to closely examine it.
Let’s start with the title: The End of All That’s Good and Pure About the Internet.
One might be forgiven for thinking that was written ten years ago when the promise of the internet was still bright – and not blighted by things like revenge porn, doxing, phishing, sex trafficking of minors, rampant theft, fake news, interference in elections, massive privacy violations, etc.
But no: Rhett Jones’s article, and the entire campaign against Article 13, are very much premised on this idea that the internet as we know it is “good and pure,” and that any change to its governance would end this wondrous medium.
Google’s top policy executive [Karan Bhatia] is reorganizing the company’s worldwide influence operation, according to an internal email obtained by Axios.
Why it matters: The long-planned shake-up comes as the search giant faces newly hostile regulators around the world.
The EU has finally settled on the wording of its Digital Single Market copyright reform package, a three-years-in-the-making effort, greeting the agreement with a sizzling rebuke of the “misinformation campaigns” around the measures….
In a press conference today announcing the measures, MEP and Conservative legal affairs spokesman Sajjad Karim said the process had highlighted a disturbing development in the “political culture”.
“The ability of some of the platforms to carry out campaigns [against the legislation] is a good thing,” Karim said. “But the way some of these have been carried out really has been against the grain of how a democratic society should function.”
Individual staff members had been targeted, he said, by “elements that have misled the public about what we’re trying to achieve, and we’re sure will mislead the public as to what we have actually achieved. It strengthens our resolve to make sure we don’t allow European citizens to fall victim to that sort of misinformation.”
It’s hard to believe that after a good ten years of being called out, YouTube still–still–cannot manage to stop neo-Nazi and white supremacist material from getting posted on its network. We’ve been calling out YouTube for these inexcusable failures again and again and again. And yet they keep recycling the safe harbor as an alibi–and they’re doing it again in Europe on Article 13.
I can understand that YouTube doesn’t want to “censor” users, and there may be close cases from time to time. For example, I could understand why YouTube CEO Susan Wojcicki might not want to take down videos from Seeking Arrangement that encourage young women to enter a “sugar daddy” relationship to pay for college and health care.
Sure, one of her Google colleagues was murdered by a woman he met through Seeking Arrangement. Maybe Seeking Arrangement is a close case, particularly for a company that opposed the Stop Enabling Sex Traffickers Act.
But you know what’s not a close case? It’s right there in the title of the song–“Who Likes a N—“. You would think that one would get picked up in a simple text filter of debased language. But it wasn’t ten years ago and it still isn’t. Not a close case.
And then there’s “Stand Up and Be Counted” by the White Riders. It’s not that hard to figure out by listening to any of the many versions of this song that it’s a recruiting song for the Ku Klux Klan. And it’s not that YouTube doesn’t know it–this version of the hate song has clearly been filtered by YouTube–oh, sorry. Not by YouTube, but by the “YouTube community.” But why is it that a KKK recruiting song doesn’t violate YouTube’s terms of service if it doesn’t shock Susan Wojcicki’s conscience?
Today David Lowery called out YouTube and CD Baby for allowing hate rock to be distributed on their platforms. Within hours, CD Baby pulled the account. But not YouTube.
Let’s understand a couple of things. First, this is not hard. The Anti-Defamation League and the Southern Poverty Law Center maintain actual lists of these bands. Both Music Tech Policy and The Trichordist have been hammering this issue for years. Simple word searches could accomplish a large percentage of the task–the N-word, KKK recruiting songs, and images of Adolf Hitler are not close cases.
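The “simple word searches” described above can be sketched in a few lines. This is a minimal illustration only: the blocklist terms here are placeholders drawn from the examples in this post, not the actual ADL or SPLC list contents, and a real system would queue matches for human review rather than act on them automatically.

```python
import re

# Illustrative blocklist -- placeholders standing in for the published
# ADL / SPLC lists referenced above, not the real list contents.
BLOCKLIST = {"kkk", "white riders", "stand up and be counted"}

def flag_title(title: str, blocklist: set[str] = BLOCKLIST) -> list[str]:
    """Return the blocklist terms found in an upload's title.

    Matches are whole-word and case-insensitive; a non-empty result
    means the upload should be routed to a human reviewer.
    """
    normalized = title.lower()
    return [term for term in blocklist
            if re.search(r"\b" + re.escape(term) + r"\b", normalized)]

print(flag_title("Stand Up and Be Counted - White Riders"))
print(flag_title("A harmless cat video"))
```

Even a filter this crude catches the examples named in this post, which is the author’s point: the obvious cases do not require sophisticated machine learning, just the willingness to check.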
And let’s understand something else. When users post movies, television shows, and recorded music on YouTube, all of those materials have already gone through some kind of legal review for standards and practices. That doesn’t mean there’s no fair use or that there are no parodies. It does mean that a human has thought about it, because free expression is a judgment call.
Free expression is deserving of human examination. You cannot create a machine that will do this for you. You cannot rely on crowdsourcing to stop all uses of these vile terms and images–because in every crowd there’s someone who thinks it’s all just fine. That’s why they’re called mobs.
YouTube, Facebook, and all the Article 13 opponents are in fact using a complete spectrum of review. The problem is that they are cost-shifting the human review onto artists and, to a lesser extent, their users, for two reasons. First and foremost, they hope not to be caught. That’s what the safe harbor is really all about. The value gap is just a part of it–the other part is the values gap. How do these people sleep at night?
But I firmly believe that the real reason that they shift the human cost onto those who can least afford it is because they’re too cheap to pay for it themselves. They are willing to take the chance because getting caught so far has been a cost of doing business.
The real cost of their business is the corrosive effect that they have on our discourse, our families and our children. There has to be a way to make YouTube responsible for their choices–and CD Baby showed today that it’s not only possible but necessary.
If YouTube and their paid cronies want to try to convince legislators that they deserve special protection, they need to live up to the standard that CD Baby set today. And they need to do that before they get any further special treatment.
As we’ve said for years, the safe harbor is a privilege not an alibi.