We’re told that Article 13 will ‘break the internet’, and one way it would supposedly do so is by forcing YouTube and similar sites to shut down, block EU users from uploading to or accessing the site, or impose some other draconian restriction. This would allegedly happen because far too much content is being uploaded – 400 hours of video per minute to YouTube alone, according to Google’s own figures – and Article 13 makes content-sharing sites liable for all of it. Opponents claim this is insurmountable because:
- There’s too much content for a human moderating team to manually check every upload.
- The task is too complex for a computer to check automatically without producing many false positives (i.e. videos judged to infringe when a human assessment would rule otherwise).
Both points are true, at least for a massive general-purpose site such as YouTube, and perhaps even for smaller ones such as Tumblr. What Article 13’s opponents miss, however, is that this is not a choice between one or the other: there are other tools, processes, and policies available to augment content checking by humans and computers. For example, why is Wikipedia (another top-10 website, like YouTube) not rife with copyright infringement, even though you don’t need an account to edit it? Why do we not keep hearing tales of Bandcamp being full of other people’s music, even though it is explicitly designed to let users distribute music to the public? It turns out that keeping your platform mostly free of infringement is not as impossible as some would have you believe.