As Facebook sought to become the world’s dominant social media service, it struck agreements allowing phone and other device makers access to vast amounts of its users’ personal information.
Facebook has reached data-sharing partnerships with at least 60 device makers — including Apple, Amazon, BlackBerry, Microsoft and Samsung — over the last decade, starting before Facebook apps were widely available on smartphones, company officials said. The deals allowed Facebook to expand its reach and let device makers offer customers popular features of the social network, such as messaging, “like” buttons and address books.
But the partnerships, whose scope has not previously been reported, raise concerns about the company’s privacy protections and compliance with a 2011 consent decree with the Federal Trade Commission.
They dangle fines of up to 4 per cent of annual turnover as the maximum possible penalty – €3.7bn, €1.3bn, €1.3bn, and €1.3bn, respectively – though regulators have stressed they won’t be handing out top-level fines willy-nilly.
If you have been to any media conference or read any trade magazine over the past several years, you will know that ‘big data’ was going to be the answer to all of the music industry’s woes.
The idea was simple: collecting large amounts of data and profiling the people generating it (citizens) could show you how to extract more value from your customers.
Essentially, big data was pitched and sold to the music industry as a panacea for its fan-engagement problems. … While big data seems very attractive, using personal data and profiling fans may in fact turn out to be, like oil and plastics, already outdated and toxic.
A federal judge in California has ruled that Facebook can be sued in a class-action lawsuit brought by users in Illinois who say the social network improperly used facial recognition technology on their uploaded photographs.
The plaintiffs are three Illinois Facebook users who sued under a state law that says a private entity such as Facebook can’t collect and store a person’s biometric facial information without their written consent. The law, known as the Biometric Information Privacy Act, also says that information that uniquely identifies an individual is, in essence, their property. The law prohibits a private entity from selling, leasing, trading or otherwise profiting from a person’s biometric information.
U.S. District Judge James Donato ruled that the lawsuit can proceed as a class action representing potentially millions of Facebook users in Illinois. The judge is based in San Francisco where the case had been moved at Facebook’s request.
Last month, CreativeFuture asked you, our followers, what you thought about platform responsibility. Little did we know that, in the meantime, the issue would start taking over the front pages of our newspapers and websites!
In a nutshell, the issue is whether Google, Facebook, and their Silicon Valley peers should take responsibility for the ways their platforms are used to violate our laws and harm society.
Even before the House and Senate passed landmark legislation to demand accountability from the tech giants and even before Facebook’s Cambridge Analytica mess exploded, we asked your views on a few simple questions that came down to one thing: do you believe that Google and Facebook should be more responsible?
The answer, overwhelmingly, was that you do – and you had a lot to add in response. Here are just some of your comments:
“The organizations who own these platforms make enormous profits. They have a responsibility to make sure the platforms are not being used to harm others.”
“They have the greatest ability to do so. And a moral responsibility. Just because it’s a newer technology doesn’t exempt them.”
“Because if they are able to control it, and I believe that they can, then they should be held accountable and responsible if they don’t.”
“They are providing the service that is being used for these malicious acts. They are responsible! They need to find a solution and be held accountable!”
“Violations of the law should be prosecuted. To avoid prosecution, they should take proactive steps to prevent violations.”
“They created these platforms, they should be responsible for them. They are beyond wealthy from them and can afford to police them. U.S. laws should apply everywhere in the U.S., including [the internet]!”
“Times change, services change, service providers change. Rules must keep up with changes.”
“Hostile foreign governments are using internet social platforms to publish untrue propaganda in order to destabilize our nation … if they can’t or won’t [monitor their platforms], they should be heavily fined and shut down. It is their responsibility for doing business in this country.”
“Responsibility is part of having a business.”
“[Google and Facebook] are no different from any other corporation which has the responsibility not to enable breaking the law. They are complicit and just [as] guilty as those breaking the law.”
“I can’t believe we even have to ask this question. I am sick and tired of corporations bearing no responsibility for the effects of their services on people. If a crime is occurring and the corporation looks the other way, that cannot be allowed any longer.”
“They don’t want the responsibility of accountability because complying would eat into profits with no returns. So, it will NEVER happen unless it is legislated.”
“The internet has become perhaps the single most important source of information and communication in the world. [These companies] cannot just rake in profits and not be responsible for what they have created.”
This week, on April 10 and 11, Facebook’s CEO Mark Zuckerberg will testify twice before Congress on the issues facing his company, and Silicon Valley generally. We expect that Zuckerberg will be very well prepped by his army of lawyers. We anticipate that he will try to reassure Congress that Facebook is doing all it can to (1) protect the privacy of its users; (2) prevent foreign influence on its advertising networks; and (3) stop rampant violations of the law from being carried out on its platform.
But Congress should not settle for head-pats and platitudes. They need to ask some hard and direct questions. We hope they will include the following…
Prior to 1996, Americans had a well-established, offline right to privacy based on the Fourth Amendment of the U.S. Constitution and several strong federal privacy statutes passed in 1974, 1974, 1978, 1984, 1986, 1988, 1994, and 1996.
In 1996, Congress also unwittingly and effectively set in motion an exceptional erosion of Americans’ established offline right to privacy when it passed the Telecom Act, which specially exempted and immunized Internet platforms from normal governmental accountability and consumer protection responsibilities.
Over two decades, this Wild West Internet industrial policy, as interpreted by Internet platforms and the courts, has eroded bit by bit Americans’ established offline right to privacy, leaving a de facto U.S. online privacy piracy policy today.
In practice, America’s Internet law has transmogrified into open season on American consumers’ personal data. That’s because the U.S. Government effectively immunized Internet platforms, in advance, from civil liability for the collection, use, and monetization of personal information online that otherwise could be illegal offline.
[This is a highly insightful post by professors at the Terry College of Business at the University of Georgia (ARW readers will recall that Terry hosted the first Artist Rights Symposium a couple of months ago, where I spoke, and of course is where David Lowery teaches). Gunton & Hendrix make the case for regulating Facebook, which makes for required reading alongside Jonathan Taplin’s work. (Prof. Taplin keynoted the Artist Rights Symposium.)]
On Wednesday, Mark Zuckerberg finally ended days of silence and set out on a media tour to explain Facebook’s role in the Cambridge Analytica data scandal. CNN’s Laurie Segall asked him if he was worried about Facebook facing government regulation after what he admitted was a massive breach of trust between the platform and its users. “I actually am not sure we shouldn’t be regulated,” he said. “I think in general technology is an increasingly important trend in the world. I think the question is more what is the right regulation rather than ‘yes or no should we be regulated?’”
It is certainly time for a robust international conversation about how best to regulate social media platforms, and data privacy more generally. Major technology companies — including Facebook, Google, Twitter, SNAP and others — define the information ecosystem in much of the world. Barely regulated and rarely held accountable, these companies are completely transforming the public sphere. While these platforms present new opportunities to connect people around the world, they also create new spaces for bad actors that wish to spread misinformation, encourage terrorism or incite violence, engage in online harassment, steal personal data, restrict free speech and suppress dissent.
As this urgent conversation gets underway, here are some factors to consider when imagining new regulations…