Facebook under fresh political pressure as UK watchdog calls for “ethical pause” of ad ops

The UK’s privacy watchdog revealed yesterday that it intends to fine Facebook the maximum possible (£500k) under the country’s 1998 data protection regime for breaches related to the Cambridge Analytica data misuse scandal.

But that’s just the first of the regulatory missiles now being directed at the platform and its ad-targeting methods — and indeed at the wider big data economy’s corrosive undermining of individuals’ rights.

Alongside yesterday’s update on its investigation into the Facebook-Cambridge Analytica data scandal, the Information Commissioner’s Office (ICO) has published a policy report — entitled Democracy Disrupted? Personal information and political influence — in which it sets out a series of policy recommendations related to how personal information is used in modern political campaigns.

In the report it calls directly for an “ethical pause” around the use of microtargeting ad tools for political campaigning — to “allow the key players — government, parliament, regulators, political parties, online platforms and citizens — to reflect on their responsibilities in respect of the use of personal information in the era of big data before there is a greater expansion in the use of new technologies”.

The watchdog writes [emphasis ours]:

Rapid social and technological developments in the use of big data mean that there is limited knowledge of – or transparency around – the ‘behind the scenes’ data processing techniques (including algorithms, analysis, data matching and profiling) being used by organisations and businesses to micro-target individuals. What is clear is that these tools can have a significant impact on people’s privacy. It is important that there is greater and genuine transparency about the use of such techniques to ensure that people have control over their own data and that the law is upheld. When the purpose for using these techniques is related to the democratic process, the case for high standards of transparency is very strong.

Engagement with the electorate is vital to the democratic process; it is therefore understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes. The public have the right to expect that this takes place in accordance with the law as it relates to data protection and electronic marketing. Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default. This could have a damaging long-term effect on the fabric of our democracy and political life.

It also flags a number of specific concerns attached to Facebook’s platform and its impact upon people’s rights and democratic processes — some of which are sparking fresh regulatory investigations into the company’s business practices.

“A significant finding of the ICO investigation is the conclusion that Facebook has not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign,” it writes. “Whilst these concerns about Facebook’s advertising model exist generally in relation to its commercial use, they are heightened when these tools are used for political campaigning. Facebook’s use of relevant interest categories for targeted advertising and its Partner Categories Service are also cause for concern. Although the service has ceased in the EU, the ICO will be looking into both of these areas, and in the case of partner categories, commencing a new, broader investigation.”

The ICO says its discussions with Facebook for this report focused on “the level of transparency around how Facebook user data and third party data is being used to target users, and the controls available to users over the adverts they see”.

Among the concerns it raises about what it dubs Facebook’s “very complex” online targeted advertising model are [emphasis ours]:

Our investigation found significant fair-processing concerns both in terms of the information available to users about the sources of the data that are being used to determine what adverts they see and the nature of the profiling taking place. There were further concerns about the availability and transparency of the controls offered to users over what ads and messages they receive. The controls were difficult to find and were not intuitive to the user if they wanted to control the political advertising they received. Whilst users were informed that their data would be used for commercial advertising, it was not clear that political advertising would take place on the platform.

The ICO also found that despite a significant amount of privacy information and controls being made available, overall they did not effectively inform the users about the likely uses of their personal information. In particular, more explicit information should have been made available at the first layer of the privacy policy. The user tools available to block or remove ads were also complex and not clearly available to users from the core pages they would be accessing. The controls were also limited in relation to political advertising.

The company has been criticized for years for confusing and complex privacy controls. But during the investigation, the ICO says it was also not provided with “satisfactory information” from the company to understand the process it uses for determining what interest segments individuals are placed in for ad targeting purposes.

“Whilst Facebook confirmed that the content of users’ posts was not used to derive categories or target ads, it was difficult to understand how the different ‘signals’, as Facebook called them, built up to place individuals into categories,” it writes.

Similar complaints of foot-dragging responses to information requests related to political ads on its platform have also been directed at Facebook by a parliamentary committee that’s running an inquiry into fake news and online disinformation — and in April the chair of the committee accused Facebook of “a pattern of evasive behavior”.

So the ICO is not alone in feeling that Facebook’s responses to requests for specific information have lacked the specific information being sought. (CEO Mark Zuckerberg also annoyed the European Parliament with highly evasive responses to their highly detailed questions this spring.)

Meanwhile, a European media investigation in May found that Facebook’s platform allows advertisers to target individuals based on interests related to sensitive categories such as political beliefs, sexuality and religion — which are categories that are marked out as sensitive information under regional data protection law, suggesting such targeting is legally problematic.

The investigation found that Facebook’s platform enables this type of ad targeting in the EU by making sensitive inferences about users — inferred interests including communism, social democrats, Hinduism and Christianity. And its defense against charges that what it’s doing breaks regional law is that inferred interests are not personal data.

However the ICO report sends a very chill wind rattling towards that fig leaf, noting “there is a concern that by placing users into categories, Facebook have been processing sensitive personal information – and, in particular, data about political opinions”.

It further writes [emphasis ours]:

Facebook made clear to the ICO that it does ‘not target advertising to EU users on the basis of sensitive personal data’… The ICO accepts that indicating a person is interested in a topic is not the same as formally placing them within a special personal information category. However, a risk clearly exists that advertisers will use core audience categories in a way that does seek to target individuals based on sensitive personal information. In the context of this investigation, the ICO is particularly concerned that such categories can be used for political advertising.

The ICO believes that this is part of a broader issue about the processing of personal information by online platforms in the use of targeted advertising; this goes beyond political advertising. It is clear from academic research conducted by the University of Madrid on this topic that a significant privacy risk can arise. For example, advertisers were using these categories to target individuals with the assumption that they are, for example, homosexual. Therefore, the effect was that individuals were being singled out and targeted on the basis of their sexuality. This is deeply concerning, and it is the ICO’s intention as a concerned authority under the GDPR to work via the one-stop-shop system with the Irish Data Protection Commission to see if there is scope to undertake a wider examination of online platforms’ use of special categories of data in their targeted advertising models.

So, essentially, the regulator is saying it will work with other EU data protection authorities to push for a wider, structural investigation of online ad targeting platforms which put users into categories based on inferred interests — and certainly where those platforms are allowing targeting against special categories of data (such as data related to racial or ethnic origin, political opinions, religious beliefs, health data, sexuality).

Another concern the ICO raises that’s specifically attached to Facebook’s business is transparency around its so-called “partner categories” service — an option for advertisers that allows them to use third party data (i.e. personal data collected by third party data brokers) to create custom audiences on its platform.

In March, ahead of a major update to the EU’s data protection framework, Facebook announced it would be “winding down” this service over the next six months.

But the ICO is going to investigate it anyway.

“A preliminary investigation of the service has raised significant concerns about transparency of use of the [partner categories] service for political advertising and wider concerns about the legal basis for the service, including Facebook’s claim that it is acting only as a processor for the third-party data providers,” it writes. “Facebook announced in March 2018 that it will be winding down this service over a six-month period, and we understand that it has already ceased in the EU. The ICO has also commenced a broader investigation into the service under the DPA 1998 (which will be concluded at a later date) as we believe it is in the public interest to do so.”

In conclusion on Facebook the regulator asserts the company has not been “sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign”.

“Individuals can opt out of particular interests, and that is likely to reduce the number of ads they receive on political issues, but it will not completely block them,” it points out. “These concerns about transparency lie at the core of our investigation. Whilst these concerns about Facebook’s advertising model exist in general terms in relation to its use in the commercial sphere, the concerns are heightened when these tools are used for political campaigning.”

The regulator also looked at political campaign use of three other online ad platforms — Google, Twitter and Snapchat — although Facebook gets the lion’s share of its attention in the report given the platform has also attracted the lion’s share of UK political parties’ digital spending. (“Figures from the Electoral Commission show that the political parties spent £3.2 million on direct Facebook advertising during the 2017 general election,” it notes. “This was up from £1.3 million during the 2015 general election. By contrast, the political parties spent £1 million on Google advertising.”)

The ICO is recommending that all online platforms which provide advertising services to political parties and campaigns should include experts within the sales support team who can provide political parties and campaigns with “specific advice on transparency and accountability in relation to how data is used to target users”.

“Social media companies have a responsibility to act as information fiduciaries, as citizens increasingly live their lives online,” it further writes.

It also says it will work with the European Data Protection Board, and the relevant lead data protection authorities in the region, to ensure that online platforms comply with the EU’s new data protection framework (GDPR) — and specifically to ensure that users “understand how personal information is processed in the targeted advertising model, and that effective controls are available”.

“This includes greater transparency in relation to the privacy settings, and the design and prominence of privacy notices,” it warns.

Facebook’s use of dark pattern design and A/B-tested social engineering to obtain user consent for processing people’s data, while obfuscating its intentions for that data, has been a long-standing criticism of the company — but one which the ICO is here signaling is very much on the regulatory radar in the EU.

So expecting new laws — as well as lots more GDPR lawsuits — seems prudent.

The regulator is also pushing for all four online platforms to “urgently roll out planned transparency features in relation to political advertising to the UK” — in consultation with both relevant domestic oversight bodies (the ICO and the Electoral Commission).

In Facebook’s case, it has been developing policies around political ad transparency — amid a series of related data scandals in recent years, which have ramped up political pressure on the company. But self-regulation looks very unlikely to go far enough (or fast enough) to fix the real risks now being raised at the highest political levels.

“We opened this report by asking whether democracy has been disrupted by the use of data analytics and new technologies. Throughout this investigation, we have seen evidence that it is beginning to have a profound effect whereby information asymmetry between different groups of voters is beginning to emerge,” writes the ICO. “We are now at a crucial juncture where trust and confidence in the integrity of our democratic process risks being undermined if an ethical pause is not taken. The recommendations made in this report — if effectively implemented — will change the behaviour and compliance of all the actors in the political campaigning space.”

Another key policy recommendation the ICO is making is to urge the UK government to legislate “at the earliest opportunity” to introduce a statutory Code of Practice under the country’s new data protection law for the use of personal information in political campaigns.

The report also essentially calls out all the UK’s political parties for data protection failures — a universal problem that’s very evidently being supercharged by the rise of accessible and powerful online platforms, which have enabled political parties to combine (and thus enrich) the voter databases they are legally entitled to hold with all sorts of additional online intelligence harvested by the likes of Facebook and other major data brokers.

Hence the ICO’s concern about “developing a system of voter surveillance by default”. And why the commissioner is pushing for online platforms to “act as information fiduciaries”.

Or, in other words, without exercising great responsibility around people’s information, online ad platforms like Facebook risk becoming the enabling layer that breaks democracy and shatters civic society.

Particular concerns being attached by the ICO to political parties’ activities include: The purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence; a lack of fair processing; and use of third party data analytics companies with insufficient checks around consent. And the regulator says it has several related investigations ongoing.

In March, the information commissioner, Elizabeth Denham, foreshadowed the conclusions in this report, telling a UK parliamentary committee she would be recommending a code of conduct for political use of personal data, and pushing for increased transparency around how and where people’s data is flowing — telling MPs: “We need information that is transparent, otherwise we will push people into little filter bubbles, where they have no idea about what other people are saying and what the other side of the campaign is saying. We want to make sure that social media is used well.”

The ICO says now that it will work closely with government to determine the scope of the Code. It also wants the government to conduct a review of regulatory gaps.

We’ve reached out to the Cabinet Office for a government response to the ICO’s recommendations. Update: A Cabinet Office spokesperson directed us to the Department for Digital, Culture, Media and Sport — and a DCMS spokesman told us the government will wait to review the full ICO report once it’s completed before setting out a formal response.

A Facebook spokesman declined to answer specific questions related to the report — instead sending us this short statement, attributed to its chief privacy officer, Erin Egan: “As we have said before, we should have done more to investigate claims about Cambridge Analytica and take action in 2015. We have been working closely with the ICO in their investigation of Cambridge Analytica, just as we have with authorities in the US and other countries. We’re reviewing the report and will respond to the ICO soon.”

Here’s the ICO’s summary of its ten policy recommendations:

1) The political parties must work with the ICO, the Cabinet Office and the Electoral Commission to identify and implement a cross-party solution to improve transparency around the use of commonly held data.

2) The ICO will work with the Electoral Commission, Cabinet Office and the political parties to launch a version of its successful Your Data Matters campaign before the next General Election. The aim will be to increase transparency and build trust and confidence amongst the electorate on how their personal data is being used during political campaigns.

3) Political parties need to apply due diligence when sourcing personal information from third party organisations, including data brokers, to ensure the appropriate consent has been sought from the individuals concerned and that individuals are effectively informed in line with transparency requirements under the GDPR. This should form part of the data protection impact assessments conducted by political parties.

4) The Government should legislate at the earliest opportunity to introduce a statutory Code of Practice under the DPA 2018 for the use of personal information in political campaigns. The ICO will work closely with Government to determine the scope of the Code.

5) It should be a requirement that third party audits be carried out after referendum campaigns are concluded to ensure personal data held by the campaign is deleted, or if it has been shared, the appropriate consent has been obtained.

6) The Centre for Data Ethics and Innovation should work with the ICO and the Electoral Commission to conduct an ethical debate in the form of a citizen jury to understand further the impact of new and developing technologies and the use of data analytics in political campaigns.

7) All online platforms providing advertising services to political parties and campaigns should include experts within the sales support team who can provide political parties and campaigns with specific advice on transparency and accountability in relation to how data is used to target users.

8) The ICO will work with the European Data Protection Board (EDPB), and the relevant lead Data Protection Authorities, to ensure online platforms’ compliance with the GDPR – that users understand how personal information is processed in the targeted advertising model and that effective controls are available. This includes greater transparency in relation to the privacy settings and the design and prominence of privacy notices.

9) All of the platforms covered in this report should urgently roll out planned transparency features in relation to political advertising to the UK. This should include consultation and evaluation of these tools by the ICO and the Electoral Commission.

10) The Government should conduct a review of the regulatory gaps in relation to content and provenance and jurisdictional scope of political advertising online. This should include consideration of requirements for digital political advertising to be archived in an open data repository to enable scrutiny and analysis of the data.

AI spots legal problems with tech T&Cs in GDPR research project

Technology is the proverbial double-edged sword. And an experimental European research project is ensuring this axiom cuts very close to the industry’s bone indeed by applying machine learning technology to critically sift big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law.

The still-in-training privacy policy and contract parsing tool — which is called ‘Claudette’, aka (automated) clause detector — is being developed by researchers at the European University Institute in Florence.

They’ve also now got support from European consumer organization BEUC — for a ‘Claudette meets GDPR’ project — which specifically applies the tool to evaluate compliance with the EU’s General Data Protection Regulation.

Early results from this project have been released today, with BEUC saying the AI was able to automatically flag a range of problems with the language being used in tech T&Cs.

The researchers set Claudette to work analyzing the privacy policies of 14 companies in all — namely: Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, Airbnb, Booking, Skyscanner, Netflix, Steam and Epic Games — saying this group was selected to cover a range of online services and sectors.

And also because they are among the biggest online players and — I quote — “should be setting a good example for the market to follow”. Ahem, should.

The AI analysis of the policies was carried out in June, after the update to the EU’s data protection rules had come into force. The regulation tightens requirements on obtaining consent for processing citizens’ personal data by, for example, increasing transparency requirements — basically requiring that privacy policies be written in clear and intelligible language, explaining exactly how the data will be used, in order that people can make a genuine, informed choice to consent (or not consent).

In theory, all 15 parsed privacy policies should have been compliant with GDPR by June, as it came into force on May 25. However some tech giants are already facing legal challenges to their interpretation of ‘consent’. And it’s fair to say the law has not vanquished the tech industry’s fuzzy language and logic overnight. Where user privacy is concerned, old, ugly habits die hard, clearly.

But that’s where BEUC is hoping AI technology can help.

It says that out of a combined 3,659 sentences (80,398 words) Claudette marked 401 sentences (11.0%) as containing unclear language, and 1,240 (33.9%) containing “potentially problematic” clauses or clauses providing “insufficient” information.

BEUC says identified problems include:

  • Not providing all the information which is required under the GDPR’s transparency obligations. “For example companies do not always inform users properly regarding the third parties with whom they share or get data from”
  • Processing of personal data not happening according to GDPR requirements. “For instance, a clause stating that the user agrees to the company’s privacy policy by simply using its website”
  • Policies are formulated using vague and unclear language (i.e. using language qualifiers that really bring the fuzz — such as “may”, “might”, “some”, “often”, and “possible”) — “which makes it very hard for consumers to understand the actual content of the policy and how their data is used in practice”
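
Claudette itself is a machine-learned classifier trained on annotated clauses, and its model isn’t reproduced here, but the last of those problems (qualifier-heavy drafting) is mechanical enough to illustrate with a toy rule-based sketch. The word list and the flagging rule below are assumptions for illustration, not the project’s actual method:

```python
import re

# Hedging qualifiers of the kind BEUC highlights as making policies vague.
# This word list and the simple flagging rule are illustrative assumptions;
# the real Claudette uses a trained machine learning model, not a lookup.
VAGUE_QUALIFIERS = {"may", "might", "some", "often", "possible", "possibly"}

def split_sentences(text):
    # Crude splitter; a real pipeline would use a proper NLP tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def flag_vague_sentences(policy_text):
    """Return the sentences containing vague qualifiers, plus the total count."""
    sentences = split_sentences(policy_text)
    flagged = [s for s in sentences
               if VAGUE_QUALIFIERS & {w.lower() for w in re.findall(r"[A-Za-z']+", s)}]
    return flagged, len(sentences)

sample = ("We may share some of your data with partners. "
          "Your password is stored in hashed form. "
          "Advertisers might receive aggregated statistics where possible.")
flagged, total = flag_vague_sentences(sample)
print(f"{len(flagged)}/{total} sentences flagged as potentially vague")
for sentence in flagged:
    print(" -", sentence)
```

Even this crude approach surfaces the qualifier-stuffed sentences BEUC is complaining about; the research project’s contribution is doing the same at scale, and for the subtler “insufficient information” cases, with a trained model rather than a word list.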

The bolstering of the EU’s privacy rules, with GDPR tightening the consent screw and supersizing penalties for violations, was exactly intended to prevent this kind of stuff. So it’s pretty depressing — though hardly surprising — to see the same, ugly T&C tricks continuing to be used to try to sneak consent by keeping users in the dark.

We reached out to two of the largest tech giants whose policies Claudette parsed — Google and Facebook — to ask if they want to comment on the project or its findings.

A Google spokesperson said: “We have updated our Privacy Policy in line with the requirements of the GDPR, providing more detail on our practices and describing the information that we collect and use, and the controls that users have, in clear and plain language. We’ve also added new graphics and video explanations, structured the Policy so that users can explore it more easily, and embedded controls to allow users to access relevant privacy settings directly.”

At the time of writing Facebook had not responded to our request for comment.

Commenting in a statement, Monique Goyens, BEUC’s director general, said: “A little over a month after the GDPR became applicable, many privacy policies may not meet the standard of the law. This is very concerning. It is key that enforcement authorities take a close look at this.”

The group says it will be sharing the research with EU data protection authorities, including the European Data Protection Board. And it is not itself ruling out bringing legal actions against law benders.

But it’s also hopeful that automation will — over the longer term — help civil society keep big tech in legal check.

Although, where this project is concerned, it also notes that the training data-set was small — conceding that Claudette’s results were not 100% accurate — and says more privacy policies would need to be manually analyzed before policy analysis can be fully conducted by machines alone.

So file this one under ‘promising research’.

“This innovative research demonstrates that just as Artificial Intelligence and automated decision-making will be the future for companies from all kinds of sectors, AI can also be used to keep companies in check and ensure people’s rights are respected,” adds Goyens. “We are confident AI will be an asset for consumer groups to monitor the market and ensure infringements do not go unnoticed.

“We expect companies to respect consumers’ privacy and the new data protection rights. In the future, Artificial Intelligence will help identify infringements quickly and on a massive scale, making it easier to start legal actions as a result.”

For more on the AI-fueled future of legal tech, check out our recent interview with Mireille Hildebrandt.

Audit of NHS Trust’s app project with DeepMind raises more questions than it answers

A third party audit of a controversial patient data-sharing arrangement between a London NHS Trust and Google DeepMind appears to have skirted over the core issues that generated the controversy in the first place.

The audit (full report here) — conducted by law firm Linklaters — of the Royal Free NHS Foundation Trust’s acute kidney injury detection app system, Streams, which was co-developed with Google-DeepMind (using an existing NHS algorithm for early detection of the condition), does not examine the problematic 2015 information-sharing agreement inked between the pair which allowed data to start flowing.

“This Report contains an assessment of the data protection and confidentiality issues associated with the data protection arrangements between the Royal Free and DeepMind. It is limited to the current use of Streams, and any further development, functional testing or clinical testing, that is either planned or in progress. It is not a historical review,” writes Linklaters, adding that: “It includes consideration as to whether the transparency, fair processing, proportionality and information sharing concerns outlined in the Undertakings are being met.”

Yet it was the original 2015 contract that triggered the controversy, after it was obtained and published by New Scientist, with the wide-ranging document raising questions over the broad scope of the data transfer and the legal bases for patients’ information to be shared — and leading to questions over whether regulatory processes intended to safeguard patients and patient data had been sidelined by the two main parties involved in the project.

In November 2016 the pair scrapped and replaced the initial five-year contract with a different one — which put in place additional information governance steps.

They also went on to roll out the Streams app for use on patients in multiple NHS hospitals — despite the UK’s data protection regulator, the ICO, having instigated an investigation into the original data-sharing arrangement.

And just over a year ago the ICO concluded that the Royal Free NHS Foundation Trust had failed to comply with data protection law in its dealings with Google’s DeepMind.

The audit of the Streams project was a requirement of the ICO.

Though, notably, the regulator has not endorsed Linklaters’ report. On the contrary, it warns that it’s seeking legal advice and could take further action.

In a statement on its website, the ICO’s deputy commissioner for policy, Steve Wood, writes: “We cannot endorse a report from a third party audit but we have provided feedback to the Royal Free. We also reserve our position in relation to their position on medical confidentiality and the equitable duty of confidence. We are seeking legal advice on this issue and may require further action.”

In a section of the report listing exclusions, Linklaters confirms the audit does not consider: “The data protection and confidentiality issues associated with the processing of personal data about the clinicians at the Royal Free using the Streams App.”

So essentially the core controversy, related to the legal basis for the Royal Free to pass personally identifiable information on 1.6M patients to DeepMind when the app was being developed, and without people’s knowledge or consent, is going unaddressed here.

And Wood’s statement pointedly reiterates that the ICO’s investigation “found a number of shortcomings in the way patient records were shared for this trial”.

“[P]art of the undertaking committed Royal Free to commission a third party audit. They have now done this and shared the results with the ICO. What’s important now is that they use the findings to address the compliance issues addressed in the audit swiftly and robustly. We’ll be continuing to liaise with them in the coming months to ensure this is happening,” he adds.

“It’s important that other NHS Trusts considering using similar new technologies pay regard to the recommendations we gave to Royal Free, and ensure data protection risks are fully addressed using a Data Protection Impact Assessment before deployment.”

While the report is something of a frustration, given the glaring historical omissions, it does raise some points of interest — including suggesting that the Royal Free should probably scrap a Memorandum of Understanding it also inked with DeepMind, in which the pair set out their ambition to apply AI to NHS data.

This is recommended because the pair have apparently abandoned their AI research plans.

On this Linklaters writes: “DeepMind has informed us that they have abandoned their potential research project into the use of AI to develop better algorithms, and their processing is limited to execution of the NHS AKI algorithm… In addition, the majority of the provisions in the Memorandum of Understanding are non-binding. The limited provisions that are binding are superseded by the Services Agreement and the Information Processing Agreement discussed above, hence we think the Memorandum of Understanding has very limited relevance to Streams. We recommend that the Royal Free considers if the Memorandum of Understanding continues to be relevant to its relationship with DeepMind and, if it is not relevant, terminates that agreement.”

In another section, discussing the NHS algorithm that underpins the Streams app, the law firm also points out that DeepMind’s role in the project is little more than helping provide a glorified app wrapper (on the app design front the project also utilized UK app studio ustwo, so DeepMind can’t claim app design credit either).

“Without intending any disrespect to DeepMind, we do not think the concepts underpinning Streams are particularly ground-breaking. It does not, by any measure, involve artificial intelligence or machine learning or other advanced technology. The benefits of the Streams App instead come from a very well-designed and user-friendly interface, backed up by solid infrastructure and data management that provides AKI alerts and contextual clinical information in a reliable, timely and secure manner,” Linklaters writes.

What DeepMind did bring to the project, and to its other NHS collaborations, is money and resources — providing its development resources free for the NHS at the point of use, and stating (when asked about its business model) that it would determine how much to charge the NHS for these app ‘innovations’ later.

Yet the commercial services the tech giant is providing to what are public sector organizations do not appear to have been put out to open tender.

Also notably excluded from Linklaters’ audit: any scrutiny of the project vis-a-vis competition law, public procurement law (i.e. whether the arrangement complied with procurement rules), and any concerns relating to possible anticompetitive behavior.

The report does highlight one potentially problematic data retention issue for the current deployment of Streams, saying there is “currently no retention period for patient information on Streams” — meaning there is no process for deleting a patient’s medical history once it reaches a certain age.

“This means the information on Streams currently dates back eight years,” it notes, suggesting the Royal Free should probably set an upper limit on the age of information contained in the system.

While Linklaters largely glosses over the chequered origins of the Streams project, the law firm does make a point of agreeing with the ICO that the original privacy impact assessment for the project “should have been completed in a more timely manner”.

It also describes the assessment as “relatively thin given the scale of the project”.

Giving its response to the audit, health data privacy advocacy group MedConfidential — an early critic of the DeepMind data-sharing arrangement — is roundly unimpressed, writing: “The biggest question raised by the Information Commissioner and the National Data Guardian appears to be missing — instead, the report excludes a “historical review of issues arising prior to the date of our appointment”.

“The report claims the ‘vital interests’ (i.e. remaining alive) of patients is justification to protect against an “event [that] might only occur in the future or not occur at all”… The only ‘vital interest’ protected here is Google’s, and its desire to hoard medical records it was told were unlawfully collected. The vital interests of a hypothetical patient are not vital interests of an actual data subject (and the GDPR tests are demonstrably unmet).

“The ICO and NDG asked the Royal Free to justify the collection of 1.6 million patient records, and this legal opinion explicitly provides no answer to that question.”

To truly protect citizens, lawmakers need to restructure their regulatory oversight of big tech

If members of the European Parliament thought they could bring Mark Zuckerberg to heel with his recent appearance, they underestimated the enormous gulf between 21st century companies and their last-century regulators.

Zuckerberg himself reiterated that regulation is necessary, provided it is the “right regulation.”

But anyone who thinks that our existing regulatory tools can rein in our digital behemoths is engaging in magical thinking. Getting to “right regulation” will require us to think very differently.

The challenge goes far beyond Facebook and other social media: the use and abuse of data is going to be the defining feature of just about every company on the planet as we enter the age of machine learning and autonomous systems.

So far, Europe has taken a much more aggressive regulatory approach than anything the US has contemplated, either before or since Zuckerberg’s testimony.

The EU’s General Data Protection Regulation (GDPR) is now in force, extending data privacy rights to all European citizens regardless of whether their data is processed by companies within the EU or beyond.

But I’m not holding my breath that the GDPR will get us very far on the massive regulatory challenge we face. It is just more of the same when it comes to regulation in the modern economy: a lot of ambiguous costly-to-interpret words and procedures on paper that are outmatched by rapidly evolving digital global technologies.

Crucially, the GDPR still relies heavily on the outmoded technology of user choice and consent, the main result of which has seen almost everyone in Europe (and beyond) inundated with emails asking them to reconfirm permission to keep their data. But this is an illusion of choice, just as it is when we are ostensibly given the option to decide whether to agree to terms set by large corporations in standardized take-it-or-leave-it click-to-agree documents.  

There’s also the problem of actually tracking whether companies are complying. It is likely that the regulation of online activity requires yet more technology, such as blockchain and AI-powered monitoring systems, to track data usage and implement smart contract terms.

As the EU has already discovered with the right to be forgotten, however, governments lack the technological resources needed to enforce these rights. Search engines are required to serve as their own judge and jury in the first instance; Google at last count was processing some 500 such requests a day.

The fundamental challenge we face, here and throughout the modern economy, is not: “what should the rules for Facebook be?” but rather, “how can we innovate new ways to regulate effectively in the global digital age?”

The answer is that we need to find ways to harness the same ingenuity and drive that built Facebook to build the regulatory systems of the digital age. One way to do this is with what I call “super-regulation” which involves developing a market for licensed private regulators that serve two masters: achieving regulatory targets set by governments but also facing the market incentive to compete for business by innovating more cost-effective ways to do that.  

Imagine, for example, if instead of drafting a detailed 261-page law like the EU did, a government instead settled on the principles of data protection, based on core values, such as privacy and user control.

Private entities, for-profit and non-profit, could apply to a government oversight agency for a license to provide data regulatory services to companies like Facebook, showing that their regulatory approach is effective in achieving these legislative principles.

These private regulators might use technology, big-data analysis, and machine learning to do that. They might also figure out how to communicate simple options to people, in the same way that the developers of our smartphones figured that out. They might develop effective schemes to audit and test whether their systems are working — on pain of losing their license to regulate.

There could be many such regulators among which both consumers and Facebook could choose: some could even specialize in offering packages of data management attributes that would appeal to certain demographics – from the people who want to be invisible online, to those who want their every move documented on social media.

The key here is competition: for-profit and non-profit private regulators compete to attract money and brains to the problem of how to regulate complex systems like data creation and processing.

Zuckerberg thinks there’s some kind of “right” regulation possible for the digital world. I believe him; I just don’t think governments alone can invent it. Ideally, some next generation college kid would be staying up late trying to invent it in his or her dorm room.

The challenge we face is not how to get governments to write better laws; it’s how to get them to create the right conditions for the continued innovation necessary for new and effective regulatory systems.

Facebook, Google face first GDPR complaints over “forced consent”

After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and long-time Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response.

The European privacy campaigner most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation for defending data protection rights.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of GDPR, major violations of the law can attract fines as large as 4% of a company’s global revenue which, in the case of Facebook or Google, implies they could be on the hook for more than a billion euros apiece — if they are deemed to have violated the law, as the complaints argue.
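
For a rough sense of scale: plugging the companies’ publicly reported 2017 revenues into that 4% ceiling (the revenue figures below are approximations used purely for illustration) shows how the maximum theoretical exposure clears the billion mark:

```python
# Back-of-the-envelope illustration of the GDPR maximum fine (4% of global
# annual turnover). Revenue figures are approximate 2017 full-year numbers,
# used here purely for illustration.
GDPR_MAX_FINE_RATE = 0.04

annual_revenue_usd = {
    "Facebook": 40.7e9,   # ~$40.7BN reported for 2017
    "Alphabet": 110.9e9,  # ~$110.9BN reported for 2017
}

for company, revenue in annual_revenue_usd.items():
    max_fine = revenue * GDPR_MAX_FINE_RATE
    print(f"{company}: theoretical maximum GDPR fine ~ ${max_fine / 1e9:.2f}BN")
```

That works out to roughly $1.6BN for Facebook and over $4BN for Alphabet, putting the “more than a billion euros apiece” figure in context.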

That said, given how freshly fixed in place the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to make amends to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an onstage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community, a lot of philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less likely to be able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech is able to use to extract consent on account of the reach and power of their platforms — arguing there’s a competition concern that GDPR should also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”

Image credit: noyb.eu

Instapaper on pause in Europe to fix GDPR compliance “issue”

Remember Instapaper? The Pinterest-owned, read-it-later bookmarking service is taking a break in Europe — apparently while it works on achieving compliance with the region’s updated privacy framework, GDPR, which will start being applied from tomorrow.

Instapaper’s notification does not say how long the self-imposed outage will last.

The European Union’s General Data Protection Regulation updates the bloc’s privacy framework, most notably by bringing in supersized fines for data violations, which in the most serious cases can scale up to 4% of a company’s global annual turnover.

So it significantly ramps up the risk of, for example, having sloppy security, or consent flows that aren’t clear and specific enough (if indeed consent is the legal basis you’re relying on for processing people’s personal information).

That said, EU regulators are clearly going to tread softly on the enforcement front in the short term. And any major fines are only going to hit the most serious violations and violators — and only down the line when data protection authorities have received complaints and conducted thorough investigations.

So it’s not clear exactly why Instapaper believes it needs to pause its service to European users. It’s also had plenty of time to prepare to be compliant — given the new framework was agreed at the back end of 2015. We’ve reached out to Pinterest with questions and will update this story with any response.

In an exchange on Twitter, Pinterest product engineering manager Brian Donohue — who, prior to acquisition was Instapaper’s CEO — flagged that the product’s privacy policy “hasn’t been changed in several years”. But he declined to specify exactly what it feels its compliance issue is — saying only: “We’re actively working to resolve the issue.”

In a customer support email that we reviewed, the company also told one European user: “We’ve been advised to undergo an assessment of the Instapaper service to determine what, if any, changes may be appropriate but to restrict access to IP addresses in the EU as the best course of action.”

“We’re really sorry for any inconvenience, and we are actively working on bringing the service back online for residents in Europe,” it added.

The product’s privacy policy is one of the clearer T&Cs we’ve seen. It also states that users can already access “all your personally identifiable information that we collect online and maintain”, as well as saying people can “correct factual errors in your personally identifiable information by changing or deleting the erroneous information” — which, assuming those statements are true, looks pretty good for complying with portions of GDPR that are intended to give consumers more control over their personal data.

Instapaper also already lets users delete their accounts. And if they do that it specifies that “all account information and saved page data is deleted from the Instapaper service immediately” (though it also cautions that “deleted data may persist in backups and logs until they are deleted”).

In terms of what Instapaper does with users’ data, its privacy policy claims it does not share the information “with outside parties except to the extent necessary to accomplish Instapaper’s functionality”.

But it’s also not explicitly clear from the policy whether or not it’s passing information to its parent company Pinterest, for example, so perhaps it feels it needs to add more detail there.

Another possibility is that Instapaper is working on compliance with GDPR’s data portability requirement. Though the service has offered export options for years, perhaps it feels these need to be more comprehensive.

As is inevitable ahead of a major regulatory change there’s a good deal of confusion about what exactly must be done to comply with the new rules. And that’s perhaps the best explanation for what’s going on with Instapaper’s pause.

Though, again, there’s plenty of official and detailed guidance from data protection agencies to help.

Unfortunately it’s also true that there’s a lot of unofficial and dubious quality advice from a cottage industry of self-styled ‘GDPR consultants’ that have sprung up with the intention of profiting off of the uncertainty. So — as ever — do your due diligence when it comes to the ‘experts’ you choose.

Zuckerberg will meet with European parliament in private next week

Who says privacy is dead? Facebook’s founder Mark Zuckerberg has agreed to take European parliamentarians’ questions about how his platform impacts the privacy of hundreds of millions of European citizens — but only behind closed doors. Where no one except a handful of carefully chosen MEPs will bear witness to what’s said.

The private meeting will take place on May 22 at 17.45CET in Brussels. After which the president of the European Parliament, Antonio Tajani, will hold a press conference to furnish the media with his version of events.

It’s just a shame that journalists are being blocked from being able to report on what actually goes on in the room.

And that members of the public won’t be able to form their own opinions about how Facebook’s founder responds to pressing questions about what Zuckerberg’s platform is doing to their privacy and their fundamental rights.

Because the doors are being closed to journalists and citizens.

Even the intended content of the meeting is being glossed over in public — with the purpose of the chat being vaguely couched as “to clarify issues related to the use of personal data” in a statement by Tajani.

The impact of Facebook’s platform on “electoral processes in Europe” is the only discussion point that’s specifically flagged.

Given Zuckerberg has thrice denied requests from UK lawmakers to take questions about his platform in a public hearing we can only assume the company made the CEO’s appearance in front of EU parliamentarians conditional on the meeting being closed.

Zuckerberg did agree to public sessions with US lawmakers last month, following a major global privacy scandal related to user data and political ad targeting.

But evidently the company’s sense of accountability doesn’t travel very far. (Despite a set of ‘privacy principles’ that Facebook published with great fanfare at the start of the year — one of which reads: ‘We are accountable’. Albeit Facebook didn’t specify to whom, or what, exactly it feels accountable.)

We’ve reached out to Facebook to ask why Zuckerberg will not take European parliamentarians’ questions in a public hearing. And indeed whether Mark can find the time to hop on a train to London afterwards to testify before the DCMS committee’s enquiry into online disinformation — and will update this story with any response.

As Vera Jourova, the European commissioner for justice and consumers, put it in a tweet, it’s a pity the Facebook founder does not believe all Europeans deserve to know how their data is handled by his company. Just a select few, holding positions of elected office.

A pity or, well, a shame.

Safe to say, not all MEPs are happy with the arrangement…

But let’s at least be thankful that Zuckerberg has shown us, once again, how very much privacy matters — to him personally.

Facebook faces fresh criticism over ad targeting of sensitive interests

Is Facebook trampling over laws that regulate the processing of sensitive categories of personal data by failing to ask people for their explicit consent before it makes sensitive inferences about their sex life, religion or political beliefs? Or is the company merely treading uncomfortably and unethically close to the line of the law?

An investigation by the Guardian and the Danish Broadcasting Corporation has found that Facebook’s platform allows advertisers to target users based on interests related to political beliefs, sexuality and religion — all categories that are marked out as sensitive information under current European data protection law.

And indeed under the incoming GDPR, which will apply across the bloc from May 25.

The joint investigation found Facebook’s platform had made sensitive inferences about users — allowing advertisers to target people based on inferred interests including communism, social democrats, Hinduism and Christianity. All of which would be classed as sensitive personal data under EU rules.

And while the platform offers some constraints on how advertisers can target people against sensitive interests — not allowing advertisers to exclude users based on a specific sensitive interest, for example (Facebook having previously run into trouble in the US for enabling discrimination via ethnic affinity-based targeting) — such controls are beside the point if you take the view that Facebook is legally required to ask for a user’s explicit consent to processing this kind of sensitive data up front, before making any inferences about a person.

Indeed, it’s very unlikely that any ad platform can lawfully put people into buckets with sensitive labels like ‘interested in social democrat issues’ or ‘likes communist pages’ or ‘attends gay events’ without asking them to let it do so first.

And Facebook is not asking first.

Facebook argues otherwise, of course — claiming that the information it gathers about people’s affinities/interests, even when they entail sensitive categories of information such as sexuality and religion, is not personal data.

In a response statement to the media investigation, a Facebook spokesperson told us:

Like other Internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as ‘Gay Pride’ because they have liked a Pride associated Page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality. People are able to manage their Ad Preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads. Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.

Expect Facebook’s argument to be tested in the courts — likely in the very near future.

As we’ve said before, the GDPR lawsuits are coming for the company, thanks to beefed up enforcement of EU privacy rules, with the regulation providing for fines as large as 4% of a company’s global turnover.

Facebook is not the only online people profiler, of course, but it’s a prime target for strategic litigation both because of its massive size and reach (and the resulting power over web users flowing from a dominant position in an attention-dominating category), and because of its nose-thumbing attitude to compliance with EU regulations thus far.

The company has faced a number of challenges and sanctions under existing EU privacy law — though for its operations outside the US it typically refuses to recognize any legal jurisdiction except corporate-friendly Ireland, where its international HQ is based.

And, from what we’ve seen so far, Facebook’s response to GDPR ‘compliance’ is no new leaf. Rather it looks like privacy-hostile business as usual; a continued attempt to leverage its size and power to force a self-serving interpretation of the law — bending rules to fit its existing business processes, rather than reconfiguring those processes to comply with the law.

The GDPR is one of the reasons why Facebook’s ad microtargeting empire is facing greater scrutiny now, with just weeks to go before civil society organizations are able to take advantage of fresh opportunities for strategic litigation allowed by the regulation.

“I’m a big fan of the GDPR. I really believe that it gives us — as the court in Strasbourg would say — effective and practical remedies,” law professor Mireille Hildebrandt tells us. “If we go and do it, of course. So we need a lot of public litigation, a lot of court cases to make the GDPR work but… I think there are more people moving into this.

“The GDPR created a market for these sort of law firms — and I think that’s excellent.”

But it’s not the only reason. Facebook’s handling of personal data is also attracting attention thanks to tenacious press investigations into how one controversial political consultancy, Cambridge Analytica, was able to gain such freewheeling access to Facebook users’ data — as a result of Facebook’s lax platform policies around data access — for, in that instance, political ad targeting purposes.

All of which eventually blew up into a major global privacy storm, this March, though criticism of Facebook’s privacy-hostile platform policies dates back more than a decade at this stage.

The Cambridge Analytica scandal at least brought Facebook CEO and founder Mark Zuckerberg in front of US lawmakers to face questions about the extent of the personal information the company gathers; what controls it offers users over their data; and how he thinks Internet companies should be regulated, to name a few. (Pro tip for politicians: You don’t need to ask companies how they’d like to be regulated.)

The Facebook founder has also finally agreed to meet EU lawmakers — though UK lawmakers’ calls have been ignored.

Zuckerberg should expect to be questioned very closely in Brussels about how his platform is impacting Europeans’ fundamental rights.

Sensitive personal data needs explicit consent

Facebook infers affinities linked to individual users by collecting and processing interest signals their web activity generates, such as likes on Facebook Pages or what people look at when they’re browsing outside Facebook — off-site intel it gathers via an extensive network of social plug-ins and tracking pixels embedded on third-party websites. (According to information released by Facebook to the UK parliament this week, during just one week of April this year its Like button appeared on 8.4M websites; the Share button appeared on 931,000 websites; and its tracking Pixels were running on 2.2M websites.)
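To make that mechanism concrete, here is a deliberately simplified, hypothetical Python sketch of how raw off-site signals (a Page like, a tracked page view) could be folded into the kind of inferred interest labels advertisers can target. None of the signal names, mappings or labels are Facebook’s; the point is simply that the inference happens on the platform’s side, as a background process, without the person ever being asked.

```python
# Illustrative sketch only; not Facebook's actual code, schema or categories.
# It shows how sensitive-looking "interest" labels can be inferred from
# non-sensitive raw signals such as Page likes and off-site page views.

from collections import defaultdict

# Hypothetical mapping from raw signals to inferred interest labels.
SIGNAL_TO_INTEREST = {
    "liked:PridePage": "Gay Pride",
    "visited:social-democrat-blog.example": "Social democracy",
    "liked:MeditationRetreats": "Hinduism",
}

def infer_interests(events_by_user):
    """Aggregate raw browsing/like signals into per-user interest buckets."""
    interests = defaultdict(set)
    for user_id, events in events_by_user.items():
        for event in events:
            label = SIGNAL_TO_INTEREST.get(event)
            if label:
                interests[user_id].add(label)
    return interests

# One Page like plus one tracked off-site visit yield two inferred labels
# that look a lot like special category data, with no consent step anywhere.
events = {"user_123": ["liked:PridePage", "visited:social-democrat-blog.example"]}
print(dict(infer_interests(events)))
```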

But here’s the thing: Both the current and the incoming EU legal frameworks for data protection set the bar for consent to processing so-called special category data equally high — at “explicit” consent.

What that means in practice is Facebook needs to seek and secure separate consents from users (such as via a dedicated pop-up) for collecting and processing this type of sensitive data.

The alternative is for it to rely on another special condition for processing this type of sensitive data. However, the other conditions are pretty tightly drawn — relating to things like the public interest, the vital interests of a data subject, or the purposes of “preventive or occupational medicine”.

None of which would appear to apply if, as Facebook is, you’re processing people’s sensitive personal information just to target them with ads.

Ahead of GDPR, Facebook has started asking users who have chosen to display political opinions and/or sexuality information on their profiles to explicitly consent to that data being public.

Though even there its actions are problematic, as it offers users a take-it-or-leave-it style ‘choice’ — saying they must either remove the info entirely or leave it and thereby agree that Facebook can use it to target them with ads.

Yet EU law also requires that consent be freely given. It cannot be conditional on the provision of a service.

So Facebook’s bundling of service provision and consent will also likely face legal challenges, as we’ve written before.

“They’ve tangled the use of their network for socialising with the profiling of users for advertising. Those are separate purposes. You can’t tangle them like they are doing in the GDPR,” says Michael Veale, a technology policy researcher at University College London, emphasizing that GDPR allows for a third option that Facebook isn’t offering users: letting them keep sensitive data on their profile without that data being used for targeted advertising.

“Facebook, I believe, is quite afraid of this third option,” he continues. “It goes back to the Congressional hearing: Zuckerberg said a lot that you can choose which of your friends every post can be shared with, through a little in-line button. But there’s no option there that says ‘do not share this with Facebook for the purposes of analysis’.”

Returning to how the company synthesizes sensitive personal affinities from Facebook users’ Likes and wider web browsing activity, Veale argues that EU law also does not recognize the kind of distinction Facebook is seeking to draw — i.e. between inferred affinities and personal data — and that the company is thus trying to redraw the law in its favor.

“Facebook say that the data is not correct, or self-declared, and therefore these provisions do not apply. Data does not have to be correct or accurate to be personal data under European law, and trigger the protections. Indeed, that’s why there is a ‘right to rectification’ — because incorrect data is not the exception but the norm,” he tells us.

“At the crux of Facebook’s challenge is that they are inferring what is arguably “special category” data (Article 9, GDPR) from non-special category data. In European law, this data includes race, sexuality, data about health, biometric data for the purposes of identification, and political opinions. One of the first things to note is that European law does not govern collection and use as distinct activities: Both are considered processing.

“The pan-European group of data protection regulators have recently confirmed in guidance that when you infer special category data, it is as if you collected it. For this to be lawful, you need a special reason, which for most companies is restricted to separate, explicit consent. This will be often different than the lawful basis for processing the personal data you used for inference, which might well be ‘legitimate interests’, which didn’t require consent. That’s ruled out if you’re processing one of these special categories.”

“The regulators even specifically give Facebook Like inference as an example of inferring special category data, so there is little wiggle room here,” he adds, pointing to an example used by regulators of a study that combined Facebook Like data with “limited survey information” — and found that researchers could accurately predict a male user’s sexual orientation 88% of the time; a user’s ethnic origin 95% of the time; and whether a user was Christian or Muslim 82% of the time.

Which underlines why these rules exist — given the clear risk of breaches to human rights if big data platforms can just suck up sensitive personal data automatically, as a background process.
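For a sense of the technique behind studies like the one the regulators cite, here is a toy Python sketch, using entirely synthetic data, of the general approach: treat each user’s Likes as a feature vector and train a simple model to predict a hidden sensitive trait. It is illustrative only and makes no claim about the cited study’s exact methods or data.

```python
# Toy illustration of trait prediction from Likes, on synthetic data.
# Real studies used survey-collected labels and real user x Page Like matrices.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 300

# Binary user x Page matrix (1 = user Liked that Page).
likes = rng.integers(0, 2, size=(n_users, n_pages))
# Synthetic sensitive trait, loosely correlated with a handful of Pages.
trait = (likes[:, :5].sum(axis=1) + rng.normal(0, 1, n_users) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Headline figures such as "88% of the time" in this kind of research are
# typically reported as AUC: how well Likes alone rank users by the trait.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```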

The overarching aim of GDPR is to give consumers greater control over their personal data, not just to help people defend their rights but to foster greater trust in online services — and for that trust to be a mechanism for greasing the wheels of digital business. Which is pretty much the opposite approach to sucking up everything in the background and hoping your users don’t realize what you’re doing.

Veale also points out that under current EU law even an opinion on someone is their personal data… (per this Article 29 Working Party guidance, emphasis ours):

From the point of view of the nature of the information, the concept of personal data includes any sort of statements about a person. It covers “objective” information, such as the presence of a certain substance in one’s blood. It also includes “subjective” information, opinions or assessments. This latter sort of statements make up a considerable share of personal data processing in sectors such as banking, for the assessment of the reliability of borrowers (“Titius is a reliable borrower”), in insurance (“Titius is not expected to die soon”) or in employment (“Titius is a good worker and merits promotion”).

We put that specific point to Facebook — but at the time of writing we’re still waiting for a response. (Nor would Facebook provide a public response to several other questions we asked around what it’s doing here, preferring to limit its comment to the statement at the top of this post.)

Veale adds that the WP29 guidance has been upheld in recent CJEU cases such as Nowak — which he says emphasized that, for example, annotations on the side of an exam script are personal data.

He’s clear about what Facebook should be doing to comply with the law: “They should be asking for individuals’ explicit, separate consent for them to infer data including race, sexuality, health or political opinions. If people say no, they should be able to continue using Facebook as normal without these inferences being made on the back-end.”

“They need to tell individuals about what they are doing clearly and in plain language,” he adds. “Political opinions are just as protected here, and this is perhaps more interesting than race or sexuality.”

“They certainly should face legal challenges under the GDPR,” agrees Paul Bernal, senior lecturer in law at the University of East Anglia, who is also critical of how Facebook is processing sensitive personal information. “The affinity concept seems to be a pretty transparent attempt to avoid legal challenges, and one that ought to fail. The question is whether the regulators have the guts to make the point: It undermines a quite significant part of Facebook’s approach.”

“I think the reason they’re pushing this is that they think they’ll get away with it, partly because they think they’ve persuaded people that the problem is Cambridge Analytica, as rogues, rather than Facebook, as enablers and supporters. We need to be very clear about this: Cambridge Analytica are the symptom, Facebook is the disease,” he adds.

“I should also say, I think the distinction between ‘targeting’ being OK and ‘excluding’ not being OK is also mostly Facebook playing games, and trying to have their cake and eat it. It just invites gaming of the systems really.”

Facebook claims its core product is social media, rather than data-mining people to run a highly lucrative microtargeted advertising platform.

But if that’s true, why then is it tangling its core social functions with its ad-targeting apparatus — and telling people they can’t have a social service unless they agree to interest-based advertising?

It could support a service with other types of advertising, which don’t depend on background surveillance that erodes users’ fundamental rights. But it’s choosing not to offer that. All you can ‘choose’ is all or nothing. Not much of a choice.

Facebook telling people that if they want to opt out of its ad targeting they must delete their account is neither a route to obtain meaningful (and therefore lawful) consent — nor a very compelling approach to counter criticism that its real business is farming people.

The issues at stake here for Facebook, and for the shadowy background data-mining and brokering of the online ad targeting industry as a whole, are clearly far greater than any one data misuse scandal or any one category of sensitive data. But Facebook’s decision to retain people’s sensitive personal data for ad targeting without asking for consent up-front is a telling sign of something gone very wrong indeed.

If Facebook doesn’t feel confident asking its users whether what it’s doing with their personal data is okay or not, maybe it shouldn’t be doing it in the first place.

At the very least it’s a failure of ethics. Even if the final judgement on Facebook’s self-serving interpretation of EU privacy rules will have to wait for the courts to decide.