I believe in free speech, but Minds makes me queasy

The email landed in my inbox just as the winds of bad press began to whip up around Mark Zuckerberg's ankles. It asked if I wanted to try Minds, the new blockchain-based social network, in the wake of #DeleteFacebook. The site is a social media platf…

Facebook has auto-enrolled users into a facial recognition test in Europe

Facebook users in Europe are reporting the company has begun testing its controversial facial recognition technology in the region.

Jimmy Nsubuga, a journalist at Metro, is among several European Facebook users who have said they’ve been notified by the company they are in its test bucket.

The company has previously said an opt-in option for facial recognition will be pushed out to all European users next month. It’s hoping to convince Europeans to voluntarily allow it to expand its use of the privacy-hostile tech — which was turned off in the bloc after regulatory pressure, back in 2012, when Facebook was using it for features such as automatically tagging users in photo uploads.

Under impending changes to its T&Cs — ostensibly to comply with the EU’s incoming GDPR data protection standard — the company has crafted a manipulative consent flow that tries to sell people on giving it their data, including filling in its own facial recognition blanks by convincing Europeans to agree to it grabbing and using their biometric data after all.

Notably Facebook is not offering a voluntary opt-in to Europeans who find themselves in its facial recognition test bucket. Rather users are being automatically turned into its lab rats — and have to actively delve into the settings to say no.

In a notification to affected users, the company writes [emphasis ours]: “You control face recognition. This setting is on, but you can turn it off at any time, which applies to features we may add later.”

Not only is the tech turned on, but users who click through to the settings to try and turn it off will also find Facebook attempting to dissuade them from doing that — with manipulative examples of how the tech can “protect” them.

As another Facebook user who found herself enrolled in the test — journalist Jennifer Baker — points out, what it’s doing here is incredibly disingenuous because it’s using fear to try to manipulate people’s choices.

Under the EU’s incoming data protection framework Facebook will not be able to automatically opt users into facial recognition — it will have to convince people to switch the tech on themselves.

But the experiment it’s running here (without gaining individuals’ upfront consent) looks very much like a form of A/B testing — to see which of its disingenuous examples is best able to convince people to accept the highly privacy-hostile technology by voluntarily switching it on.
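For illustration, this kind of experiment typically assigns each user to a message variant via deterministic hash-based bucketing. The Python sketch below shows the general technique only; the function and variant names are hypothetical, and nothing here reflects Facebook’s actual (non-public) system:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically bucket a user into one variant of an experiment.

    Hashing (experiment name + user id) gives every user a stable,
    roughly uniform bucket without storing any per-user assignment state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always sees the same consent-screen copy for a given test,
# so the experimenter can later compare opt-in rates across buckets.
variants = ["control", "safety-example-a", "safety-example-b"]
chosen = assign_variant("user-42", "consent-copy-test", variants)
```

Because assignment is a pure function of the user and experiment IDs, re-running it for the same user is idempotent — which is what makes measuring which copy best converts users to ‘yes’ straightforward at platform scale.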

But given that Facebook controls the entire consent flow, and can rely on big data insights gleaned from its own platform (of 2BN+ users), this is not even remotely a fair fight.

Consent is being manipulated, not freely given. This is big data-powered mass manipulation of human decisions — iterated until the ‘right’ answer (for Facebook’s business) is ‘selected’ by the user.

Data protection experts we spoke to earlier this week do not believe Facebook’s approach to consent will be legal under GDPR. Legal challenges are certain at this point.

But legal challenges also take time. And in the meanwhile Facebook users will be manipulated into agreeing with things that align with the company’s data-harvesting business interests — and handing over their sensitive personal information without understanding the full implications.

It’s also not clear how many Facebook users are being auto-enrolled into this facial recognition test — we’ve put questions to it and will update this post with any reply.

Last month Facebook said it would be rolling out “a limited test of some of the additional choices we’ll ask people to make as part of GDPR”.

It also said it was “starting by asking only a small percentage of people so that we can be sure everything is working properly”, and further claimed: “[T]he changes we’re testing will let people choose whether to enable facial recognition, which has previously been unavailable in the EU.”

Facebook’s wording in those statements is very interesting — with no number put on how many people will be made into test subjects (though it is very clearly trying to play the experiment down; “limited test”, “small”) — so we simply don’t know how many Europeans are having their facial data processed by Facebook right now, without their upfront consent.

Nor do we know where in Europe all these test subjects are located. But it’s pretty likely the test contravenes even current EU data protection laws. (GDPR applies from May 25.)

Facebook’s description of its testing plan last month was also disingenuous as it implied users would get to choose to enable facial recognition. In fact, it’s just switching it on — saddling test subjects with the effort of opting out.

The company was likely hoping the test would not attract too much attention, given how much GDPR news is flowing through its PR channels and how much attention the topic is generally sucking up. And we can see why: it has essentially reversed its 2012 decision to switch off facial recognition in Europe (made after the feature attracted so much blowback) in order to grab as much data as it can while it can.

Millions of Europeans could be having their fundamental rights trampled on here, yet again. We just don’t know what the company actually means by “small”. (The EU has ~500M inhabitants — even 1%, a “small percentage”, of that would involve millions of people… )

Once again Facebook isn’t telling how many people it’s experimenting on.

Twitter doesn’t care that someone is building a bot army in Southeast Asia

Facebook’s lack of attention to how third parties are using its service to reach users ended up with CEO Mark Zuckerberg taking questions from Congressional committees. With that in mind, you’d think that others in the social media space might be more attentive than usual to potentially malicious actors on their platforms.

Twitter, however, is turning the other way and insisting all is normal in Southeast Asia, despite the emergence of thousands of bot-like accounts that have followed prominent users in the region en masse over the past month.

Scores of reporters and Twitter users with large followings — yours truly included — have noticed swarms of accounts with generic names, no profile photo, no bio and no tweets have followed them over the past month.

These accounts might be evidence of a new ‘bot farm’ — the creation of large numbers of accounts for sale or on-demand usage, a practice Twitter has cracked down on — or the groundwork for more nefarious activities; it’s too early to tell.

In what appears to be the first regional Twitter bot campaign, a flood of suspicious new followers has been reported by users across Southeast Asia and beyond, including Thailand, Myanmar, Cambodia, Hong Kong, China, Taiwan and Sri Lanka, among other places.

While it is true that the new accounts have done nothing yet, the fact that a large number of newly-created accounts have popped up out of nowhere with the aim of following the region’s most influential voices should be enough to concern Twitter. Especially since this is Southeast Asia, a region where Facebook is beset with controversies — from its role inciting ethnic hatred in Myanmar, to allegedly assisting censors in Vietnam, witnessing users jailed for violating lese majeste in Thailand, and aiding the election of controversial Philippines leader Duterte.

Then there are governments themselves. Vietnam has pledged to build a cyber army to combat “wrongful views,” while other regimes in Southeast Asia have clamped down on social media users.

Despite that, Twitter isn’t commenting.

The U.S. company issued a no comment to TechCrunch when we asked for further information about this rush of new accounts, and what action Twitter will take.

A source close to the company suggested that the sudden accumulation of new followers is “a pretty standard sign-up, or onboarding, issue” that is down to new accounts selecting to follow the suggested accounts that Twitter proposes during the new account creation process.

Twitter is more than 10 years old, and since this is the first time this has happened in Southeast Asia, that explanation seems inadequate at face value. More generally, the dismissive approach seems particularly naive. Twitter should be looking into the issue more closely, even if for now the apparent bot army isn’t being put to use.

Facebook is considered to be the internet by many in Southeast Asia, and the social network is considerably more popular than Twitter in the region, but there remains a cause for concern here.

“If we’ve learned anything from the Facebook scandal, it’s that what can at first seem innocuous can be leveraged to quite insidious and invasive effect down the line,” Francis Wade, who recently published a book on violence in Myanmar, told the Financial Times this week. “That makes Twitter’s casual dismissal of concerns around this all the more unsettling.”

Senator pushes for stronger FTC oversight of Facebook

Senator Richard Blumenthal (D-CT) sent a letter to FTC Acting Chairwoman Maureen Ohlhausen today, encouraging the commission to consider evidence that Facebook may have violated a 2011 consent decree and to pursue regulations that will protect consum…

Facebook moves to shrink its legal liabilities under GDPR

Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale.

Late yesterday Reuters reported on a change incoming to Facebook’s T&Cs that it said will be pushed out next month — meaning all non-EU international users are switched from having their data processed by Facebook Ireland to Facebook USA.

With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming General Data Protection Regulation (GDPR) — which applies from May 25 — will not cover the ~1.5BN+ international Facebook users who aren’t EU citizens (but currently have their data processed in the EU, by Facebook Ireland).

The U.S. does not have a data protection framework comparable to GDPR, and the incoming EU framework substantially strengthens penalties for data protection violations, making the move a pretty logical one for Facebook’s lawyers thinking about how to shrink the company’s GDPR liabilities.

Reuters says Facebook confirmed the impending update to the T&Cs of non-EU international users, though the company played down the significance — repeating its claim that it will be making the same privacy “controls and settings” available everywhere. (Though, as experts have pointed out, this does not mean the same GDPR principles will be applied by Facebook everywhere.)

Critics have couched the T&Cs shift as regressive — arguing it’s a reduction in the level of privacy protection that would otherwise have applied for international users, thanks to GDPR. Although whether these EU privacy rights would really have been enforceable for non-Europeans is questionable.

At the time of writing Facebook had not responded to a request for comment on the change. Update: It’s now sent us the following statement — attributed to deputy chief global privacy officer, Stephen Deadman: “The GDPR and EU consumer law set out specific rules for terms and data policies which we have incorporated for EU users.  We have been clear that we are offering everyone who uses Facebook the same privacy protections, controls and settings, no matter where they live. These updates do not change that.” 

The company’s general argument is that the EU law takes a prescriptive approach — which can make certain elements irrelevant for international users outside the bloc. It also claims it’s working on being more responsive to regional norms and local frameworks. (Which will presumably be music to the New Zealand privacy commissioner‘s ears, for one…)

According to Reuters the T&Cs shift will affect more than 70 per cent of Facebook’s 2BN+ users. As of December, Facebook had 239M users in the US and Canada; 370M in Europe; and 1.52BN users elsewhere.

The news agency also reports that Microsoft-owned LinkedIn is one of several other multinational companies planning to make the same data processing shift for international users — with LinkedIn’s new terms set to take effect on May 8, moving non-Europeans to contracts with the U.S.-based LinkedIn Corp.

In a statement to Reuters about the change LinkedIn also played it down, saying: “We’ve simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data.”

One interesting question is whether these sorts of data processing shifts could encourage regulators in international regions outside the EU to push for a similarly extraterritorial scope for their local privacy laws — or face their citizens’ data falling between the jurisdiction cracks via processing arrangements designed to shrink companies’ legal liabilities.

Another interesting question is how Facebook (or any other multinational making the same shift) can be entirely sure it’s not risking violating any of its EU users’ fundamental rights if it accidentally misclassifies an individual as a non-EU international user — and processes their data via Facebook USA.

Keeping data processing operations properly segmented can be difficult. As can definitively identifying a user’s legal jurisdiction based on their location (if that’s even available). So while Facebook’s T&C change here looks largely intended to shrink its legal liabilities under GDPR, it’s possible the change will open up another front for individuals to pursue strategic litigation in the coming months.

Facebook has a new job posting calling for chip designers

Facebook has posted a job opening looking for an expert in ASIC and FPGA, two custom silicon designs that companies can gear toward specific use cases — particularly in machine learning and artificial intelligence.

There’s been a lot of speculation in the valley as to what Facebook’s interpretation of custom silicon might be, especially as it looks to optimize its machine learning tools — something that CEO Mark Zuckerberg referred to as a potential solution for identifying misinformation on Facebook using AI. The whispers about Facebook’s customized hardware vary depending on who you talk to, but generally center on operating on the massive graph of personal data Facebook possesses. Most in the industry speculate that it’s being optimized for Caffe2, an AI infrastructure deployed at Facebook, that would help it tackle those kinds of complex problems.

FPGAs are designed to be flexible and modular, and are being championed by Intel as a way to adapt to a changing machine learning-driven landscape. The downside commonly cited for FPGAs is that they are niche pieces of hardware that are complex to calibrate and modify, as well as expensive, making them less of a cover-all solution for machine learning projects. An ASIC is similarly a customized piece of silicon that a company can gear toward something specific, like mining cryptocurrency.

Facebook’s director of AI research tweeted about the job posting this morning, noting that he previously worked in chip design.

While the whispers grow louder and louder about Facebook’s potential hardware efforts, this does seem to serve as at least another partial data point that the company is looking to dive deep into custom hardware to deal with its AI problems. That would mostly exist on the server side, though Facebook is looking into other devices like a smart speaker. Given the immense amount of data Facebook has, it would make sense that the company would look into customized hardware rather than use off-the-shelf components like those from Nvidia.

(The wildest rumor we’ve heard about Facebook’s approach is that it’s a diurnal system, flipping between machine training and inference depending on the time of day and whether people are, well, asleep in that region.)

Most of the other large players have found themselves looking into their own customized hardware. Google has its TPU for its own operations, while Amazon is also reportedly working on chips for both training and inference. Apple, too, is reportedly working on its own silicon, which could potentially rip Intel out of its line of computers. Microsoft is also diving into FPGA as a potential approach for machine learning problems.

Still, that it’s looking into ASIC and FPGA does seem to be just that — dipping toes into the water for FPGA and ASIC. Nvidia has a lot of control over the AI space with its GPU technology, which it can optimize for popular AI frameworks like TensorFlow. And there are also a large number of very well-funded startups exploring customized AI hardware, including Cerebras Systems, SambaNova Systems, Mythic, and Graphcore (and that isn’t even getting into the large amount of activity coming out of China). So there are, to be sure, a lot of different interpretations as to what this looks like.

One significant problem Facebook may face is that this job opening may just sit open in perpetuity. Another common criticism of FPGA as a solution is that it is hard to find developers who specialize in it. While these kinds of problems are becoming much more interesting, it’s not clear whether this is an experiment or Facebook going all-in on custom hardware for its operations.

But nonetheless, this seems like more confirmation of Facebook’s custom hardware ambitions, and another piece of validation that Facebook’s data set is becoming so large that if it hopes to tackle complex AI problems like misinformation, it’s going to have to figure out how to create some kind of specialized hardware to actually deal with it.

A representative from Facebook had not returned a request for comment at the time of writing.

Data experts on Facebook’s GDPR changes: Expect lawsuits

Make no mistake: Fresh battle lines are being drawn in the clash between data-mining tech giants and Internet users over people’s right to control their personal information and protect their privacy.

An update to European Union data protection rules next month — called the General Data Protection Regulation — is the catalyst for this next chapter in the global story of tech vs privacy.

A fairytale ending would remove that ugly ‘vs’ and replace it with an enlightened ‘+’. But there’s no doubt it will be a battle to get there — requiring legal challenges and fresh case law to be set down — as an old guard of dominant tech platforms marshal their extensive resources to try to hold onto the power and wealth gained through years of riding roughshod over data protection law.

Payback is coming though. Balance is being reset. And the implications of not regulating what tech giants can do with people’s data have arguably never been clearer.

The exciting opportunity for startups is to skate to where the puck is going — by thinking beyond exploitative legacy business models that amount to embarrassing blackboxes whose CEOs dare not publicly admit what the systems really do — and come up with new ways of operating and monetizing services that don’t rely on selling the lie that people don’t care about privacy.

 

More than just small print

Right now the EU’s General Data Protection Regulation can take credit for a whole lot of spilt ink as tech industry small print is reworded en masse. Did you just receive a T&C update notification about a company’s digital service? Chances are it’s related to the incoming standard.

The regulation is generally intended to strengthen Internet users’ control over their personal information, as we’ve explained before. But its focus on transparency — making sure people know how and why data will flow if they choose to click ‘I agree’ — combined with supersized fines for major data violations represents something of an existential threat to ad tech processes that rely on pervasive background harvesting of users’ personal data to be siphoned as biofuel for their vast, proprietary microtargeting engines.

This is why Facebook is not going gentle into a data processing goodnight.

Indeed, it’s seizing on GDPR as a PR opportunity — shamelessly stamping its brand on the regulatory changes it lobbied so hard against, including by taking out full page print ads in newspapers…

This is of course another high gloss plank in the company’s PR strategy to try to convince users to trust it — and thus to keep giving it their data. Because — and only because — GDPR gives consumers more opportunity to lock down access to their information and close the shutters against countless prying eyes.

But the pressing question for Facebook — and one that will also test the mettle of the new data protection standard — is whether or not the company is doing enough to comply with the new rules.

One important point re: Facebook and GDPR is that the standard applies globally, i.e. for all Facebook users whose data is processed by its international entity, Facebook Ireland (and thus within the EU); but not necessarily universally — with Facebook users in North America not legally falling under the scope of the regulation.

Users in North America will only benefit if Facebook chooses to apply the same standard everywhere. (And on that point the company has stayed exceedingly fuzzy.)

It has claimed it won’t give US and Canadian users second tier status vs the rest of the world where their privacy is concerned — saying they’re getting the same “settings and controls” — but unless or until US lawmakers spill some ink of their own there’s nothing but an embarrassing PR message to regulate what Facebook chooses to do with Americans’ data. It’s the data protection principles, stupid.

Zuckerberg was asked by US lawmakers last week what kind of regulation he would and wouldn’t like to see laid upon Internet companies — and he made a point of arguing for privacy carve outs to avoid falling behind, of all things, competitors in China.

Which is an incredibly chilling response when you consider how few rights — including human rights — Chinese citizens have. And how data-mining digital technologies are being systematically used to expand Chinese state surveillance and control.

The ugly underlying truth of Facebook’s business is that it also relies on surveillance to function. People’s lives are its product.

That’s why Zuckerberg couldn’t tell US lawmakers to hurry up and draft their own GDPR. He’s the CEO saddled with trying to sell an anti-privacy, anti-transparency position — just as policymakers are waking up to what that really means.

 

Plus ça change?

Facebook has announced a series of updates to its policies and platform in recent months, which it’s said are coming to all users (albeit in ‘phases’). The problem is that most of what it’s proposing to achieve GDPR compliance is simply not adequate.

Coincidentally many of these changes have been announced amid a major data mishandling scandal for Facebook, in which it’s been revealed that data on up to 87M users was passed to a political consultancy without their knowledge or consent.

It’s this scandal that led Zuckerberg to be perched on a booster cushion in full public view for two days last week, dodging awkward questions from US lawmakers about how his advertising business functions.

He could not tell Congress there wouldn’t be other such data misuse skeletons in its closet. Indeed the company has said it expects it will uncover additional leaks as it conducts a historical audit of apps on its platform that had access to “a large amount of data”. (How large is large, one wonders… )

But whether Facebook’s business having enabled — in just one example — the clandestine psychological profiling of millions of Americans for political campaign purposes ends up being the final, final straw that catalyzes US lawmakers to agree their own version of GDPR is still tbc.

Any new law will certainly take time to formulate and pass. In the meanwhile GDPR is it.

The most substantive GDPR-related change announced by Facebook to date is the shuttering of a feature called Partner Categories — in which it allowed the linking of its own information holdings on people with data held by external brokers, including (for example) information about people’s offline activities.

Evidently finding a way to close down the legal liabilities and/or engineer consent from users to that degree of murky privacy intrusion — involving pools of aggregated personal data gathered by goodness knows who, how, where or when — was a bridge too far for the company’s army of legal and policy staffers.

Other notable changes it has so far made public include consolidating settings onto a single screen vs the confusing nightmare Facebook has historically required users to navigate just to control what’s going on with their data (remember the company got a 2011 FTC sanction for “deceptive” privacy practices); rewording its T&Cs to make it more clear what information it’s collecting for what specific purpose; and — most recently — revealing a new consent review process whereby it will be asking all users (starting with EU users) whether they consent to specific uses of their data (such as processing for facial recognition purposes).

As my TC colleague Josh Constine wrote earlier in a critical post dissecting the flaws of Facebook’s approach to consent review, the company is — at the very least — not complying with the spirit of the law.

Indeed, Facebook appears pathologically incapable of abandoning its long-standing modus operandi of socially engineering consent from users (doubtless fed via its own self-reinforced A/B testing ad expertise). “It feels obviously designed to get users to breeze through it by offering no resistance to continue, but friction if you want to make changes,” was his summary of the process.

But, as we’ve pointed out before, concealment is not consent.

To get into a few specifics, pre-ticked boxes — which is essentially what Facebook is deploying here, with a big blue “accept and continue” button designed to grab your attention as it’s juxtaposed against an anemic “manage data settings” option (which if you even manage to see it and read it sounds like a lot of tedious hard work) — aren’t going to constitute valid consent under GDPR.

Nor is this what ‘privacy by default’ looks like — another staple principle of the regulation. On the contrary, Facebook is pushing people to do the opposite: Give it more of their personal information — and fuzzing why it’s asking by bundling a range of usage intentions.

The company is risking a lot here.

In simple terms, seeking consent from users in a way that’s not fair because it’s manipulative means consent is not being freely given. Under GDPR, it won’t be consent at all. So Facebook appears to be seeing how close to the wind it can fly to test how regulators will respond.

Safe to say, EU lawmakers and NGOs are watching.

 

“Yes, they will be taken to court”

“Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment,” runs one key portion of GDPR.

Now compare that with: “People can choose to not be on Facebook if they want” — which was Facebook’s deputy chief privacy officer, Rob Sherman’s, paper-thin defense to reporters for the lack of an overall opt out for users to its targeted advertising.

Data protection experts who TechCrunch spoke to suggest Facebook is failing to comply with, not just the spirit, but the letter of the law here. Some were exceedingly blunt on this point.

“I am less impressed,” said law professor Mireille Hildebrandt discussing how Facebook is railroading users into consenting to its targeted advertising. “It seems they have announced that they will still require consent for targeted advertising and refuse the service if one does not agree. This violates [GDPR] art. 7.4 jo recital 43. So, yes, they will be taken to court.”

“Zuckerberg appears to view the combination of signing up to T&Cs and setting privacy options as ‘consent’,” adds cyber security professor Eerke Boiten. “I doubt this is explicit or granular enough for the personal data processing that FB do. The default settings for the privacy settings certainly do not currently provide for ‘privacy by default’ (GDPR Art 25).

“I also doubt whether FB Custom Audiences work correctly with consent. FB finds out and retains a small bit of personal info through this process (that an email address they know is known to an advertiser), and they aim to shift the data protection legal justification on that to the advertisers. Do they really then not use this info for future profiling?”

That looming tweak to the legal justification of Facebook’s Custom Audiences feature — a product which lets advertisers upload contact lists in a hashed form to find any matches among its own user-base (so those people can be targeted with ads on Facebook’s platform) — also looks problematic.
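In outline, hashed contact matching of this sort works like the following Python sketch. This is a simplified assumption about the general technique — the exact normalization rules and hash function used by Facebook are not specified here, and all names and data are illustrative:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Common practice: trim and lowercase before hashing, so the same
    # address always produces the same digest on both sides.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The advertiser hashes its contact list before upload...
advertiser_list = {normalize_and_hash(e)
                   for e in ["Alice@example.com", "bob@example.com"]}

# ...and the platform hashes its own users' emails the same way, so
# matching reduces to comparing digests, never raw addresses.
platform_users = {normalize_and_hash(email): user_id
                  for user_id, email in [("u1", "alice@example.com"),
                                         ("u2", "carol@example.com")]}

matches = [uid for digest, uid in platform_users.items()
           if digest in advertiser_list]
# Alice matches despite the differing case; Carol is not in the list.
```

Note the detail Boiten flags above: even though raw emails are never exchanged, the match result itself is a new piece of personal information — the platform learns that a given user’s address is known to a given advertiser.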

Here the company seems to be intending to try to claim a change in the legal basis, pushed out via new terms in which it instructs advertisers to agree they are the data controller (and it is merely a data processor). And thereby seek to foist a greater share of the responsibility for obtaining consent to processing user data onto its customers.

However such legal determinations are simply not a matter of contract terms. They are based on the fact of who is making decisions about how data is processed. And in this case — as other experts have pointed out — Facebook would be classed as a joint controller with any advertisers that upload personal data. The company can’t use a T&Cs change to opt out of that.

Wishful thinking is not a reliable approach to legal compliance.

 

Fear and manipulation of highly sensitive data

Over many years of privacy-hostile operation, Facebook has shown it has a major appetite for even very sensitive data. And GDPR does not appear to have blunted that.

Let’s not forget, facial recognition was a platform feature that got turned off in the EU, thanks to regulatory intervention. Yet here Facebook is now trying to use GDPR as a route to process this sensitive biometric data for international users after all — by pushing individual users to consent to it by dangling a few ‘feature perks’ at the moment of consent.

Veteran data protection and privacy consultant, Pat Walshe, is unimpressed.

“The sensitive data tool appears to be another data grab,” he tells us, reviewing Facebook’s latest clutch of ‘GDPR changes’. “Note the subtlety. It merges ‘control of sharing’ such data with FB’s use of the data “to personalise features and products”. From the info available that isn’t sufficient to amount to consent for such sensitive data and nor is it clear folks can understand the broader implications of agreeing.

“Does it mean ads will appear in Instagram? WhatsApp etc? The default is also set to ‘accept’ rather than ‘review and consider’. This is really sensitive data we’re talking about.”

“The face recognition suggestions are woeful,” he continues. “The second image — is using an example… to manipulate and stoke fear — “we can’t protect you”.

“Also, the choices and defaults are not compatible with [GDPR] Article 25 on data protection by design and default nor Recital 32… If I say no to facial recognition it’s unclear if other users can continue to tag me.”

Of course it goes without saying that Facebook users will keep uploading group photos, not just selfies. What’s less clear is whether Facebook will be processing the faces of other people in those shots who have not given (and/or never even had the opportunity to give) consent to its facial recognition feature.

People who might not even be users of its product.

But if it does that it will be breaking the law. Yet Facebook does indeed profile non-users — despite Zuckerberg’s claims to Congress not to know about its shadow profiles. So the risk is clear.

It can’t give non-users “settings and controls” not to have their data processed. So it’s already compromised their privacy — because it never gained consent in the first place.

New Mexico Representative Ben Lujan made this point to Zuckerberg’s face last week and ended the exchange with a call to action: “So you’re directing people that don’t even have a Facebook page to sign up for a Facebook page to access their data… We’ve got to change that.”

[Image: Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018. Photo: Chip Somodevilla/Getty Images]

But nothing in the measures Facebook has revealed so far, as its ‘compliance response’ to GDPR, suggests it intends to proactively change that.

Walshe also critically flags how — again, at the point of consent — Facebook’s review process deploys examples of the social aspects of its platform (such as how it can use people’s information to “suggest groups or other features or products”) as a tactic for manipulating people to agree to share religious affiliation data, for example.

“The social aspect is not separate to but bound up in advertising,” he notes, adding that the language also suggests Facebook uses the data.

Again, this whiffs a whole lot more than it smells like GDPR compliance.

“I don’t believe FB has done enough,” adds Walshe, giving a view on Facebook’s GDPR preparedness ahead of the May 25 deadline for the framework’s application — as Zuckerberg’s Congress briefing notes suggested the company itself believes it has. (Or maybe it just didn’t want to admit to Congress that U.S. Facebook users will get lower privacy standards vs users elsewhere.)

“In fact I know they have not done enough. Their business model is skewed against privacy — privacy gets in the way of advertising and so profit. That’s why Facebook has variously suggested people may have to pay if they want an ad-free model and so ‘pay for privacy’.”

“On transparency, there is a long way to go,” adds Boiten. “Friend suggestions, profiling for advertising, use of data gathered from like buttons and web pixels (also completely missing from “all your Facebook data”), and the newsfeed algorithm itself are completely opaque.”

“What matters most is whether FB’s processing decisions will be GDPR compliant, not what exact controls are given to FB members,” he concludes.

US lawmakers also pressed Zuckerberg on how much of the information his company harvests on people who have a Facebook account is revealed to them when they ask for it — via its ‘Download your data’ tool.

His answers on this appeared to intentionally misconstrue what was being asked — presumably in a bid to mask the ugly reality of the true scope and depth of the surveillance apparatus he commands. (Sometimes with a few special ‘CEO privacy privileges’ thrown in — like being able to selectively retract just his own historical Facebook messages from conversations, ahead of bringing the feature to anyone else.)

‘Download your data’ is clearly partial and self-serving — and thus it also looks very far from being GDPR compliant.

 

Not even half the story

Facebook is not even complying with the spirit of current EU data protection law on data downloads. Subject Access Requests give individuals the right to request not just the information they have voluntarily uploaded to a service, but also the personal data the company holds about them: including a description of that personal data; the reasons it is being processed; and whether it will be given to any other organizations or people.

Facebook not only does not include people’s browsing history in the info it provides when you ask to download your data — which, incidentally, its own cookies policy confirms it tracks (via things like social plug-ins and tracking pixels on millions of popular websites) — it also does not include a complete list of advertisers on its platform that have your information.

Instead, after a wait, it serves up an eight-week snapshot. But even this two month view can still stretch to hundreds of advertisers per individual.

If Facebook gave users a comprehensive list of the advertisers with access to their information, the number of third party companies would clearly stretch into the thousands. (In some cases thousands might even be a conservative estimate.)

There’s plenty of other information harvested from users that Facebook also intentionally fails to divulge via ‘Download your data’. And — to be clear — this isn’t a new problem either. The company has a very long history of blocking this type of request.

In the EU it currently invokes an exception in Irish law to circumvent fuller compliance — which, even setting GDPR aside, raises some interesting competition law questions, as Paul-Olivier Dehaye told the UK parliament last month.

“‘All your Facebook data’ isn’t a complete solution,” agrees Boiten. “It misses the info Facebook uses for auto-completing searches; it misses much of the information they use for suggesting friends; and I find it hard to believe that it contains the full profiling information.”

“‘Ads Topics’ looks rather random and undigested, and doesn’t include the clear categories available to advertisers,” he further notes.

Facebook wouldn’t comment publicly about this when we asked. But it maintains its approach towards data downloads is GDPR compliant — and says it has reviewed what it offers with regulators to get feedback.

Earlier this week it also put out a wordy blog post attempting to defuse this line of attack by pointing the finger of blame at the rest of the tech industry — saying, essentially, that a whole bunch of other tech giants are at it too.

Which is not much of a moral defense even if the company believes its lawyers can sway judges with it. (Ultimately I wouldn’t fancy its chances; the EU’s top court has a robust record of defending fundamental rights.)

 

Think of the children…

What its blog post didn’t say — yet again — was anything about how all the non-users it nonetheless tracks around the web are able to have any kind of control over its surveillance of them.

And remember, some Facebook non-users will be children.

So yes, Facebook is inevitably tracking kids’ data without parental consent. Under GDPR that’s a major no-no.

TC’s Constine had a scathing assessment of even the on-platform system that Facebook has devised in response to GDPR’s requirements on parental consent for processing the data of users who are between the ages of 13 and 15.

“Users merely select one of their Facebook friends or enter an email address, and that person is asked to give consent for their ‘child’ to share sensitive info,” he observed. “But Facebook blindly trusts that they’ve actually selected their parent or guardian… [Facebook’s] Sherman says Facebook is “not seeking to collect additional information” to verify parental consent, so it seems Facebook is happy to let teens easily bypass the checkup.”

So again, the company is being shown doing the minimum possible — in what might be construed as a cynical attempt to check another compliance box and carry on its data-sucking business as usual.

Given that intransigence it really will be up to the courts to bring the enforcement stick. Change, as ever, is a process — and hard won.

Hildebrandt is at least hopeful that a genuine reworking of Internet business models is on the way, though — albeit not overnight. And not without a fight.

“In the coming years the landscape of all this silly microtargeting will change, business models will be reinvented and this may benefit both the advertisers, consumers and citizens,” she tells us. “It will hopefully stave off the current market failure and the uprooting of democratic processes… Though nobody can predict the future, it will require hard work.”

Login With Facebook data hijacked by JavaScript trackers

Facebook confirms to TechCrunch that it’s investigating a security research report that shows Facebook user data can be grabbed by third-party JavaScript trackers embedded on websites using Login With Facebook. The exploit lets these trackers gather a user’s data including name, email address, age range, gender, locale, and profile photo, depending on what users originally provided to the website. It’s unclear what these trackers do with the data, but many of their parent companies including Tealium, AudienceStream, Lytics, and ProPS sell publisher monetization services based on collected user data.
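The mechanics are simple enough to sketch. Once a page has loaded Facebook’s JavaScript SDK and the user has logged in, the SDK’s global `FB` object is readable by every script running on that page, first party or not — so an embedded tracker can call the same `FB.getLoginStatus` and `FB.api('/me')` methods the site itself uses. In the sketch below the `FB` object is a mock standing in for the real SDK (the method names and profile fields match the real API; the responses are invented for illustration), so it runs standalone:

```javascript
// Mock of the global FB object a logged-in page would expose.
// (Stand-in for the real SDK loaded from connect.facebook.net.)
const FB = {
  getLoginStatus(cb) {
    // Real SDK reports whether the visitor is logged in via Facebook.
    cb({ status: 'connected', authResponse: { userID: 'app-scoped-123' } });
  },
  api(path, params, cb) {
    // Real SDK would hit the Graph API with the page's access token.
    cb({ id: 'app-scoped-123', name: 'Jane Doe', email: 'jane@example.com' });
  },
};

// What an embedded third-party tracker can do, with no credentials of its own:
// piggyback on the first party's SDK session and harvest the profile.
function trackerHarvest(fb, exfiltrate) {
  fb.getLoginStatus((res) => {
    if (res.status === 'connected') {
      fb.api('/me', { fields: 'id,name,email' }, (profile) => {
        exfiltrate(profile);
      });
    }
  });
}

// Demo: the "exfiltrated" profile is just collected locally here.
const stolen = [];
trackerHarvest(FB, (p) => stolen.push(p));
console.log(stolen[0].name); // the tracker now holds the user's identity
```

In a real deployment the `exfiltrate` callback would typically fire a beacon to the tracker’s own server; the standard defense is to sandbox third-party scripts (for example in a cross-origin iframe) so they cannot reach the page’s global scope at all.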

The abusive scripts were found on 434 of the top 1 million websites including freelancer site Fiverr.com, camera seller B&H Photo And Video, and cloud database provider MongoDB. That’s according to Steven Englehardt and his colleagues at Freedom To Tinker, which is hosted by Princeton’s Center For Information Technology Policy.

Meanwhile, concert site BandsInTown was found to be passing Login With Facebook user data to embedded scripts on sites that install its Amplified advertising product. An invisible BandsInTown iframe would load on these sites, pulling in user data that was then accessible to embedded scripts. That let any malicious site using BandsInTown learn the identity of visitors. BandsInTown has now fixed this vulnerability.

TechCrunch is still awaiting a formal statement from Facebook beyond “We will look into this and get back to you.” After TechCrunch brought the issue to MongoDB’s attention this morning, the company investigated and provided this statement: “We were unaware that a third-party technology was using a tracking script that collects parts of Facebook user data. We have identified the source of the script and shut it down.”

BandsInTown tells me “Bandsintown does not disclose unauthorized data to third parties and upon receiving an email from a researcher presenting a potential vulnerability in a script running on our ad platform, we quickly took the appropriate actions to resolve the issue in full.” Fiverr did not respond before press time.

 

The discovery of these data security flaws comes at a vulnerable time for Facebook. The company is trying to recover from the Cambridge Analytica scandal, CEO Mark Zuckerberg just testified before Congress, and today it unveiled privacy updates to comply with Europe’s GDPR law. But Facebook’s recent API changes designed to safeguard user data didn’t prevent these exploits. And the situation shines more light on the little-understood ways Facebook users are tracked around the Internet, not just on its site.

“When a user grants a website access to their social media profile, they are not only trusting that website, but also third parties embedded on that site,” writes Englehardt. This chart shows what some trackers are pulling from users. Freedom To Tinker warned OnAudience about another security issue recently, leading it to stop collecting user info.

Facebook could have identified these trackers and prevented these exploits with sufficient API auditing. It’s currently ramping up API auditing as it hunts down other developers that might have improperly shared, sold, or used data, as happened when user data from Dr. Aleksandr Kogan’s app ended up in the hands of Cambridge Analytica. Facebook could also change its systems to prevent developers from taking an app-specific user ID and employing it to discover that person’s permanent overarching Facebook user ID.

Revelations like this are likely to provoke a bigger data backlash. Over the years, the public has become complacent about the ways their data is exploited without consent around the web. While it’s Facebook in the hot seat, other tech giants like Google rely on user data and operate developer platforms that can be tough to police. And news publishers, desperate to earn enough from ads to survive, often fall in with sketchy ad networks and trackers.

Zuckerberg makes an easy target because the Facebook founder is still the CEO, allowing critics and regulators to blame him for the social network’s failings. But any company playing fast and loose with user data should be sweating.