Instagram test lets public accounts remove followers

You've long had the option to remove Instagram followers if you keep your account private, but that's something of a compromise. Why do you have to shut yourself off from the outside world just to keep out a few undesirables? You might not have to…

It’s official: Brexit campaign broke the law — with social media’s help

The UK’s Electoral Commission has published the results of a near nine-month-long investigation into Brexit referendum spending and has found that the official Vote Leave campaign broke the law by breaching election campaign spending limits.

Vote Leave broke the law, the Commission found, including by channeling money to a Canadian data firm, AggregateIQ, to target political advertising on Facebook’s platform via undeclared joint working with another Brexit campaign, BeLeave.

AggregateIQ remains the subject of a separate joint investigation by privacy watchdogs in Canada and British Columbia.

The Electoral Commission’s investigation found evidence that BeLeave spent more than £675,000 with AggregateIQ under a common arrangement with Vote Leave. Yet the two campaigns had failed to disclose on their referendum spending returns that they had a common plan.

As the designated lead leave campaign, Vote Leave had a £7M spending limit under UK law. But via its joint spending with BeLeave, the Commission determined, it actually spent £7,449,079 — exceeding the legal limit by almost half a million pounds.
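In concrete terms, the Commission’s figures put the overspend at:

    \[ \pounds 7{,}449{,}079 \;-\; \pounds 7{,}000{,}000 \;=\; \pounds 449{,}079 \]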

The June 2016 referendum resulted in a narrow 52:48 majority for the UK to leave the European Union. Two years on from the vote, the government has yet to agree a coherent strategy for moving negotiations with the EU forward, leaving businesses to absorb ongoing uncertainty and society as divided as ever.

Meanwhile, Facebook — whose platform played a key role in distributing referendum messaging — booked revenue of around $40.7BN in 2017 alone, reporting a full year profit of almost $16BN.

Back in May, long-time leave supporter and MEP Nigel Farage told CEO Mark Zuckerberg to his face in the European Parliament that without “Facebook and other forms of social media there is no way that Brexit or Trump or the Italian elections could ever possibly have happened”.

The Electoral Commission’s investigation focused on funding and spending, and mainly concerned five payments made to AggregateIQ in June 2016 — payments for campaign services for the EU Referendum — by the three Brexit campaigns it investigated (the third being Veterans for Britain).

Veterans for Britain’s spending return included a donation of £100,000 that was reported as a cash donation received and accepted on 20 May 2016. But the Commission found this was in fact a payment by Vote Leave to AggregateIQ for services provided to Veterans for Britain in the final days of the EU Referendum campaign. The date was also incorrectly reported: It was actually paid by Vote Leave on 29 June 2016.

Despite the donation to a third Brexit campaign by the official Vote Leave campaign being for services provided by AggregateIQ, which was simultaneously providing services to Vote Leave, the Commission did not deem it to constitute joint working, writing: “[T]he evidence we have seen does not support the concern that the services were provided to Veterans for Britain as joint working with Vote Leave.”

It was, however, found to constitute an inaccurate donation report — another offense under the UK’s Political Parties, Elections and Referendums Act 2000.

The report details multiple issues with spending returns across the three campaigns. And the Commission has issued a series of fines to the three Brexit campaigns.

It has also referred two individuals — Vote Leave’s David Alan Halsall and BeLeave’s Darren Grimes — to the UK’s Metropolitan Police Service, which has the power to instigate a criminal investigation.

Early last year the Commission decided not to fully investigate Vote Leave’s spending, but it says new information had emerged by October — suggesting “a pattern of action by Vote Leave” — so it revisited that assessment and reopened an investigation in November.

Its report also makes it clear that Vote Leave failed to co-operate with its investigation — including by failing to produce requested information and documents; by failing to provide representatives for interview; by ignoring deadlines to respond to formal investigation notices; and by objecting to the fact of the investigation, including suggesting it would judicially review the opening of the investigation.

Judging by the Commission’s account, Vote Leave seemingly did everything it could to try to thwart and delay the investigation — which is only reporting now, two years on from the Brexit vote and with mere months of negotiating time left before the end of the formal Article 50 exit notification process.

What’s crystal clear from this report is that following money and data trails takes time and painstaking investigation. Which — given that, y’know, democracy is at stake — heavily bolsters the case for far more stringent regulations and transparency mechanisms, to prevent powerful social media platforms from quietly absorbing politically motivated money and messaging without recognizing any responsibility to disclose the transactions, let alone to carry out due diligence on who or what may be funding the political spending.

The political ad transparency measures that Facebook has announced so far come far too late for Brexit — or indeed for the 2016 US presidential election, when its platform carried and amplified Kremlin-funded divisive messaging that reached the eyeballs of hundreds of millions of US voters.

Last week the UK’s information commissioner, Elizabeth Denham, criticized Facebook for transparency and control failures relating to political ads on its platform. She also announced the watchdog’s intention to fine Facebook the maximum possible for breaches of UK data protection law relating to the Cambridge Analytica scandal, after it emerged that information on as many as 87 million Facebook users was extracted from the platform and passed to a controversial UK political consultancy without most people’s knowledge or consent.

She also published a series of policy recommendations around digital political campaigning — calling for an ethical pause on the use of personal data for political ad targeting, and warning that a troubling lack of transparency about how people’s data is being used risks undermining public trust in democracy.

“Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned.

The Cambridge Analytica Facebook scandal is linked to the Brexit referendum via AggregateIQ — which was also a contractor for Cambridge Analytica, and also handled Facebook user information that Cambridge Analytica had improperly obtained after paying a Cambridge University academic to harvest people’s data via a quiz app and use it to create psychometric profiles for ad targeting.

The Electoral Commission says it was approached by Facebook during the Brexit campaign spending investigation with “some information about how Aggregate IQ used its services during the EU Referendum campaign”.

We’ve reached out to Facebook for comment on the report and will update this story with any response.

The Commission states that evidence from Facebook indicates that AggregateIQ used “identical target lists for Vote Leave and BeLeave ads”, although in at least one instance the BeLeave ads “were not run”.

It writes:

BeLeave’s ability to procure services from Aggregate IQ only resulted from the actions of Vote Leave, in providing those donations and arranging a separate donor for BeLeave. While BeLeave may have contributed its own design style and input, the services provided by Aggregate IQ to BeLeave used Vote Leave messaging, at the behest of BeLeave’s campaign director. It also appears to have had the benefit of Vote Leave data and/or data it obtained via online resources set up and provided to it by Vote Leave to target and distribute its campaign material. This is shown by evidence from Facebook that Aggregate IQ used identical target lists for Vote Leave and BeLeave ads, although the BeLeave ads were not run.

“We also asked for copies of the adverts Aggregate IQ placed for BeLeave, and for details of the reports he received from Aggregate IQ on their use. Mr Grimes replied to our questions,” it further notes in the report.

At the height of the referendum campaign — at a crucial moment when Vote Leave had reached its official spending limit — officials from the lead leave campaign persuaded BeLeave’s only other donor, an individual called Anthony Clake, to let it funnel a donation from him directly to AggregateIQ, the firm Vote Leave campaign director Dominic Cummings dubbed a bunch of “social media ninjas”.

The Commission writes:

On 11 June 2016 Mr Cummings wrote to Mr Clake saying that Vote Leave had all the money it could spend, and suggesting the following: “However, there is another organisation that could spend your money. Would you be willing to spend the 100k to some social media ninjas who could usefully spend it on behalf of this organisation? I am very confident it would be well spent in the final crucial 5 days. Obviously it would be entirely legal. (sic)”

Mr Clake asked about this organisation. Mr Cummings replied as follows: “the social media ninjas are based in canada – they are extremely good. You would send your money directly to them. the organisation that would legally register the donation is a permitted participant called BeLeave, a “young people’s organisation”. happy to talk it through on the phone though in principle nothing is required from you but to wire money to a bank account if you’re happy to take my word for it. (sic)

Mr Clake then emailed Mr Grimes to offer a donation to BeLeave. He specified that this donation would be made “via the AIQ account.”

And while the Commission says it found evidence that Grimes and others from BeLeave had “significant input into the look and design of the BeLeave adverts produced by Aggregate IQ”, it also determined that Vote Leave messaging was “influential in their strategy and design” — hence its determination of a common plan between the two campaigns. AggregateIQ was the vehicle Vote Leave used to breach its campaign spending cap.

Providing examples of the collaboration it found between the two campaigns, the Commission quotes internal BeLeave correspondence — including an instruction from Grimes to: “Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”.

It writes:

On 15 June 2016 Mr Grimes told other BeLeave Board members and Aggregate IQ that BeLeave’s ads needed to be: “an effective way of pushing our more liberal and progressive message to an audience which is perhaps not as receptive to Vote Leave’s messaging.”

On 17 June 2016 Mr Grimes told other BeLeave Board members: “So as soon as we can go live. Advertising should be back on tomorrow and normal operating as of Sunday. I’d like to make sure we have loads of scheduled tweets and Facebook status. Post all of those blogs including Shahmirs [aka Shahmir Sanni, who became a BeLeave whistleblower], use favstar to check out and repost our best performing tweets. Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”

Reminder: Other people’s lives are not fodder for your feeds

#PlaneBae

You should cringe when you read that hashtag. Because it’s a reminder that people are being socially engineered by technology platforms to objectify and spy on each other for voyeuristic pleasure and profit.

The short version of the story attached to the cringeworthy hashtag is this: Earlier this month an individual, called Rosey Blair, spent all the hours of a plane flight using her smartphone and social media feeds to invade the privacy of her seat neighbors — publicly gossiping about the lives of two strangers.

Her speculation was set against a backdrop of rearview creepshots, with a few barely-there scribbles added to blot out actual facial features, even as an entire privacy-invading narrative was being spun around the two unknowing subjects.

#PlanePrivacyInvasion would be a more fitting hashtag. Or #MoralVacuumAt35000ft

And yet our youthful surveillance society started out with a far loftier idea attached: Citizen journalism.

Once we’re all armed with powerful smartphones and ubiquitously fast Internet there will be no limits to the genuinely important reportage that will flow, we were told.

There will be no way for the powerful to withhold the truth from the people.

At least that was the nirvana we were sold.

What did we get? Something that looks much closer to mass manipulation. A tsunami of ad stalking, intentionally fake news and social media-enabled demagogues expertly appropriating these very same tools — by gaming mindless, ethically nil algorithms.

Meanwhile, masses of ordinary people + ubiquitous smartphones + omnipresent social media feeds seems, for the most part, to be resulting in a kind of mainstream attention deficit disorder.

Yes, there is citizen journalism — such as people recording and broadcasting everyday experiences of aggression, racism and sexism, for example. Experiences that might otherwise go unreported, and which are definitely underreported.

That is certainly important.

But there are also these telling moments of #hashtaggable ethical blackout. As a result of what? Let’s call it the lure of ‘citizen clickbait’ — as people use their devices and feeds to mimic the worst kind of tabloid celebrity gossip ‘journalism’ by turning their attention and high tech tools on strangers, with (apparently) no major motivation beyond the simple fact that they can. Because technology is enabling them.

Social norms and common courtesy should kick in and prevent this. But social media is pushing in an unequal and opposite direction, encouraging users to turn anything — even strangers’ lives — into raw material to be repackaged as ‘content’ and flung out for voyeuristic entertainment.

It’s life reflecting commerce. But a particularly insidious form of commerce that does not accept editorial let alone ethical responsibility, has few (if any) moral standards, and relies, for continued function, upon stripping away society’s collective sense of privacy in order that these self-styled ‘sharing’ (‘taking’ is closer to the mark) platforms can swell in size and profit.

But it’s even worse than that. Social media as a data-mining, ad-targeting enterprise relies upon eroding our belief in privacy. So these platforms worry away at that by trying to disrupt our understanding of what privacy means. Because if you were to consider what another person thinks or feels — even for a millisecond — you might not post whatever piece of ‘content’ you had in mind.

For the platforms it’s far better if you just forget to think.

Facebook’s business is all about applying engineering ingenuity to eradicate the thoughtful friction of personal and societal conscience.

That’s why, for instance, it uses facial recognition technology to automate content identification — meaning there’s almost no opportunity for individual conscience to kick in and pipe up to quietly suggest that publicly tagging others in a piece of content isn’t actually the right thing to do.

Because it’s polite to ask permission first.

But Facebook’s antisocial automation pushes people away from thinking to ask for permission. There’s no button provided for that. The platform encourages us to forget all about the existence of common courtesies.

So we should not be at all surprised that such fundamental abuses of corporate power are themselves trickling down to infect the people who use and are exposed to these platforms’ skewed norms.

Viral episodes like #PlaneBae demonstrate that the same sense of entitlement to private information is being actively passed on to the users these platforms prey on and feed off — and is then getting beamed out, like radiation, to harm the people around them.

The damage is collective when societal norms are undermined.

#PlaneBae

Social media’s ubiquity means almost everyone works in marketing these days. Most people are marketing their own lives — posting photos of their pets, their kids, the latte they had this morning, the hipster gym where they work out — having been nudged to perform this unpaid labor by the platforms that profit from it.

The irony is that most of this work is being done for free. Only the platforms are being paid. Though there are some people making a very modern living: the new breed of ‘life sharers’ who willingly polish, package and post their professional existence as a brand of aspirational lifestyle marketing.

Social media’s gift to the world is that anyone can be a self-styled model now, and every passing moment a fashion shoot for hire — thanks to the largess of highly accessible social media platforms providing almost anyone who wants it with their own self-promoting shop window on the world. Plus all the promotional tools they could ever need.

Just step up to the glass and shoot.

And then your vacation beauty spot becomes just another backdrop for the next aspirational selfie. Although those aquamarine waters can’t be allowed to dampen or disrupt photo-coifed tresses, nor sand get in the camera kit. In any case, the makeup took hours to apply and there’s the next selfie to take…

What does the unchronicled life of these professional platform performers look like? A mess of preparation for projecting perfection, presumably, with life’s quotidian business stuffed higgledy-piggledy into the margins — where they actually sweat and work to deliver the lie of a lifestyle dream.

Because these are also fakes — beautiful fakes, but fakes nonetheless.

We live in an age of entitled pretence. And while it may be totally fine for an individual to construct a fictional narrative that dresses up the substance of their existence, it’s certainly not okay to pull anyone else into your pantomime. Not without asking permission first.

But the problem is that social media is now so powerfully omnipresent that its center of gravity is actively trying to pull everyone in — and its antisocial impacts frequently spill out and over the rest of us. And the platforms rarely, if ever, ask for consent.

What about the people who don’t want their lives to be appropriated as digital window-dressing? Who weren’t asking for their identity to be held up for public consumption? Who don’t want to participate in this game at all — neither to personally profit from it, nor to have their privacy trampled by it?

The problem is the push and pull of platforms against privacy has become so aggressive, so virulent, that societal norms that protect and benefit us all — like empathy, like respect — are getting squeezed and sucked in.

The ugliness is especially visible in these ‘viral’ moments when other people’s lives are snatched and consumed voraciously on the hoof — as yet more content for rapacious feeds.

#PlaneBae

Think too of the fitness celebrity who posted a creepshot + commentary about a less slim person working out at their gym.

Or the YouTuber parents who monetize videos of their kids’ distress.

Or the men who post creepshots of women eating in public — and try to claim it’s an online art project rather than what it actually is: A privacy violation and misogynistic attack.

Or, on a public street in London one day, I saw a couple of giggling teenage girls watching a man at a bus stop who was clearly mentally unwell. Pulling out a smartphone, one girl hissed to the other: “We’ve got to put this on YouTube.”

For platforms built by technologists without thought for anything other than growth, everything is a potential spectacle. Everything is a potential post.

So they press on their users to think less. And they profit at society’s expense.

It’s only now, after social media has embedded itself everywhere, that platforms are being called out for their moral vacuum; for building systems that encourage abject mindlessness in users — and serve up content so bleak it represents a form of visual cancer.

#PlaneBae

Humans have always told stories. Weaving our own narratives is both how we communicate and how we make sense of personal experience — creating order out of events that are often disorderly, random, even chaotic.

The human condition demands a degree of pattern-spotting for survival’s sake; so we can pick our individual path out of the gloom.

But platforms are exploiting that innate aspect of our character. And we, as individuals, need to get much, much better at spotting what they’re doing to us.

We need to recognize how they are manipulating us; what they are encouraging us to do — with each new feature nudge and dark pattern design choice.

We need to understand their underlying pull. The fact they profit by setting us as spies against each other. We need to wake up, personally and collectively, to social media’s antisocial impacts.

Perspective should not have to come at the expense of other people getting hurt.

This week the woman whose privacy was thoughtlessly repackaged as public entertainment when she was branded and broadcast as #PlaneBae — and who has suffered harassment and yet more unwelcome attention as a direct result — gave a statement to Business Insider.

“#PlaneBae is not a romance — it is a digital-age cautionary tale about privacy, identity, ethics and consent,” she writes. “Please continue to respect my privacy, and my desire to remain anonymous.”

And as a strategy to push against the antisocial incursions of social media, remembering to respect people’s privacy is a great place to start.

ACLU calls for a moratorium on government use of facial recognition technologies

Technology executives are pleading with the government to give them guidance on how to use facial recognition technologies, and now the American Civil Liberties Union is weighing in.

On the heels of a Microsoft statement asking the federal government to weigh in on the technology, the ACLU has called for a moratorium on its use by government agencies.

“Congress should take immediate action to put the brakes on this technology with a moratorium on its use, given that it has not been fully debated and its use has never been explicitly authorized,” said Neema Singh Guliani, ACLU legislative counsel, in a statement. “And companies like Microsoft, Amazon, and others should be heeding the calls from the public, employees, and shareholders to stop selling face surveillance technology to governments.”

In May the ACLU released a report on Amazon’s sale of facial recognition technology to different law enforcement agencies. And in June the civil liberties group pressed the company to stop selling the technology. One contract, with the Orlando Police Department, was suspended and then renewed after the uproar.

Meanwhile, Google employees revolted over their company’s work with the government on facial recognition tech… and Microsoft had problems of its own after reports surfaced of the work that the company was doing with U.S. Immigration and Customs Enforcement (ICE).

Some organizations are already working to regulate how facial recognition technologies are used. At MIT, Joy Buolamwini has created the Algorithmic Justice League, which is pushing a pledge that companies can agree to as they work on the technology.

That pledge includes commitments to value human life and dignity, including refusing to help develop lethal autonomous weapons or to equip law enforcement with facial analysis products.

Lawmakers ask FTC to investigate smart TV data collection

Senators Edward Markey (D-MA) and Richard Blumenthal (D-CT) sent a letter to the FTC this week requesting that the agency open an investigation into how smart TVs collect consumer viewing data and whether manufacturers disclose that practice adequate…

As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation

Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own.

And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created.

That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

In what companies have framed as a quest to create “better,” more efficient and more targeted services for consumers, they have tried to solve the problem of user access by moving to increasingly passive (for the user) and intrusive (by the company) forms of identification — culminating in features like Apple’s Face ID and the frivolous filters that Snap overlays on users’ selfies.

Those same technologies are also being used by security and police forces in ways that have gotten technology companies into trouble with consumers or their own staff. Amazon has been called to task for its work with law enforcement, Microsoft’s own technologies have been used to help identify immigrants at the border (indirectly aiding in the separation of families and the virtual and physical lockdown of America against most forms of immigration) and Google faced an internal company revolt over the facial recognition work it was doing for the Pentagon.

Smith posits this nightmare scenario:

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

What’s impressive about this is the intimation that it isn’t already happening (and that Microsoft isn’t enabling it). Across the world, governments are deploying these tools right now as ways to control their populations (the ubiquitous surveillance state that China has assembled, and is investing billions of dollars to upgrade, is just the most obvious example).

In this moment when corporate innovation and state power are merging in ways that consumers are only just beginning to fathom, executives who have to answer to a buying public are now pleading for government to set up some rails. Late capitalism is weird.

But Smith’s advice is prescient. Companies do need to get ahead of the havoc their innovations can wreak on the world, and they can look good while doing nothing by hiding their own abdication of responsibility on the issue behind the government’s.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act,” Smith writes.

The fact is, something does, indeed, need to be done.

As Smith writes, “The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.”

All of this takes on faith that the technology actually works as advertised. And the problem is, right now, it doesn’t.

In an op-ed earlier this month, Brian Brackeen, the chief executive of a startup working on facial recognition technologies, pulled back the curtains on the industry’s not-so-secret huge problem.

Facial recognition technology, used in the identification of suspects, negatively affects people of color. To deny this fact would be a lie.

And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.

There’s really no “nice” way to acknowledge these things.

Smith himself admits that the technology has a long way to go before it’s perfect. But the implications of applying imperfect technologies are vast — and in the case of law enforcement, not academic. Designating an innocent bystander or civilian as a criminal suspect influences how police approach an individual.

Those instances, even if they amount to only a handful, would lead me to argue that these technologies have no business being deployed in security situations.

As Smith himself notes, “Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way.”

While Smith lays out the problem effectively, he’s less clear on the solution. He’s called for a government “expert commission” to be empaneled as a first step on the road to eventual federal regulation.

That we’ve gotten here is an indication of how bad things actually are. It’s rare that a tech company has pleaded so nakedly for government intervention into an aspect of its business.

But here’s Smith writing, “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”

Given the current state of affairs in Washington, Smith may be asking too much. Which is why perhaps the most interesting — and admirable — call from Smith in his post is for technology companies to slow their roll.

“We recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology,” writes Smith. “Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. ‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”

Facebook under fresh political pressure as UK watchdog calls for “ethical pause” of ad ops

The UK’s privacy watchdog revealed yesterday that it intends to fine Facebook the maximum possible (£500k) under the country’s 1998 data protection regime for breaches related to the Cambridge Analytica data misuse scandal.

But that’s just the tip of the regulatory missiles now being directed at the platform and its ad-targeting methods — and indeed, at the wider big data economy’s corrosive undermining of individuals’ rights.

Alongside yesterday’s update on its investigation into the Facebook-Cambridge Analytica data scandal, the Information Commissioner’s Office (ICO) has published a policy report — entitled Democracy Disrupted? Personal information and political influence — in which it sets out a series of policy recommendations related to how personal information is used in modern political campaigns.

In the report it calls directly for an “ethical pause” around the use of microtargeting ad tools for political campaigning — to “allow the key players — government, parliament, regulators, political parties, online platforms and citizens — to reflect on their responsibilities in respect of the use of personal information in the era of big data before there is a greater expansion in the use of new technologies”.

The watchdog writes [emphasis ours]:

Rapid social and technological developments in the use of big data mean that there is limited knowledge of – or transparency around – the ‘behind the scenes’ data processing techniques (including algorithms, analysis, data matching and profiling) being used by organisations and businesses to micro-target individuals. What is clear is that these tools can have a significant impact on people’s privacy. It is important that there is greater and genuine transparency about the use of such techniques to ensure that people have control over their own data and that the law is upheld. When the purpose for using these techniques is related to the democratic process, the case for high standards of transparency is very strong.

Engagement with the electorate is vital to the democratic process; it is therefore understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes. The public have the right to expect that this takes place in accordance with the law as it relates to data protection and electronic marketing. Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default. This could have a damaging long-term effect on the fabric of our democracy and political life.

It also flags a number of specific concerns attached to Facebook’s platform and its impact upon people’s rights and democratic processes — some of which are sparking fresh regulatory investigations into the company’s business practices.

“A significant finding of the ICO investigation is the conclusion that Facebook has not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign,” it writes. “Whilst these concerns about Facebook’s advertising model exist generally in relation to its commercial use, they are heightened when these tools are used for political campaigning. Facebook’s use of relevant interest categories for targeted advertising and its Partner Categories Service are also cause for concern. Although the service has ceased in the EU, the ICO will be looking into both of these areas, and in the case of partner categories, commencing a new, broader investigation.”

The ICO says its discussions with Facebook for this report focused on “the level of transparency around how Facebook user data and third party data is being used to target users, and the controls available to users over the adverts they see”.

Among the concerns it raises about what it dubs Facebook’s “very complex” online targeted advertising model are [emphasis ours]:

Our investigation found significant fair-processing concerns both in terms of the information available to users about the sources of the data that are being used to determine what adverts they see and the nature of the profiling taking place. There were further concerns about the availability and transparency of the controls offered to users over what ads and messages they receive. The controls were difficult to find and were not intuitive to the user if they wanted to control the political advertising they received. Whilst users were informed that their data would be used for commercial advertising, it was not clear that political advertising would take place on the platform.

The ICO also found that despite a significant amount of privacy information and controls being made available, overall they did not effectively inform the users about the likely uses of their personal information. In particular, more explicit information should have been made available at the first layer of the privacy policy. The user tools available to block or remove ads were also complex and not clearly available to users from the core pages they would be accessing. The controls were also limited in relation to political advertising.

The company has been criticized for years for confusing and complex privacy controls. But during the investigation, the ICO says, Facebook also failed to provide “satisfactory information” for the regulator to understand the process it uses for determining what interest segments individuals are placed in for ad targeting purposes.

“Whilst Facebook confirmed that the content of users’ posts were not used to derive categories or target ads, it was difficult to understand how the different ‘signals’, as Facebook called them, built up to place individuals into categories,” it writes.
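To make that opacity concrete, here is a deliberately toy sketch of how discrete behavioural “signals” might roll up into interest categories. To be clear, this is not Facebook’s actual pipeline (the ICO says it could not get satisfactory information about that); every signal name, mapping and threshold below is invented for illustration:

    # Purely illustrative toy model of signal-to-category inference.
    # All signal names, mappings and thresholds are invented; nothing here
    # reflects Facebook's actual (undisclosed) pipeline.
    from collections import Counter

    # Hypothetical behavioural "signals": page likes, group joins, ad clicks.
    user_signals = [
        "liked:VegetarianRecipes",
        "joined:GreenEnergyForum",
        "clicked:SolarPanelAd",
        "liked:ClimateMarchEvent",
    ]

    # Invented mapping from individual signals to interest buckets.
    SIGNAL_TO_INTEREST = {
        "liked:VegetarianRecipes": "plant-based living",
        "joined:GreenEnergyForum": "renewable energy",
        "clicked:SolarPanelAd": "renewable energy",
        "liked:ClimateMarchEvent": "environmental activism",
    }

    def infer_interests(signals, min_signals=1):
        """Keep any interest bucket supported by enough (made-up) signals."""
        counts = Counter(
            SIGNAL_TO_INTEREST[s] for s in signals if s in SIGNAL_TO_INTEREST
        )
        return {interest for interest, n in counts.items() if n >= min_signals}

    print(infer_interests(user_signals))
    # e.g. {'plant-based living', 'renewable energy', 'environmental activism'}

Note that even this crude version reads no post content at all, yet the user still lands in a bucket (“environmental activism”) that starts to look like a political opinion. That is exactly the kind of inference the regulator wants surfaced.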

Similar complaints of foot-dragging responses to information requests related to political ads on its platform have been directed at Facebook by a parliamentary committee that’s running an inquiry into fake news and online disinformation — and in April the chair of the committee accused Facebook of “a pattern of evasive behavior”.

So the ICO is not alone in feeling that Facebook’s responses to requests for specific information have lacked the specific information being sought. (CEO Mark Zuckerberg also annoyed the European Parliament with highly evasive responses to their highly detailed questions this Spring.)

Meanwhile, a European media investigation in May found that Facebook’s platform allows advertisers to target individuals based on interests related to sensitive categories such as political beliefs, sexuality and religion — which are categories that are marked out as sensitive information under regional data protection law, suggesting such targeting is legally problematic.

The investigation found that Facebook’s platform enables this type of ad targeting in the EU by making sensitive inferences about users — inferred interests including communism, social democrats, Hinduism and Christianity. And its defense against charges that what it’s doing breaks regional law is that inferred interests are not personal data.

However the ICO report sends a very chill wind rattling towards that fig leaf, noting “there is a concern that by placing users into categories, Facebook have been processing sensitive personal information – and, in particular, data about political opinions”.

It further writes [emphasis ours]:

Facebook made clear to the ICO that it does ‘not target advertising to EU users on the basis of sensitive personal data’… The ICO accepts that indicating a person is interested in a topic is not the same as formally placing them within a special personal information category. However, a risk clearly exists that advertisers will use core audience categories in a way that does seek to target individuals based on sensitive personal information. In the context of this investigation, the ICO is particularly concerned that such categories can be used for political advertising.

The ICO believes that this is part of a broader issue about the processing of personal information by online platforms in the use of targeted advertising; this goes beyond political advertising. It is clear from academic research conducted by the University of Madrid on this topic that a significant privacy risk can arise. For example, advertisers were using these categories to target individuals with the assumption that they are, for example, homosexual. Therefore, the effect was that individuals were being singled out and targeted on the basis of their sexuality. This is deeply concerning, and it is the ICO’s intention as a concerned authority under the GDPR to work via the one-stop-shop system with the Irish Data Protection Commission to see if there is scope to undertake a wider examination of online platforms’ use of special categories of data in their targeted advertising models.

So, essentially, the regulator is saying it will work with other EU data protection authorities to push for a wider, structural investigation of online ad targeting platforms which put users into categories based on inferred interests — and certainly where those platforms are allowing targeting against special categories of data (such as data related to racial or ethnic origin, political opinions, religious beliefs, health data, sexuality).
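The regulator’s logical point is easy to see in code: targeting on an inferred “interest” can single people out by a protected attribute just as effectively as targeting on the attribute itself. Below is a minimal sketch; the audience-selection function and the data are entirely invented, and the category labels merely echo the kinds of inferred interests reported in the media investigation cited above:

    # Hypothetical audience selection; the API and the data are invented.
    users = [
        {"id": 1, "inferred_interests": {"gardening", "Christianity"}},
        {"id": 2, "inferred_interests": {"football", "social democrats"}},
        {"id": 3, "inferred_interests": {"gardening"}},
    ]

    def build_audience(users, interest):
        """Select every user whose inferred interests include `interest`."""
        return [u["id"] for u in users if interest in u["inferred_interests"]]

    # The platform's defence is that "interested in Christianity" is not a
    # formal record of religion -- but the selection below still singles
    # users out by (inferred) religion or political opinion.
    print(build_audience(users, "Christianity"))      # [1]
    print(build_audience(users, "social democrats"))  # [2]

Whatever label the platform puts on the field, the effect of the query is identical to filtering on special category data, which is precisely the ICO’s concern.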

Another concern the ICO raises that’s specifically attached to Facebook’s business is transparency around its so-called “partner categories” service — an option for advertisers that allows them to use third party data (i.e. personal data collected by third party data brokers) to create custom audiences on its platform.

In March, ahead of a major update to the EU’s data protection framework, Facebook announced it would be “winding down” this service over the next six months.

But the ICO is going to investigate it anyway.

“A preliminary investigation of the service has raised significant concerns about transparency of use of the [partner categories] service for political advertising and wider concerns about the legal basis for the service, including Facebook’s claim that it is acting only as a processor for the third-party data providers,” it writes. “Facebook announced in March 2018 that it will be winding down this service over a six-month period, and we understand that it has already ceased in the EU. The ICO has also commenced a broader investigation into the service under the DPA 1998 (which will be concluded at a later date) as we believe it is in the public interest to do so.”

In conclusion on Facebook the regulator asserts the company has not been “sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign”.

“Individuals can opt out of particular interests, and that is likely to reduce the number of ads they receive on political issues, but it will not completely block them,” it points out. “These concerns about transparency lie at the core of our investigation. Whilst these concerns about Facebook’s advertising model exist in general terms in relation to its use in the commercial sphere, the concerns are heightened when these tools are used for political campaigning.”

The regulator also looked at political campaign use of three other online ad platforms — Google, Twitter and Snapchat — although Facebook gets the lion’s share of its attention in the report given the platform has also attracted the lion’s share of UK political parties’ digital spending. (“Figures from the Electoral Commission show that the political parties spent £3.2 million on direct Facebook advertising during the 2017 general election,” it notes. “This was up from £1.3 million during the 2015 general election. By contrast, the political parties spent £1 million on Google advertising.”)

The ICO is recommending that all online platforms which provide advertising services to political parties and campaigns should include experts within the sales support team who can provide political parties and campaigns with “specific advice on transparency and accountability in relation to how data is used to target users”.

“Social media companies have a responsibility to act as information fiduciaries, as citizens increasingly live their lives online,” it further writes.

It also says it will work with the European Data Protection Board, and the relevant lead data protection authorities in the region, to ensure that online platforms comply with the EU’s new data protection framework (GDPR) — and specifically to ensure that users “understand how personal information is processed in the targeted advertising model, and that effective controls are available”.

“This includes greater transparency in relation to the privacy settings, and the design and prominence of privacy notices,” it warns.

Facebook’s use of dark pattern design and A/B-tested social engineering to obtain user consent for processing people’s data, while obfuscating its intentions for that data, has been a long-standing criticism of the company — but one which the ICO is here signaling is very much on the regulatory radar in the EU.

So expecting new laws — as well as lots more GDPR lawsuits — seems prudent.

The regulator is also pushing for all four online platforms to “urgently roll out planned transparency features in relation to political advertising to the UK” — in consultation with both relevant domestic oversight bodies (the ICO and the Electoral Commission).

In Facebook’s case, it has been developing policies around political ad transparency — amid a series of related data scandals in recent years, which have ramped up political pressure on the company. But self-regulation looks very unlikely to go far enough (or fast enough) to fix the real risks now being raised at the highest political levels.

“We opened this report by asking whether democracy has been disrupted by the use of data analytics and new technologies. Throughout this investigation, we have seen evidence that it is beginning to have a profound effect whereby information asymmetry between different groups of voters is beginning to emerge,” writes the ICO. “We are now at a crucial juncture where trust and confidence in the integrity of our democratic process risks being undermined if an ethical pause is not taken. The recommendations made in this report — if effectively implemented — will change the behaviour and compliance of all the actors in the political campaigning space.”

Another key policy recommendation the ICO is making is to urge the UK government to legislate “at the earliest opportunity” to introduce a statutory Code of Practice under the country’s new data protection law for the use of personal information in political campaigns.

The report also essentially calls out all the UK’s political parties for data protection failures — a universal problem that’s very evidently being supercharged by the rise of accessible and powerful online platforms which have enabled political parties to combine (and thus enrich) voter databases they are legally entitled to with all sorts of additional online intelligence that’s been harvested by the likes of Facebook and other major data brokers.

Hence the ICO’s concern about “developing a system of voter surveillance by default”. And why the commissioner is pushing for online platforms to “act as information fiduciaries”.

Or, in other words, without exercising great responsibility around people’s information, online ad platforms like Facebook risk becoming the enabling layer that breaks democracy and shatters civic society.

Particular concerns the ICO attaches to political parties’ activities include: The purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence; a lack of fair processing; and the use of third party data analytics companies with insufficient checks around consent. The regulator says it has several related investigations ongoing.

In March, the information commissioner, Elizabeth Denham, foreshadowed the conclusions in this report, telling a UK parliamentary committee she would be recommending a code of conduct for political use of personal data, and pushing for increased transparency around how and where people’s data is flowing — telling MPs: “We need information that is transparent, otherwise we will push people into little filter bubbles, where they have no idea about what other people are saying and what the other side of the campaign is saying. We want to make sure that social media is used well.”

The ICO says now that it will work closely with government to determine the scope of the Code. It also wants the government to conduct a review of regulatory gaps.

We’ve reached out to the Cabinet Office for a government response to the ICO’s recommendations. Update: A Cabinet Office spokesperson directed us to the Department for Digital, Culture, Media and Sport — and a DCMS spokesman told us the government will wait to review the full ICO report once it’s completed before setting out a formal response.

A Facebook spokesman declined to answer specific questions related to the report — instead sending us this short statement, attributed to its chief privacy officer, Erin Egan: “As we have said before, we should have done more to investigate claims about Cambridge Analytica and take action in 2015. We have been working closely with the ICO in their investigation of Cambridge Analytica, just as we have with authorities in the US and other countries. We’re reviewing the report and will respond to the ICO soon.”

Here’s the ICO’s summary of its ten policy recommendations:

1) The political parties must work with the ICO, the Cabinet Office and the Electoral Commission to identify and implement a cross-party solution to improve transparency around the use of commonly held data.

2) The ICO will work with the Electoral Commission, Cabinet Office and the political parties to launch a version of its successful Your Data Matters campaign before the next General Election. The aim will be to increase transparency and build trust and confidence amongst the electorate on how their personal data is being used during political campaigns.

3) Political parties need to apply due diligence when sourcing personal information from third party organisations, including data brokers, to ensure the appropriate consent has been sought from the individuals concerned and that individuals are effectively informed in line with transparency requirements under the GDPR. This should form part of the data protection impact assessments conducted by political parties.

4) The Government should legislate at the earliest opportunity to introduce a statutory Code of Practice under the DPA2018 for the use of personal information in political campaigns. The ICO will work closely with Government to determine the scope of the Code.

5) It should be a requirement that third party audits be carried out after referendum campaigns are concluded to ensure personal data held by the campaign is deleted, or if it has been shared, the appropriate consent has been obtained.

6) The Centre for Data Ethics and Innovation should work with the ICO and the Electoral Commission to conduct an ethical debate in the form of a citizen jury to understand further the impact of new and developing technologies and the use of data analytics in political campaigns.

7) All online platforms providing advertising services to political parties and campaigns should include experts within the sales support team who can provide political parties and campaigns with specific advice on transparency and accountability in relation to how data is used to target users.

8) The ICO will work with the European Data Protection Board (EDPB), and the relevant lead Data Protection Authorities, to ensure online platforms’ compliance with the GDPR – that users understand how personal information is processed in the targeted advertising model and that effective controls are available. This includes greater transparency in relation to the privacy settings and the design and prominence of privacy notices.

9) All of the platforms covered in this report should urgently roll out planned transparency features in relation to political advertising to the UK. This should include consultation and evaluation of these tools by the ICO and the Electoral Commission.

10) The Government should conduct a review of the regulatory gaps in relation to content and provenance and jurisdictional scope of political advertising online. This should include consideration of requirements for digital political advertising to be archived in an open data repository to enable scrutiny and analysis of the data.

Lawmakers Question Apple and Google on Personal Data Collection Policies

The House Energy and Commerce Committee this morning sent letters to Apple and Google parent company Alphabet to ask 16 multi-part questions about how the companies handle customer data, according to a press release.

The letter to Apple [PDF] cites recent media reports as the reason for the inquiry, referencing November news suggesting Android collects extensive user location data even when location services are disabled, along with reports that smartphones collect and store “non-triggered” audio data from user conversations near a smartphone in order to hear a trigger phrase such as “Ok Google” or “Hey Siri.”

While both of these reports were focused on Android, the House wants to know if Apple has similar practices, collecting location data when location services, WiFi, and Bluetooth are disabled or gathering “non-triggered” voice data from customers and sharing it with third-party sources.

A summary of some of the questions is below, with the complete list available in a PDF of the letter shared by the committee.

  • When an iPhone lacks a SIM card (or if WiFi, Bluetooth, or location services are disabled), is that phone programmed to collect and locally store information through a different data-collection capability, if available, regarding: nearby cellular towers, nearby WiFi hotspots, or nearby Bluetooth beacons? If yes, are iPhones without SIM cards (or with WiFi/Bluetooth/location services disabled) programmed to send this locally stored information to Apple?
  • If a consumer using an iPhone has disabled location services for multiple apps, but then reenables location services for one app, are iPhones programmed to reenable location services for all apps on that phone?
  • Do Apple’s iPhone devices have the capability to listen to consumers without a clear, unambiguous audio trigger? If yes, how is this data used by Apple? What access to this data does Apple give to third parties?
  • Do Apple’s iPhone devices collect audio recordings of users without consent?
  • Could Apple control or limit the data collected by third-party apps available on the App Store? Please provide a list of all data elements that can be collected by a third-party app downloaded on an iPhone device about a user.
  • Apple recently announced a partnership with RapidSOS for enhanced location services for 911 calls. What role will RapidSOS serve in the sharing and retention of this information?
  • What limits does Apple place on third-party developers’ ability to collect information from users or from users’ devices? Please describe in detail changes made in June 2017 from prior policies.

That last question references App Store Guidelines that Apple updated in June to restrict apps from collecting user data to build advertising profiles or contact databases. The new rules also prohibit apps from harvesting data from an iPhone user’s contacts to create contact databases.

The letter goes on to request Apple’s policies for data collection via the microphone, Bluetooth, WiFi, and cellular networking capabilities, along with Apple’s policies pertaining to third-party access and use of data collected by the microphone. It also asks whether Apple has suspended or banned companies for violating its App Store rules, requesting specific examples and whether users had been notified their data was misused when the developer was banned.

The House Energy and Commerce Committee asks Apple to make arrangements to provide a briefing on the topics listed in the letter, but it does not provide a timeline for when Apple needs to respond. Apple generally responds to these requests in a prompt manner, however.

Apple maintains stricter and more transparent privacy policies than companies like Google and Facebook, with a dedicated privacy website that explains its approach to privacy, outlines tools available to customers to protect their privacy, and details government data requests.

Privacy is at the forefront of many features Apple implements, and the company is careful to always outline the privacy protections that have been added when introducing new functionality. When introducing new Photos features in iOS 12 that allow for improved search and sharing suggestions, for example, Apple was quick to point out that these features are all on-device.

Apple executives have said several times that Apple customers are not the company’s product, and Apple CEO Tim Cook has maintained that privacy is a fundamental human right. From a recent interview:

To me, and we feel this very deeply, we think privacy is a fundamental human right. So that is the angle that we look at it. Privacy from an American point of view is one of these key civil liberties that define what it is to be American.

Cook has also said that people are not fully aware of how their data is being used and who has access to it, a problem that “needs to be addressed.”

“The ability of anyone to know what you’ve been browsing about for years, who your contacts are, who their contacts are, things you like and dislike and every intimate detail of your life – from my own point of view it shouldn’t exist.”

Apple is continually introducing new privacy tools and protections for customers. Both macOS Mojave and iOS 12 include security and privacy improvements designed to better protect users, with additional tracking protection in Safari on both operating systems and extended privacy protections in Mojave.

