Truecaller makes first acquisition to build out payment and financial services in India

Sweden’s Truecaller started out life as a service that screens calls and messages to weed out spammers. In recent times the company has switched its focus to India, its largest market based on users, adding services that include payments to make it more useful. Now Truecaller is putting even more weight behind its India push after it announced its first acquisition, mobile payment service Chillr.

The vision is to go deeper into mobile payments and associated services, turning Truecaller into a utility that goes beyond simply handling messages and calls. Payments is a particular focus, and a space that WhatsApp is preparing to enter in India.

Truecaller doesn’t have WhatsApp-like scale (few companies can match its 200 million active users in India), but it did recently disclose that it has 100 million daily active users worldwide, and India is its largest country with 150 million registered users.

Truecaller has raised over $90 million from investors to date, according to Crunchbase. TechCrunch reported in 2015 that it was in talks to raise $100 million at a valuation of around $1 billion, but a deal never happened. Truecaller has instead raised capital from Swedish investment firm Zenith. Chillr, which offers payment services across more than 50 banks, had raised $7.5 million from the likes of Blume Ventures and Sequoia Capital.

Truecaller isn’t disclosing how much it has paid for the deal, but it said that Chillr’s entire team of 45 people will move over and the Chillr service will be phased out. In addition, Chillr CEO Sony Joy will become vice president of Truecaller Pay, running the India-based payments business, which will inherit Chillr’s core features.

“We’ve acquired a company that is known for innovation and leading this space in terms of building a fantastic product,” Truecaller co-founder and CSO Nami Zarringhalam told TechCrunch in an interview.

Zarringhalam said the Truecaller team met with Chillr as part of an effort to reach out to partners to build out an ecosystem of third-party services, but quickly realized there was potential to come together.

“We realized we shared synergies in thought processes for caring for the customer and user experience,” he added, explaining that Joy and his Chillr team will “take over the vision of execution of Truecaller Pay.”

Truecaller added payments in India last year

Joy told TechCrunch that he envisages developing Truecaller Pay into one of India’s top three payment apps over the next two years.

Already, the service supports peer-to-peer payments following a partnership with ICICI Bank, but there are plans to layer on additional services from third parties. That could include integrations to provide services such as loans, financing, micro-insurance and more.

Joy pointed out that India’s banking push has seen many people in the country sign up for at least one account, so now the challenge is not necessarily getting banked but instead getting access to the right services. By gathering information through payments and other customer data, Truecaller could, with permission from users, share data with financial services companies to give users access to services that they wouldn’t otherwise be able to access.

“Most citizens have a bank account (in each household), now being underserved is more to do with access to other services,” he explained.

Joy added that Truecaller is aiming to layer in value added services over its SMS capabilities, digging into the fact that SMS remains a key communication and information channel in India. For example, helping users pay for items confirmed via SMS, or pay for an order which is tracked via SMS.

The development of the service in India has made it look from the outside that the company is splitting into two, a product localized for India and another for the rest of the world. However, Zarringhalam said that the company plans to replicate its approach — payments and more — in other markets.

“It could be based on acquisitions or partners, time will tell,” he said. “But our plan is to develop this for all markets where our market penetration is high and the market dynamics are right.”


It’s OK to leave Facebook

The slow-motion privacy train wreck that is Facebook has many users, perhaps you, thinking about leaving or at least changing the way you use the social network. Fortunately for everyone but Mark Zuckerberg, it’s not nearly as hard to leave as it once was. The main thing to remember is that social media is for you to use, and not vice versa.

Social media has now become such an ordinary part of modern life that, rather than have it define our interactions, we can choose how we engage with it. That’s great! It means that everyone is free to design their own experience, taking from it what they need instead of participating to an extent dictated by social norms or the progress of technology.

Here’s why now is a better time than ever to take control of your social media experience. I’m going to focus on Facebook, but much of this is applicable to Instagram, Twitter, LinkedIn, and other networks as well.

Stalled innovation means a stable product

The Facebooks of 2005, 2010, and 2015 were very different things and existed in very different environments. Among other things over that eventful ten-year period, mobile and fixed broadband exploded in capabilities and popularity; the modern world of web-native platforms matured and became secure and reliable; phones went from dumb to smart to, for many, their primary computer; and internet-based companies like Google, Facebook, and Amazon graduated from niche players to embrace and dominate the world at large.

It’s been a transformative period for lots of reasons and in lots of ways. And products and services that have been there the whole time have been transformed almost continuously. You’d probably be surprised at what they looked like and how limited they were not long ago. Many things we take for granted today online were invented and popularized just in the last decade.

But the last few years have seen drastically diminished returns. Where Facebook used to add features regularly that made you rely on it more and more, now it is desperately working to find ways to keep people online. Why is that?

Well, we just sort of reached the limit of what a platform like Facebook can or should do, that’s all! Nothing wrong with that.

It’s like improving a car — no matter how many features you add or engines you swap in, it’ll always be a car. Cars are useful things, and so is Facebook. But a car isn’t a truck, or a bike, or an apple, and Facebook isn’t (for example) a broadcast medium, a place for building strong connections, or a VR platform (as hard as they’re trying).

The things that Facebook does well and that we have all found so useful — sharing news and photos with friends, organizing events, getting and staying in contact with people — haven’t changed considerably in a long time. And as the novelty has worn off those things, we naturally engage in them less frequently and in ways that make more sense to us.

Facebook has become the platform it was intended to be all along, with its own strengths and weaknesses, and its failure to advance beyond that isn’t a bad thing. In fact, I think stability is a good thing. Once you know what something is and will be, you can make an informed choice about it.

The downsides have become obvious

Every technology has its naysayers, and social media was no exception — I was and to some extent remain one myself. But over the years of changes these platforms have gone through, some fears were shown to be unfounded or old-fashioned.

The idea that people would cease interacting in the “real world” and live in their devices has played out differently from how we expected, surely; trying to instruct the next generation on the proper way to communicate with each other has never worked out well for the olds. And if you told someone in 2007 that foreign election interference would be as much a worry for Facebook as oversharing and privacy problems, you might be met with incredulous looks.

Other downsides were for the most part unforeseen. The development of the bubble or echo chamber, for instance, would have been difficult to predict when our social media systems weren’t also our news-gathering systems. And the phenomenon of seeing only the highlights of others’ lives posted online, leading to self esteem issues in those who view them with envy, is an interesting but sad development.

Whether some risk inherent to social media was predicted or not, or proven or not, people now take such risks seriously. The ideas that one can spend too much time on social networks, or suffer deleterious effects from them, or feel real pain or turmoil because of interactions on them are accepted (though sadly not always without question).

Taking the downsides of something as seriously as the upsides is another indicator of the maturity of that thing, at least in terms of how society interacts with it. When the hype cycle winds down, realistic judgment takes its place and the full complexities of a relationship like the one between people and social media can be examined without interference.

Between the stability of social media’s capabilities and the realism with which those capabilities are now being considered, choice is no longer arbitrary or absolute. Your engagement is not being determined by them any more.

Social media has become a rich set of personal choices

Your experience may differ from mine here, but I feel that in those days of innovation among social networks your participation was more of a binary. You were either on or you were off.

The way they were advancing and changing defined how you engaged with them by adding and opting you into features, or changing layouts and algorithms. It was hard to really choose how to engage in any meaningful way when the sands were shifting under your feet (or rather, fingertips). Every few months brought new features and toys and apps, and you sort of had to be there, using them as prescribed, or risk being left behind. So people either kept up or voluntarily stayed off.

Now all that has changed. The ground rules are set, and have been for long enough that there is no risk that if you left for a few months and come back, things would be drastically different.

As social networks have become stable tools used by billions, any combination or style of engagement with them has become inherently valid.

Your choice to engage with Facebook or Instagram does not boil down to simply whether you are on it or not any more, and the acceptance of social media as a platform for expression and creation as well as socializing means that however you use it or present on it is natural and no longer (for the most part) subject to judgment.

That extends from choosing to make it an indispensable tool in your everyday life to quitting and not engaging at all. There’s no longer an expectation that the former is how a person must use social media, and there is no longer a stigma to the latter of disconnectedness or Luddism.

You and I are different people. We live in different places, read different books, enjoy different music. We drive different cars, prefer different restaurants, like different drinks. Why should we be the same in anything as complex as how we use and present ourselves on social media?

It’s analogous, again, to a car: you can own one and use it every day for a commute, or use it rarely, or not have one at all — who would judge you? It has nothing to do with what cars are or aren’t, and everything to do with what a person wants or needs in the circumstances of their own life.

For instance, I made the choice to remove Facebook from my phone over a year ago. I’m happier and less distracted, and engage with it deliberately, on my terms, rather than it reaching out and engaging me. But I have friends who maintain and derive great value from their loose network of scattered acquaintances, and enjoy the immediacy of knowing and interacting with them on the scale of minutes or seconds. And I have friends who have never been drawn to the platform in the first place, content to select from the myriad other ways to stay in touch.

These are all perfectly good ways to use Facebook! Yet only a few years ago the zeitgeist around social media and its exaggerated role in everyday life — resulting from novelty for the most part — meant that to engage only sporadically would be more difficult, and to disengage entirely would be to miss out on a great deal (or fear that enough that quitting became fraught with anxiety). People would be surprised that you weren’t on Facebook and wonder how you got by.

Try it and be delighted

Social networks are here to improve your life the same way that cars, keyboards, search engines, cameras, coffee makers, and everything else are: by giving you the power to do something. But those networks and the companies behind them were also exerting power over you and over society in general, the way (for example) cars and car makers exerted power over society in the ’50s and ’60s, favoring highways over public transportation.

Some people and some places, more than others, are still subject to the influence of car makers — ever try getting around L.A. without one? And the same goes for social media — ever try planning a birthday party without it? But the last few years have helped weaken that influence and allow us to make meaningful choices for ourselves.

The networks aren’t going anywhere, so you can leave and come back. Social media doesn’t control your presence.

It isn’t all or nothing, so you can engage at 100 percent, or zero, or anywhere in between. Social media doesn’t decide how you use it.

You won’t miss anything important, because you decide what is important to you. Social media doesn’t share your priorities.

Your friends won’t mind, because they know different people need different things. Social media doesn’t care about you.

Give it a shot. Pick up your phone right now and delete Facebook. Why not? The absolute worst that will happen is you download it again tomorrow and you’re back where you started. But it could also be, as it was for me and has been for many people I’ve known, like shrugging off a weight you didn’t even realize you were bearing. Try it.

To truly protect citizens, lawmakers need to restructure their regulatory oversight of big tech

If members of the European Parliament thought they could bring Mark Zuckerberg to heel with his recent appearance, they underestimated the enormous gulf between 21st century companies and their last-century regulators.

Zuckerberg himself reiterated that regulation is necessary, provided it is the “right regulation.”

But anyone who thinks that our existing regulatory tools can rein in our digital behemoths is engaging in magical thinking. Getting to “right regulation” will require us to think very differently.

The challenge goes far beyond Facebook and other social media: the use and abuse of data is going to be the defining feature of just about every company on the planet as we enter the age of machine learning and autonomous systems.

So far, Europe has taken a much more aggressive regulatory approach than anything the US has contemplated, either before or since Zuckerberg’s testimony.

The EU’s General Data Protection Regulation (GDPR) is now in force, extending data privacy rights to all European citizens regardless of whether their data is processed by companies within the EU or beyond.

But I’m not holding my breath that the GDPR will get us very far on the massive regulatory challenge we face. It is just more of the same when it comes to regulation in the modern economy: a lot of ambiguous costly-to-interpret words and procedures on paper that are outmatched by rapidly evolving digital global technologies.

Crucially, the GDPR still relies heavily on the outmoded technology of user choice and consent, the main result of which has seen almost everyone in Europe (and beyond) inundated with emails asking them to reconfirm permission to keep their data. But this is an illusion of choice, just as it is when we are ostensibly given the option to decide whether to agree to terms set by large corporations in standardized take-it-or-leave-it click-to-agree documents.  

There’s also the problem of actually tracking whether companies are complying. It is likely that the regulation of online activity requires yet more technology, such as blockchain and AI-powered monitoring systems, to track data usage and implement smart contract terms.

As the EU has already discovered with the right to be forgotten, however, governments lack the technological resources needed to enforce these rights. Search engines are required to serve as their own judge and jury in the first instance; Google at last count was handling some 500 delisting requests a day.

The fundamental challenge we face, here and throughout the modern economy, is not: “what should the rules for Facebook be?” but rather, “how can we can innovate new ways to regulate effectively in the global digital age?”

The answer is that we need to find ways to harness the same ingenuity and drive that built Facebook to build the regulatory systems of the digital age. One way to do this is with what I call “super-regulation” which involves developing a market for licensed private regulators that serve two masters: achieving regulatory targets set by governments but also facing the market incentive to compete for business by innovating more cost-effective ways to do that.  

Imagine, for example, if instead of drafting a detailed 261-page law like the EU did, a government instead settled on the principles of data protection, based on core values, such as privacy and user control.

Private entities, profit and non-profit, could apply to a government oversight agency for a license to provide data regulatory services to companies like Facebook, showing that their regulatory approach is effective in achieving these legislative principles.  

These private regulators might use technology, big-data analysis, and machine learning to do that. They might also figure out how to communicate simple options to people, in the same way that the developers of our smartphones figured that out. They might develop effective schemes to audit and test whether their systems are working — on pain of losing their license to regulate.

There could be many such regulators among which both consumers and Facebook could choose: some could even specialize in offering packages of data management attributes that would appeal to certain demographics – from the people who want to be invisible online, to those who want their every move documented on social media.

The key here is competition: for-profit and non-profit private regulators compete to attract money and brains to the problem of how to regulate complex systems like data creation and processing.

Zuckerberg thinks there’s some kind of “right” regulation possible for the digital world. I believe him; I just don’t think governments alone can invent it. Ideally, some next generation college kid would be staying up late trying to invent it in his or her dorm room.

The challenge we face is not how to get governments to write better laws; it’s how to get them to create the right conditions for the continued innovation necessary for new and effective regulatory systems.

Zuckerberg didn’t make any friends in Europe today

Speaking in front of EU lawmakers today, Facebook’s founder Mark Zuckerberg namechecked the GDPR’s core principles of “control, transparency and accountability,” claiming his company will deliver on all of that come Friday, when the new European Union data protection framework starts being applied, finally backed by penalties weighty enough to make enforcement meaningful.

However there was little transparency or accountability on show during the session, given the upfront questions format which saw Zuckerberg cherry-picking a few comfy themes to riff on after silently absorbing an hour of MEPs’ highly specific questions with barely a facial twitch in response.

The questions MEPs asked of Zuckerberg were wide ranging and often drilled deep into key pressure points around the ethics of Facebook’s business — ranging from how deep the app data misuse privacy scandal rabbithole goes; to whether the company is a monopoly that needs breaking up; to how users should be compensated for misuse of their data.

Is Facebook genuinely complying with the GDPR, he was asked several times (unsurprisingly, given the scepticism of data protection experts on that front). Why did it choose to shift ~1.5 billion users out of reach of the GDPR? Will it offer a version of its platform that lets people completely opt out of targeted advertising, as it has so far studiously avoided doing?

Why did it refuse a public meeting with the EU parliament? Why has it spent “millions” lobbying against EU privacy rules? Will the company commit to paying taxes in the markets where it operates? What’s it doing to prevent fake accounts? What’s it doing to prevent bullying? Does it regulate content or is it a neutral platform?

Zuckerberg made like a sponge and absorbed all this fine-grained flak. But when the time came for responses, the data flow was not reciprocal: self-serving talking points on self-selected “themes” were all he had come prepared to serve up.

Yet — and here the irony is very rich indeed — people’s personal data flows liberally into Facebook, via all sorts of tracking technologies and techniques.

And as the Cambridge Analytica data misuse scandal has now made amply clear, people’s personal information has also very liberally leaked out of Facebook — oftentimes without their knowledge or consent.

But when it comes to Facebook’s own operations, the company maintains a highly filtered, extremely partial ‘newsfeed’ on its business empire — keeping a tight grip on the details of what data it collects and why.

Only last month Zuckerberg sat in Congress avoiding giving straight answers to basic operational questions. So if any EU parliamentarians had been hoping for actual transparency and genuine accountability from today’s session they would have been sorely disappointed.

Yes, you can download the data you’ve willingly uploaded to Facebook. Just don’t expect Facebook to give you a download of all the information it’s gathered and inferred about you.

The EU parliament’s political group leaders seemed well tuned to the myriad concerns now flocking around Facebook’s business. And were quick to seize on Zuckerberg’s dumbshow as further evidence that Facebook needs to be ruled.

Thing is, in Europe regulation is not a dirty word. And GDPR’s extraterritorial reach and weighty public profile looks to be further whetting political appetites.

So if Facebook was hoping that the mere appearance of its CEO sitting in a chair in Brussels, going through the motions of listening before reading from his usual talking points, would be enough to placate EU lawmakers, that looks to be a major miscalculation.

“It was a disappointing appearance by Zuckerberg. By not answering the very detailed questions by the MEPs he didn’t use the chance to restore trust of European consumers but in contrary showed to the political leaders in the European Parliament that stronger regulation and oversight is needed,” Green MEP and GDPR rapporteur Jan Philipp Albrecht told us after the meeting.

Albrecht had pressed Zuckerberg about how Facebook shares data between Facebook and WhatsApp — an issue that has raised the ire of regional data protection agencies. And while DPAs forced the company to turn off some of these data flows, Facebook continues to share other data.

The MEP had also asked Zuckerberg to commit to no exchange of data between the two apps. Zuckerberg determinedly made no such commitment.

Claude Moraes, chair of the EU parliament’s civil liberties, justice and home affairs (Libe) committee, issued a slightly more diplomatic reaction statement after the meeting — yet also with a steely undertone.

“Trust in Facebook has suffered as a result of the data breach and it is clear that Mr. Zuckerberg and Facebook will have to make serious efforts to reverse the situation and to convince individuals that Facebook fully complies with European Data Protection law. General statements like ‘We take privacy of our customers very seriously’ are not sufficient, Facebook has to comply and demonstrate it, and for the time being this is far from being the case,” he said.

“The Cambridge Analytica scandal was already in breach of the current Data Protection Directive, and would also be contrary to the GDPR, which is soon to be implemented. I expect the EU Data Protection Authorities to take appropriate action to enforce the law.”

Damian Collins, chair of the UK parliament’s DCMS committee, which has thrice tried and failed to get Zuckerberg to appear before it, did not mince his words at all. Though he has little reason to: having been so thoroughly rebuffed by the Facebook founder, and having accused the company of a pattern of evasive behavior to its CTO’s face, there’s clearly not much left to hold out for now.

“What a missed opportunity for proper scrutiny on many crucial questions raised by the MEPs. Questions were blatantly dodged on shadow profiles, sharing data between WhatsApp and Facebook, the ability to opt out of political advertising and the true scale of data abuse on the platform,” said Collins in another reaction statement after the meeting. “Unfortunately the format of questioning allowed Mr Zuckerberg to cherry-pick his responses and not respond to each individual point.

“I echo the clear frustration of colleagues in the room who felt the discussion was shut down,” he added, ending with a fourth (doubtless equally forlorn) request for Zuckerberg to appear in front of the DCMS Committee to “provide Facebook users the answers they deserve”.

In the latter stages of today’s EU parliament session several MEPs — clearly very exasperated by the straitjacketed format — resorted to heckling Zuckerberg to press for answers he had not given them.

“Shadow profiles,” interjected one, seizing on a moment’s hesitation as Zuckerberg sifted his notes for the next talking point. “Compensation,” shouted another, earning a snort of laughter from the CEO and some more theatrical note flipping to buy himself time.

Then, appearing slightly flustered, Zuckerberg looked up at one of the hecklers and said he would engage with his question — about shadow profiles (though Zuckerberg dare not speak that name, of course, given he claims not to recognize it) — arguing Facebook needs to hold onto such data for security purposes.

Zuckerberg did not specify, as MEPs had asked him to, whether Facebook uses data about non-users for any purposes other than the security scenario he chose to flesh out (aka “keeping bad content out”, as he put it).

He also ignored a second follow-up pressing him on how non-users can “stop that data being transferred”.

“On the security side we think it’s important to keep it to protect people in our community,” Zuckerberg said curtly, before turning to his lawyer for a talking point prompt (couched as an ask if there are “any other themes we wanted to get through”).

His lawyer hissed to steer the conversation back to Cambridge Analytica — to Facebook’s well-trodden PR about how they’re “locking down the platform” to stop any future data heists — and the Zuckbot was immediately back in action regurgitating his now well-practiced crisis PR around the scandal.

What was very clearly demonstrated during today’s session was the Facebook founder’s preference for control — that’s to say control which he is exercising.

Hence the fixed format of the meeting, which had been negotiated prior to Facebook agreeing to meet with EU politicians, and which clearly favored the company by allowing no formal opportunity for follow ups from MEPs.

Zuckerberg also tried several times to wrap up the meeting — by insinuating and then announcing time was up. MEPs ignored these attempts, and Zuckerberg seemed most uncomfortable at not having his orders instantly carried out.

Instead he had to sit and watch a micro negotiation between the EU parliament’s president and the political groups over whether they would accept written answers to all their specific questions from Facebook — before he was publicly put on the spot by president Antonio Tajani to agree to provide the answers in writing.

Although, as Collins has already warned MEPs, Facebook has had plenty of practice at generating wordy but empty responses to politicians’ questions about its business processes — responses which evade the spirit and specifics of what’s being asked.

The self-control on show from Zuckerberg today is certainly not the kind of guardrail that European politicians increasingly believe social media needs. Self-regulation, observed several MEPs to Zuckerberg’s face, hasn’t worked out so well, has it?

The first MEP to lay out his questions warned Zuckerberg that apologizing is not enough. Another pointed out he’s been on a contrition tour for about 15 years now.

Facebook needs to make a “legal and moral commitment” to the EU’s fundamental values, he was told by Moraes. “Remember that you’re here in the European Union where we created GDPR so we ask you to make a legal and moral commitment, if you can, to uphold EU data protection law, to think about ePrivacy, to protect the privacy of European users and the many millions of European citizens and non-Facebook users as well,” said the Libe committee chair.

But self-regulation — or, the next best thing in Zuckerberg’s eyes: ‘Facebook-shaped regulation’ — was what he had come to advocate for, picking up on the MEPs’ regulation “theme” to respond with the same line he fed to Congress: “I don’t think the question here is whether or not there should be regulation. I think the question is what is the right regulation.”

“The Internet is becoming increasingly important in people’s lives. Some sort of regulation is important and inevitable. And the important thing is to get this right,” he continued. “To make sure that we have regulatory frameworks that help protect people, that are flexible so that they allow for innovation, that don’t inadvertently prevent new technologies like AI from being able to develop.”

He even brought up startups — claiming ‘bad regulation’ (I paraphrase) could present a barrier to the rise of future dormroom Zuckerbergs.

Of course he failed to mention how his own dominant platform is the attention-sapping, app-gobbling elephant in the room, crowding out the next generation of would-be entrepreneurs. But MEPs’ concerns about competition were clear.

Instead of making friends and influencing people in Brussels, Zuckerberg looks to have delivered less than if he’d stayed away — angering and alienating the very people whose job it will be to amend the EU legislation that’s coming down the pipe for his platform.

Ironically one of the few specific questions Zuckerberg chose to answer was a false claim by MEP Nigel Farage — who had wondered whether Facebook is still a “neutral political platform”, griping about drops in engagement for rightwing entities ever since Facebook’s algorithmic changes in January, before claiming, erroneously, that Facebook does not disclose the names of the third party fact checkers it uses to help it police fake news.

So — significantly, and as was also evident in the US Senate and Congress — Facebook was taking flak from both the left and right of the political spectrum, implying broad, cross-party support for regulating these algorithmic platforms.

Actually Facebook does disclose those fact-checking partnerships. But it’s pretty telling that Zuckerberg chose to expend some of his oh-so-slender speaking time to debunk something that really didn’t merit the breath.

Farage had also claimed, during his three minutes, that without “Facebook and other forms of social media there is no way that Brexit or Trump or the Italian elections could ever possibly have happened”. 

Funnily enough Zuckerberg didn’t make time to comment on that.

Facebook faces fresh criticism over ad targeting of sensitive interests

Is Facebook trampling over laws that regulate the processing of sensitive categories of personal data by failing to ask people for their explicit consent before it makes sensitive inferences about their sex life, religion or political beliefs? Or is the company merely treading uncomfortably and unethically close to the line of the law?

An investigation by the Guardian and the Danish Broadcasting Corporation has found that Facebook’s platform allows advertisers to target users based on interests related to political beliefs, sexuality and religion — all categories that are marked out as sensitive information under current European data protection law.

And indeed under the incoming GDPR, which will apply across the bloc from May 25.

The joint investigation found Facebook’s platform had made sensitive inferences about users — allowing advertisers to target people based on inferred interests including communism, social democrats, Hinduism and Christianity. All of which would be classed as sensitive personal data under EU rules.

And while the platform offers some constraints on how advertisers can target people against sensitive interests — not allowing advertisers to exclude users based on a specific sensitive interest, for example (Facebook having previously run into trouble in the US for enabling discrimination via ethnic affinity-based targeting) — such controls are beside the point if you take the view that Facebook is legally required to ask for a user’s explicit consent to processing this kind of sensitive data up front, before making any inferences about a person.

Indeed, it’s very unlikely that any ad platform can put people into buckets with sensitive labels like ‘interested in social democrat issues’ or ‘likes communist pages’ or ‘attends gay events’ without asking them to let it do so first.

And Facebook is not asking first.

Facebook argues otherwise, of course — claiming that the information it gathers about people’s affinities/interests, even when they entail sensitive categories of information such as sexuality and religion, is not personal data.

In a response statement to the media investigation, a Facebook spokesperson told us:

Like other Internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data. This means that someone could have an ad interest listed as ‘Gay Pride’ because they have liked a Pride associated Page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality. People are able to manage their Ad Preferences tool, which clearly explains how advertising works on Facebook and provides a way to tell us if you want to see ads based on specific interests or not. When interests are removed, we show people the list of removed interests so that they have a record they can access, but these interests are no longer used for ads. Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.

Expect Facebook’s argument to be tested in the courts — likely in the very near future.

As we’ve said before, the GDPR lawsuits are coming for the company, thanks to beefed up enforcement of EU privacy rules, with the regulation providing for fines as large as 4% of a company’s global turnover.

Facebook is not the only online people profiler, of course, but it’s a prime target for strategic litigation, both because of its massive size and reach (and the resulting power over web users flowing from a dominant position in an attention-dominating category) and because of its nose-thumbing attitude to compliance with EU regulations thus far.

The company has faced a number of challenges and sanctions under existing EU privacy law — though for its operations outside the US it typically refuses to recognize any legal jurisdiction except corporate-friendly Ireland, where its international HQ is based.

And, from what we’ve seen so far, Facebook’s response to GDPR ‘compliance’ is no new leaf. Rather it looks like privacy-hostile business as usual; a continued attempt to leverage its size and power to force a self-serving interpretation of the law — bending rules to fit its existing business processes, rather than reconfiguring those processes to comply with the law.

The GDPR is one of the reasons why Facebook’s ad microtargeting empire is facing greater scrutiny now, with just weeks to go before civil society organizations are able to take advantage of fresh opportunities for strategic litigation allowed by the regulation.

“I’m a big fan of the GDPR. I really believe that it gives us — as the court in Strasbourg would say — effective and practical remedies,” law professor Mireille Hildebrandt tells us. “If we go and do it, of course. So we need a lot of public litigation, a lot of court cases to make the GDPR work but… I think there are more people moving into this.

“The GDPR created a market for these sort of law firms — and I think that’s excellent.”

But it’s not the only reason. Another reason why Facebook’s handling of personal data is attracting attention is the result of tenacious press investigations into how one controversial political consultancy, Cambridge Analytica, was able to gain such freewheeling access to Facebook users’ data — as a result of Facebook’s lax platform policies around data access — for, in that instance, political ad targeting purposes.

All of which eventually blew up into a major global privacy storm, this March, though criticism of Facebook’s privacy-hostile platform policies dates back more than a decade at this stage.

The Cambridge Analytica scandal at least brought Facebook CEO and founder Mark Zuckerberg in front of US lawmakers, facing questions about the extent of the personal information his company gathers; what controls it offers users over their data; and how he thinks Internet companies should be regulated, to name a few. (Pro tip for politicians: You don’t need to ask companies how they’d like to be regulated.)

The Facebook founder has also finally agreed to meet EU lawmakers — though UK lawmakers’ calls have been ignored.

Zuckerberg should expect to be questioned very closely in Brussels about how his platform is impacting Europeans’ fundamental rights.

Sensitive personal data needs explicit consent

Facebook infers affinities linked to individual users by collecting and processing interest signals their web activity generates, such as likes on Facebook Pages or what people look at when they’re browsing outside Facebook — off-site intel it gathers via an extensive network of social plug-ins and tracking pixels embedded on third party websites. (According to information released by Facebook to the UK parliament this week, during just one week of April this year its Like button appeared on 8.4M websites; the Share button appeared on 931,000 websites; and its tracking Pixels were running on 2.2M websites.)

But here’s the thing: Both the current and the incoming EU legal frameworks for data protection set the bar for consent to processing so-called special category data equally high — at “explicit” consent.

What that means in practice is Facebook needs to seek and secure separate consents from users (such as via a dedicated pop-up) for collecting and processing this type of sensitive data.

The alternative is for it to rely on another special condition for processing this type of sensitive data. However the other conditions are pretty tightly drawn — relating to things like the public interest; or the vital interests of a data subject; or for purposes of “preventive or occupational medicine”.

None of which would appear to apply if, as Facebook is, you’re processing people’s sensitive personal information just to target them with ads.

Ahead of GDPR, Facebook has started asking users who have chosen to display political opinions and/or sexuality information on their profiles to explicitly consent to that data being public.

Though even there its actions are problematic, as it offers users a take-it-or-leave-it style ‘choice’ — saying they either remove the info entirely or leave it and therefore agree that Facebook can use it to target them with ads.

Yet EU law also requires that consent be freely given. It cannot be conditional on the provision of a service.

So Facebook’s bundling of service provisions and consent will also likely face legal challenges, as we’ve written before.

“They’ve tangled the use of their network for socialising with the profiling of users for advertising. Those are separate purposes. You can’t tangle them like they are doing in the GDPR,” says Michael Veale, a technology policy researcher at University College London, emphasizing that GDPR allows for a third option that Facebook isn’t offering users: Allowing them to keep sensitive data on their profile but that data not be used for targeted advertising.

“Facebook, I believe, is quite afraid of this third option,” he continues. “It goes back to the Congressional hearing: Zuckerberg said a lot that you can choose which of your friends every post can be shared with, through a little in-line button. But there’s no option there that says ‘do not share this with Facebook for the purposes of analysis’.”

Returning to how the company synthesizes sensitive personal affinities from Facebook users’ Likes and wider web browsing activity, Veale argues that EU law also does not recognize the kind of distinction Facebook is seeking to draw — i.e. between inferred affinities and personal data — and thus to try to redraw the law in its favor.

“Facebook say that the data is not correct, or self-declared, and therefore these provisions do not apply. Data does not have to be correct or accurate to be personal data under European law, and trigger the protections. Indeed, that’s why there is a ‘right to rectification’ — because incorrect data is not the exception but the norm,” he tells us.

“At the crux of Facebook’s challenge is that they are inferring what is arguably “special category” data (Article 9, GDPR) from non-special category data. In European law, this data includes race, sexuality, data about health, biometric data for the purposes of identification, and political opinions. One of the first things to note is that European law does not govern collection and use as distinct activities: Both are considered processing.

“The pan-European group of data protection regulators have recently confirmed in guidance that when you infer special category data, it is as if you collected it. For this to be lawful, you need a special reason, which for most companies is restricted to separate, explicit consent. This will be often different than the lawful basis for processing the personal data you used for inference, which might well be ‘legitimate interests’, which didn’t require consent. That’s ruled out if you’re processing one of these special categories.”

“The regulators even specifically give Facebook like inference as an example of inferring special category data, so there is little wiggle room here,” he adds, pointing to an example used by regulators of a study that combined Facebook Like data with “limited survey information” — and from which it was found that researchers could accurately predict a male user’s sexual orientation 88% of the time; a user’s ethnic origin 95% of the time; and whether a user was Christian or Muslim 82% of the time.

Which underlines why these rules exist — given the clear risk of breaches to human rights if big data platforms can just suck up sensitive personal data automatically, as a background process.

The overarching aim of GDPR is to give consumers greater control over their personal data not just to help people defend their rights but to foster greater trust in online services — and for that trust to be a mechanism for greasing the wheels of digital business. Which is pretty much the opposite approach to sucking up everything in the background and hoping your users don’t realize what you’re doing.

Veale also points out that under current EU law even an opinion on someone is their personal data… (per this Article 29 Working Party guidance, emphasis ours):

From the point of view of the nature of the information, the concept of personal data includes any sort of statements about a person. It covers “objective” information, such as the presence of a certain substance in one’s blood. It also includes “subjective” information, opinions or assessments. This latter sort of statements make up a considerable share of personal data processing in sectors such as banking, for the assessment of the reliability of borrowers (“Titius is a reliable borrower”), in insurance (“Titius is not expected to die soon”) or in employment (“Titius is a good worker and merits promotion”).

We put that specific point to Facebook — but at the time of writing we’re still waiting for a response. (Nor would Facebook provide a public response to several other questions we asked around what it’s doing here, preferring to limit its comment to the statement at the top of this post.)

Veale adds that the WP29 guidance has been upheld in recent CJEU cases such as Nowak — which he says emphasized that, for example, annotations on the side of an exam script are personal data.

He’s clear about what Facebook should be doing to comply with the law: “They should be asking for individuals’ explicit, separate consent for them to infer data including race, sexuality, health or political opinions. If people say no, they should be able to continue using Facebook as normal without these inferences being made on the back-end.”

“They need to tell individuals about what they are doing clearly and in plain language,” he adds. “Political opinions are just as protected here, and this is perhaps more interesting than race or sexuality.”

“They certainly should face legal challenges under the GDPR,” agrees Paul Bernal, senior lecturer in law at the University of East Anglia, who is also critical of how Facebook is processing sensitive personal information. “The affinity concept seems to be a pretty transparent attempt to avoid legal challenges, and one that ought to fail. The question is whether the regulators have the guts to make the point: It undermines a quite significant part of Facebook’s approach.”

“I think the reason they’re pushing this is that they think they’ll get away with it, partly because they think they’ve persuaded people that the problem is Cambridge Analytica, as rogues, rather than Facebook, as enablers and supporters. We need to be very clear about this: Cambridge Analytica are the symptom, Facebook is the disease,” he adds.

“I should also say, I think the distinction between ‘targeting’ being OK and ‘excluding’ not being OK is also mostly Facebook playing games, and trying to have their cake and eat it. It just invites gaming of the systems really.”

Facebook claims its core product is social media, rather than data-mining people to run a highly lucrative microtargeted advertising platform.

But if that’s true why then is it tangling its core social functions with its ad-targeting apparatus — and telling people they can’t have a social service unless they agree to interest-based advertising?

It could support a service with other types of advertising, which don’t depend on background surveillance that erodes users’ fundamental rights. But it’s choosing not to offer that. All you can ‘choose’ is all or nothing. Not much of a choice.

Facebook telling people that if they want to opt out of its ad targeting they must delete their account is neither a route to obtain meaningful (and therefore lawful) consent — nor a very compelling approach to counter criticism that its real business is farming people.

The issues at stake here for Facebook, and for the shadowy background data-mining and brokering of the online ad targeting industry as a whole, are clearly far greater than any one data misuse scandal or any one category of sensitive data. But Facebook’s decision to retain people’s sensitive personal data for ad targeting without asking for consent up-front is a telling sign of something gone very wrong indeed.

If Facebook doesn’t feel confident asking its users whether what it’s doing with their personal data is okay or not, maybe it shouldn’t be doing it in the first place.

At the very least it’s a failure of ethics. Even if the final judgement on Facebook’s self-serving interpretation of EU privacy rules will have to wait for the courts to decide.

There’s an access code for Visible in its online ad

Sometimes it pays to click on the ads.

Verizon’s poorly kept startup secret, Visible, the new app-based phone service that provides all-in voice, data and text for $40, does have an access code for the hoi polloi.

It’s in the online ads for the company’s service and it’s: 0abfa

You’re welcome.


Google is banning Irish abortion referendum ads ahead of vote

Google is suspending adverts related to a referendum in Ireland on whether or not to overturn a constitutional clause banning abortion. The vote is due to take place in a little over two weeks’ time.

“Following our update around election integrity efforts globally, we have decided to pause all ads related to the Irish referendum on the eighth amendment,” a Google spokesperson told us.

The spokesperson said enforcement of the policy — which will cover referendum adverts that appear alongside Google search results and on its video sharing platform YouTube — will begin in the next 24 hours, with the pause remaining in effect through the referendum on May 25.

The move follows an announcement by Facebook yesterday saying it had stopped accepting referendum related ads paid for by foreign entities. However Google is going further and pausing all ads targeting the vote.

Given the sensitivity of the issue, a blanket ban is likely the least controversial option for the company, as well as the simplest to implement — whereas Facebook has said it has been liaising with local groups for some time, and has created a dedicated channel where the groups can report ads that might be breaking its ban on foreign buyers, generating reports that Facebook will need to review and act on quickly.

Given how close the vote now is both tech giants have been accused of acting too late to prevent foreign interests from using their platforms to exploit a loophole in Irish law to get around a ban on foreign donations to political campaigns by pouring money into unregulated digital advertising instead.

Speaking to the Guardian, a technology spokesperson for Ireland’s opposition party Fianna Fáil, described Google’s decision to ban the adverts as “too late in the day”.

“Fake news has already had a corrosive impact on the referendum debate on social media,” James Lawless TD told it, adding that the referendum campaign had made it clear Ireland needs legislation to restrict the activities of Internet companies’ ad products “in the same way that steps were taken in the past to regulate political advertising on traditional forms of print and broadcast media”.

We’ve asked Google why it’s only taken the decision to suspend referendum ad buys now, and why it did not act months earlier — given the Irish government announced its intention to hold a 2018 referendum on repealing the Eighth Amendment in mid 2017 — and will update this post with any response.

In a public policy blog post earlier this month, the company’s policy SVP Kent Walker talked up the steps the company is taking to (as he put it) “support… election integrity through greater advertising transparency”, saying it’s rolling out new policies for U.S. election ads across its platforms, including requiring additional verification for election ad buyers, such as confirmation that an advertiser is a U.S. citizen or lawful permanent resident.

However this U.S.-first focus leaves other regions vulnerable to election fiddlers — hence Google deciding to suspend ad buys around the Irish vote, albeit tardily.

The company has also previously said it will implement a system of disclosures for ad buyers to make it clear to users who paid for the ad, and that it will be publishing a Transparency Report this summer breaking out election ad purchases. It also says it’s building a searchable library for election ads.

Although it’s not clear when any of these features will be rolled out across all regions where Google ads are served.

Facebook has also announced a raft of similar transparency steps related to political ads in recent years — responding to political pressure and scrutiny following revelations about the extent of Kremlin-backed online disinformation campaigns that had targeted the 2016 US presidential election.