Tech giants are busy updating their T&Cs ahead of the EU’s incoming data protection framework, GDPR. Which is why, for instance, Facebook-owned Instagram is suddenly offering a data download tool. You can thank European lawmakers for being able to take your data off that platform.
Facebook-owned WhatsApp is also making a pretty big change as a result of GDPR — noting in its FAQs that it’s raising the minimum age for users of the messaging platform to 16 across the “European Region”. This includes both EU and non-EU countries (such as Switzerland), as well as the in-the-process-of-Brexiting UK (which is set to leave the EU next year).
In the US, the minimum age for WhatsApp usage remains 13.
Where teens are concerned, GDPR introduces a new provision on children’s personal data — setting a 16-year-old age limit on kids being able to consent to their data being processed — although it allows individual countries some wiggle room to write a lower age limit into their laws, down to a floor of 13 years old.
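The provision boils down to a simple lookup with a default and a floor. Here’s a minimal sketch of that logic — note the country-to-age mapping is purely illustrative, not an authoritative list of what each member state has actually legislated:

```python
# Sketch of GDPR Article 8's age-of-consent rule (illustrative only).
GDPR_DEFAULT_AGE = 16   # applies unless a member state sets a lower age
GDPR_MINIMUM_AGE = 13   # member states may not go below this floor

NATIONAL_CONSENT_AGE = {  # hypothetical example values
    "DE": 16,
    "UK": 13,
    "ES": 14,
}

def consent_age(country_code: str) -> int:
    """Return the digital age of consent for a country, clamped to the floor."""
    age = NATIONAL_CONSENT_AGE.get(country_code, GDPR_DEFAULT_AGE)
    return max(age, GDPR_MINIMUM_AGE)

def can_consent(age: int, country_code: str) -> bool:
    """Can a child of this age consent to data processing in this country?"""
    return age >= consent_age(country_code)
```

A single EU-wide gate at 16 — the approach WhatsApp has taken — is equivalent to ignoring the per-country table and always using the default.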
WhatsApp isn’t bothering to try to vary the age gate depending on limits individual EU countries have set, though. Presumably to reduce the complexity of complying with the new rules.
But also likely because it’s confident WhatsApp-loving teens won’t have any trouble circumventing the new minimum age limit, and that there’s therefore no real risk to its business.
Certainly it’s unclear whether WhatsApp and its parent Facebook will do anything at all to enforce the age limit — beyond asking users to state they are at least 16 (and taking them at their word). So in practice, while on paper the 16-year-old minimum seems like a big deal, the change may do very little to protect teens from being data-mined by the ad giant.
We’ve asked WhatsApp whether it will cross-check users’ accounts with Facebook accounts and data holdings to try to verify a teen really is 16, for example, but nothing in its FAQ on the topic suggests it plans to carry out any active enforcement at all — instead it merely notes:
- Creating an account with false information is a violation of our Terms
- Registering an account on behalf of someone who is underage is also a violation of our Terms
That sounds very much like a buck being passed. And it will likely be up to parents to try to actively enforce the limit — by reporting their own underage WhatsApp-using kids to the company (which would then have to close the account). Clearly few parents would relish the prospect of doing that.
Yet Facebook already shares plenty of data between WhatsApp and its other companies for all sorts of self-serving, business-enhancing purposes — including, as it couches it, “to ensure safety and security”. So it’s hardly short of data to carry out some age checks of its own and proactively enforce the limit.
Curiously, Facebook’s approach to teen usage of WhatsApp is notably distinct from the one it’s taking with teens on its main social platform — also as it reworks the Facebook T&Cs ahead of GDPR.
Under the new terms, Facebook users between the ages of 13 and 15 will need to get parental permission to be targeted with ads or share sensitive info on Facebook.
But again, as my TC colleague Josh Constine pointed out, the parental consent system Facebook has concocted is laughably easy for teens to circumvent — merely requiring they select one of their Facebook friends or just enter an email address (which could literally be an alternative email address they themselves control). That entirely unverified entity is then asked to give ‘consent’ for their ‘child’ to share sensitive info. So, basically, a total joke.
As we’ve said before, Facebook’s approach to GDPR ‘compliance’ is at best described as ‘doing the minimum possible’. And data protection experts say legal challenges are inevitable.
Also in Europe, Facebook has previously been forced via regulatory intervention to give up one portion of the data sharing between its platforms — specifically for ad targeting purposes. However, its WhatsApp T&Cs suggest it is confident it will find a way around that in future, as it writes that it “will only do so when we reach an understanding with the Irish Data Protection Commissioner on a future mechanism to enable such use” — i.e. when, not if.
Last month it also signed an undertaking with the DPC on this issue, related to GDPR compliance — so, again, it appears to have some kind of regulatory-workaround ‘mechanism’ in the works.
Trying times in Menlo Park, it seems: Amid assaults from all quarters largely focused on privacy, Facebook is shifting some upper management around to better defend itself. Its head of policy in the U.S., Erin Egan, is returning to her chief privacy officer role, and a VP (and former FCC chairman) is taking her spot.
Kevin Martin, until very recently VP of mobile and global access policy, will be Facebook’s new head of policy. He was hired into that VP role in 2015; before that, he was at the FCC from 2001 to 2009, serving as chairman for the last four of those years. So whether you liked his policies or not, he clearly knows his way around a roll of red tape.
Erin Egan was chief privacy officer when Martin was hired, and at that time also took on the role of U.S. head of policy. “For the last couple years, Erin wore both hats at the company,” said Facebook spokesperson Andy Stone in a statement to TechCrunch.
“Kevin will become interim head of US Public Policy while Erin Egan focuses on her expanded duties as Chief Privacy Officer,” Stone said.
No doubt both roles have grown in importance and complexity over the last few years; one person performing both jobs doesn’t sound sustainable, and apparently it wasn’t.
Notably, Martin will now report to Joel Kaplan, with whom he worked previously during the Bush-Cheney campaign in 2000 and for years under the subsequent administration. Deep ties to Republican administrations and networks in Washington are probably more than a little valuable these days, especially to a company under fire from would-be regulators.
Earlier this month — and before Facebook CEO Mark Zuckerberg testified before Congress — the company announced a series of changes to how it would handle political advertisements running on its platform in the future. It had said that people who wanted to buy a political ad — including ads about political “issues” — would have to reveal their identities and location and be verified before the ads could run. Information about the advertiser would also display to Facebook users.
Today, Facebook is announcing the authorization process for U.S. political ads is live.
Facebook had first said in October that political advertisers would have to verify their identity and location for election-related ads. But in April, it expanded that requirement to include any “issue ads” — meaning those on political topics being debated across the country, not just those tied to an election.
Facebook said it would work with third parties to identify the issues. These ads would then be labeled as “Political Ads,” and display the “paid for by” information to end users.
According to today’s announcement, Facebook will now begin to verify the identity and the residential mailing address of advertisers who want to run political ads. Those advertisers will also have to disclose who’s paying for the ads as part of this authorization process.
This verification process is currently only open in the U.S. and will require Page admins and ad account admins to submit their government-issued ID to Facebook, along with their residential mailing address.
The government ID can be either a U.S. passport or a U.S. driver’s license, an FAQ explains. Facebook will also ask for the last four digits of admins’ Social Security numbers. The photo ID will then be approved or denied in a matter of minutes, though anyone declined over the quality of the uploaded images can simply try again.
The address, however, will be verified by mailing a letter with a unique access code that only the admin’s Facebook account can use. The letter may take up to 10 days to arrive, Facebook notes.
Along with the verification portion, Page admins will also have to fill in who paid for the ad in the “disclaimer” section. This must include the name of the organization(s) or person(s) who funded it.
This information will also be reviewed prior to approval, but Facebook isn’t going to fact check this field, it seems.
Instead, the company simply says: “We’ll review each disclaimer to make sure it adheres to our advertising policies. You can edit your disclaimers at any time, but after each edit, your disclaimer will need to be reviewed again, so it won’t be immediately available to use.”
The FAQ later states that disclaimers must comply with “any applicable law,” but again says that Facebook only reviews them against its ad policies.
“It’s your responsibility as the advertiser to independently assess and ensure that your ads are in compliance with all applicable election and advertising laws and regulations,” the documentation reads.
Along with the launch of the new authorization procedures, Facebook has released a Blueprint training course to guide advertisers through the steps required, and has published an FAQ to answer advertisers’ questions.
Of course, these procedures will only net the more scrupulous advertisers willing to play by the rules. That’s why Facebook had said before that it plans to use AI technology to help sniff out those advertisers who should have submitted to verification, but did not. The company is also asking people to report suspicious ads using the “Report Ad” button.
Facebook has been under heavy scrutiny because of how its platform was corrupted by Russian trolls on a mission to sway the 2016 election. The Justice Department charged 13 Russians and three companies with election interference earlier this year, and Facebook has removed hundreds of accounts associated with disinformation campaigns.
While tougher rules around ads may help, they alone won’t solve the problem.
It’s likely that those determined to skirt the rules will find their own workarounds. Plus, ads are only one of many issues in terms of those who want to use Facebook for propaganda and misinformation. On other fronts, Facebook is dealing with fake news — including everything from biased stories to those that are outright lies, intending to influence public opinion. And of course there’s the Cambridge Analytica scandal, which led to intense questioning of Facebook’s data privacy practices in the wake of revelations that millions of Facebook users had their information improperly accessed.
Facebook says the political ads authorization process is rolling out gradually, so it may not be available to all advertisers at this time. Currently, authorizations can only be set up and managed from a desktop computer, via the Authorizations tab in a Facebook Page’s Settings.
In an interesting twist, Facebook is being sued in the UK for defamation by consumer advice personality, Martin Lewis, who says his face and name have been repeatedly used on fake adverts distributed on the social media giant’s platform.
Lewis, who founded the popular MoneySavingExpert.com tips website, says Facebook has failed to stop the fake ads despite repeat complaints and action on his part, thereby — he contends — tarnishing his reputation and causing victims to be lured into costly scams.
“It is consistent, it is repeated. Other companies such as Outbrain who have run these adverts have taken them down. What is particularly pernicious about Facebook is that it says the onus is on me, so I have spent time and effort and stress repeatedly to have them taken down,” Lewis told The Guardian.
“It is facilitating scams on a constant basis in a morally repugnant way. If Mark Zuckerberg wants to be the champion of moral causes, then he needs to stop his company doing this.”
In a blog post Lewis also argues it should not be difficult for Facebook — “a leader in face and text recognition” — to prevent scammers from misappropriating his image.
“I don’t do adverts. I’ve told Facebook that. Any ad with my picture or name in is without my permission. I’ve asked it not to publish them, or at least to check their legitimacy with me before publishing. This shouldn’t be difficult,” he writes. “Yet it simply continues to repeatedly publish these adverts and then relies on me to report them, once the damage has been done.”
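The kind of check Lewis is describing — flagging submitted ad creative that resembles a known face — can be sketched with a simple perceptual “average hash” comparison. This is a toy illustration of the flagging logic only, with images represented as plain 2D grayscale grids; a real system would decode actual image files and use far more robust face recognition:

```python
# Toy perceptual-hash screen: flag candidate images resembling a known one.
# Images are 8x8 grids of grayscale values (0-255) for illustration.

def average_hash(pixels):
    """Hash a grayscale grid: 1 bit per pixel, above/below the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def looks_like(known, candidate, max_distance=10):
    """Flag a candidate image as resembling the known one."""
    return hamming(average_hash(known), average_hash(candidate)) <= max_distance

# A toy "known face" and a slightly brightened copy of it
known = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
similar = [[min(p + 5, 255) for p in row] for row in known]
```

Because the hash thresholds each pixel against the image’s own mean, uniform brightness shifts barely change it — which is what makes this family of techniques useful for catching lightly altered copies of the same image.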
“Enough is enough. I’ve been fighting for over a year to stop Facebook letting scammers use my name and face to rip off vulnerable people – yet it continues. I feel sick each time I hear of another victim being conned because of trust they wrongly thought they were placing in me. One lady had over £100,000 taken from her,” he adds.
Some of the fake ads appear to be related to cryptocurrency scams — linking through to fake news articles promising “revolutionary Bitcoin home-based opportunity”.
So the scammers look to be using the same playbook as the Macedonian teens who, in 2016, concocted fake news stories about US politics to generate a mint in ad clicks — also relying on Facebook’s platform to distribute their fakes and scale the scam.
In January Facebook revised its ads policy to specifically ban cryptocurrency, binary options and initial coin offerings. But as Lewis’ samples show, the scammers are circumventing this prohibition with ease — using Lewis’ image to drive unwitting clicks to a secondary offsite layer of fake news articles that directly push people towards crypto scams.
It would appear that Facebook does nothing to verify the sites to which ads on its platform direct its users, just as it does not appear to proactively police whether ad creative is legal — at least unless nudity is involved.
Here’s one sample fake ad that Lewis highlights:
And here’s the fake news article it links to — touting a “revolutionary” Bitcoin opportunity, in a news article style mocked up to look like the Daily Mirror newspaper…
The lawsuit is a personal action by Lewis, who is seeking exemplary damages in the High Court. He says he’s not looking to profit personally, and would donate any winnings to charities that aim to combat fraud. Rather, he’s taking the action in the hope that the publicity will spotlight the problem and force Facebook to stamp out fake ads.
In a statement, Mark Lewis of the law firm Seddons, which Lewis has engaged for the action, said: “Facebook is not above the law – it cannot hide outside the UK and think that it is untouchable. Exemplary damages are being sought. This means we will ask the court to ensure they are substantial enough that Facebook can’t simply see paying out damages as just the ‘cost of business’ and carry on regardless. It needs to be shown that the price of causing misery is very high.”
In a response statement to the suit, a Facebook spokesperson told us: “We do not allow adverts which are misleading or false on Facebook and have explained to Martin Lewis that he should report any adverts that infringe his rights and they will be removed. We are in direct contact with his team, offering to help and promptly investigating their requests, and only last week confirmed that several adverts and accounts that violated our Advertising Policies had been taken down.”
Facebook’s ad guidelines do indeed prohibit ads that contain “deceptive, false, or misleading content, including deceptive claims, offers, or business practices” — and, as noted above, they also specifically prohibit cryptocurrency-related ads.
But, as is increasingly evident where big tech platforms are concerned, meaningful enforcement of existing policies is what’s sorely lacking.
The social behemoth claims to have invested significant resources in its ad review program — which includes both automated and manual review of ads. Though it also relies on users reporting problem content, thereby shifting the burden of actively policing content its systems are algorithmically distributing and monetizing (at massive scale) onto individual users (who are, by the by, not being paid for all this content review labor… hmmm… ).
In Lewis’ case the burden is clearly also highly personal, given the fake ads are not just dodgy content but are directly misappropriating his image and name in an attempt to sell a scam.
“On a personal note, as well as the huge amount of time, stress and effort it takes to continually combat these scams, this whole episode has been extremely depressing – to see my reputation besmirched by such a big company, out of an unending greed to keep raking in its ad cash,” he also writes.
The sheer scale of Facebook’s platform — which now has more than 2BN active users globally — contrasts awkwardly with the far smaller number of people the company employs for content moderation tasks.
And unsurprisingly, given that huge discrepancy, Facebook has been facing increasing pressure over various types of problem content in recent years — from Kremlin propaganda to hate speech in Myanmar.
Last year it told US lawmakers it would be increasing the number of staff working on safety and security issues from 10,000 to 20,000 by the end of this year. Which is still a tiny drop in the ocean of content distributed daily on its platform. We’ve asked how many people work in Facebook’s ad review team specifically and will update this post with any response.
Given the sheer scale of content continuously generated by a 2BN+ user-base, combined with a platform structure that typically allows for instant uploads, a truly robust enforcement of Facebook’s own policies is going to require legislative intervention.
And, in the meanwhile, Facebook operating a policy that’s essentially unenforceable risks looking intentional — given how much profit the company continues to generate by being able to claim it’s just a platform, rather than be ruled like a publisher.
A shower of paper airplanes darted through the skies of Moscow and other Russian towns today, as users answered the call of entrepreneur Pavel Durov to send the blank missives out of their windows at a pre-appointed time in support of Telegram, the paper-airplane-branded messaging app he founded, which was blocked last week by Russian regulator Roskomnadzor (RKN). RKN believes the service is violating national laws by failing to provide it with encryption keys to access messages on the service (Telegram has refused to comply).
The paper plane send-off was a small, flashmob turn in a “Digital Resistance” — Durov’s preferred term — that has otherwise largely played out online: currently, nearly 18 million IP addresses are blocked in Russia, all in the name of blocking Telegram.
And in the latest development, Google has confirmed to us that its own services are now also being impacted. From what we understand, Google Search, Gmail and push notifications for Android apps are among the products affected.
“We are aware of reports that some users in Russia are unable to access some Google products, and are investigating those reports,” said a Google spokesperson in an emailed response. We’d been trying to contact Google all week about the Telegram blockade, and this is the first time that the company has both replied and acknowledged something related to it.
(Amazon has acknowledged our messages but has yet to reply to them.)
Google’s comments come on the heels of RKN itself announcing today that it had expanded its IP blocks to Google’s services. At its peak, RKN had blocked nearly 19 million IP addresses, with dozens of third-party services that use Google Cloud and Amazon’s AWS, such as Twitch and Spotify, getting caught in the crossfire.
Russia is among the countries that enforce a kind of digital firewall, periodically or permanently blocking certain online content. Some turn to VPNs to access that content anyway, but it turns out Telegram hasn’t needed to rely on that workaround to keep being used.
“RKN is embarrassingly bad at blocking Telegram, so most people keep using it without any intermediaries,” said Ilya Andreev, COO and co-founder of Vee Security, which has been providing a proxy service to bypass the ban. Currently, it is supporting up to 2 million users simultaneously, although this is a relatively small proportion considering Telegram has around 14 million users in the country (and, likely, more considering all the free publicity it’s been getting).
As we described earlier this week, the reason so many IP addresses are getting blocked is that Telegram has been using a technique that allows it to “hop” to a new IP address when the one it’s using is blocked by RKN. It’s a technique that a much smaller app, Zello, also used for nearly a year after RKN announced a ban on it.
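From the client’s side, the hopping technique amounts to walking a fallback list of endpoints and skipping past any that no longer answer. Here’s a minimal sketch of that pattern — the `connect_to` probe is a hypothetical stand-in for the app’s real transport, not anything from Telegram’s actual code:

```python
import socket

def connect_to(host: str, port: int, timeout: float = 3.0) -> bool:
    """Hypothetical reachability probe: True if a TCP connection succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def first_reachable(endpoints, probe=connect_to):
    """Walk the fallback list, 'hopping' past blocked endpoints.

    Returns the first (host, port) that answers, or None if every
    endpoint in the list is blocked.
    """
    for host, port in endpoints:
        if probe(host, port):
            return (host, port)
    return None
```

This is also why a regulator ends up blocking enormous IP ranges: as long as the service can keep publishing fresh endpoints faster than individual addresses are blacklisted, the only countermeasure is to block whole subnetworks at once.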
Zello ceased the practice earlier this year after RKN got wise to its ways and began blocking entire subnetworks of IP addresses to defeat the hopping, and Amazon’s AWS and Google Cloud kindly asked Zello to stop as other services started getting blocked. So when Telegram began the same kind of hopping, RKN knew just what to do to turn the screws. (It also took the heat off Zello, which miraculously got restored.)
So far, Telegram’s cloud partners have held strong and have not taken the same route, although getting its own services blocked could see Google’s resolve tested at a new level.
Some believe that one outcome could be the regulator playing out an elaborate game of chicken with Telegram and the rest of the internet companies that are in some way aiding and abetting it, spurred in part by Russia’s larger profile and how such blocks would appear to international audiences.
We’ll update this post and continue to write on further developments as we learn more.
Two photo-sharing services are teaming up, as SmugMug buys Flickr from Verizon’s digital media subsidiary Oath.
At the same time, SmugMug CEO Don MacAskill said he’s still figuring out his actual plans: “It sounds silly for the CEO to not to totally know what he’s going to do, but we haven’t built SmugMug on a master plan either. We try to listen to our customers and when enough of them ask for something that’s important to them or to the community, we go and build it.”
Flickr was founded in 2004 and sold to Yahoo a year later. Yahoo, in turn, was acquired by Verizon, which brought it together with AOL to create a new subsidiary called Oath.
Over the past couple of months, Oath (which owns TechCrunch) has been selling off some of its AOL and Yahoo properties, including Moviefone (sold to the company behind MoviePass, which Oath now has a stake in) and Polyvore (assets sold to Ssense).
In an FAQ about the deal, SmugMug says it will continue to operate Flickr as a separate site, with no merging of user accounts or photos: “Over time, we’ll be migrating Flickr onto SmugMug’s technology infrastructure, and your Flickr photos will move as a part of this migration — but the photos themselves will remain on Flickr.”
The company also uses the FAQ to describe its vision for the combined services:
SmugMug and Flickr represent the world’s most influential community of photographers, and there is strength in numbers. We want to provide photographers with both inspiration and the tools they need to tell their stories. We want to bring excitement and energy to inspire more photographers to share their perspective. And we want to be a welcome place for all photographers: hobbyist to archivist to professional.
The financial terms were not disclosed.