Few Facebook critics are as credible as Roger McNamee, the managing partner at Elevation Partners. An early investor in Facebook, McNamee was not only a mentor to Mark Zuckerberg but also introduced him to Sheryl Sandberg.
So it’s hard to overstate the significance of McNamee’s increasingly public criticism of Facebook over the last couple of years, particularly in light of the growing Cambridge Analytica storm.
According to McNamee, Facebook pioneered the building of a tech company on “human emotions”. Given that the social network knows all of our “emotional hot buttons”, McNamee believes, there is “something systemic” about the way that third parties can “destabilize” our democracies and economies. McNamee saw this in 2016 with both the Brexit referendum in the UK and the American Presidential election and concluded that Facebook does, indeed, give “asymmetric advantage” to negative messages.
McNamee still believes that Facebook can be fixed. But Zuckerberg and Sandberg, he insists, both have to be “honest” about what has happened and recognize the company’s “civic responsibility” to strengthen democracy. And the tech industry can do its part too, McNamee believes, by acknowledging and confronting what he calls its “dark side”.
Facebook and other platforms are still struggling to combat the spread of misleading or deceptive “news” items promoted on social networks.
Recent revelations about Cambridge Analytica and Facebook’s slow corporate response have drawn attention away from this ongoing, equally serious problem: spend enough time on Facebook, and you are still sure to see dubious, sponsored headlines scrolling across your screen, especially during major news days when influence networks from inside and outside the United States rally to amplify their reach. And Facebook’s earlier announced plan to combat this crisis through simple user surveys does not inspire confidence.
As is often the case, the underlying problem is more about economics than ideology. Sites like Facebook depend on advertising for their revenue, while media companies depend on ads on Facebook to drive eyes to their websites, which in turn earns them revenue. Within this dynamic, even reputable media outlets have an implicit incentive to prioritize flash over substance in order to drive clicks.
Less scrupulous publishers sometimes take the next step, creating pseudo news stories rife with half-truths or outright lies, tailor-made to emotionally target audiences already inclined to believe them. Indeed, many of the bogus US political items generated during the 2016 election didn’t emanate from Russian agents but from fly-by-night operations churning out spurious fodder that appealed to biases across the political spectrum. Compounding this problem are the high costs to Facebook as a corporation: it’s likely not feasible to hire teams of fact checkers large enough to review every deceptive news item advertised on its platform.
I believe there is a better, proven, cost-effective solution Facebook could implement: leverage the aggregate insights of its own users to root out false or deceptive news, and then remove the profit motive by charging publishers who try to promote it.
The first piece involves user-driven content review, a process that’s been successfully implemented by numerous Internet services. The dot-com era dating site Hot or Not, for instance, ran into a moderation problem when it debuted a dating service. Instead of hiring thousands of internal moderators, Hot or Not asked a series of select users whether an uploaded photo was inappropriate (pornography, spam, etc.).
Users worked in pairs to vote on photos until a consensus was reached. Photos flagged by a strong majority of users were removed, and users who made the right decision were awarded points. Only photos which garnered a mixed reaction would be reviewed by company employees, to make a final determination — typically, just a tiny percentage of the total.
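A minimal sketch of this kind of consensus review, in Python. The vote labels and the 80% “strong majority” cutoff are illustrative assumptions; Hot or Not’s actual thresholds aren’t public.

```python
from collections import Counter

def review_photo(votes, strong_majority=0.8):
    """Classify a photo from moderator votes.

    votes: list of "ok" / "inappropriate" strings, one per moderator.
    Returns "removed", "kept", or "escalate" (a mixed reaction goes
    to company employees for a final determination). The 0.8
    threshold is a hypothetical stand-in for "strong majority".
    """
    if not votes:
        return "escalate"
    flagged_share = Counter(votes)["inappropriate"] / len(votes)
    if flagged_share >= strong_majority:
        return "removed"          # strong majority flagged it
    if flagged_share <= 1 - strong_majority:
        return "kept"             # strong majority approved it
    return "escalate"             # mixed reaction: human review
```

Only the `"escalate"` cases would ever reach paid staff, which is how the scheme keeps the employee-reviewed share down to a tiny percentage of the total.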
Facebook is in an even better position to implement a system like this, since it has a truly massive user base that the company knows in granular detail. It could easily select a small subset of users (several hundred thousand) to conduct content reviews, chosen for their demographic and ideological diversity. Perhaps users could opt in to be moderators in exchange for rewards.
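One straightforward way to assemble such a panel is stratified sampling: draw equally from each demographic or ideological bucket so that no single group dominates the reviewer pool. The grouping labels and group sizes below are hypothetical, not Facebook’s actual taxonomy.

```python
import random
from collections import defaultdict

def pick_reviewers(users, k_per_group=100, seed=42):
    """Draw an equal-sized random sample from each bucket.

    users: list of (user_id, group_label) pairs, where group_label
    encodes whatever demographic/ideological buckets the platform
    defines (an assumption here). Returns a flat list of user ids.
    """
    rng = random.Random(seed)   # fixed seed: reproducible panel
    by_group = defaultdict(list)
    for user_id, group in users:
        by_group[group].append(user_id)
    panel = []
    for group in sorted(by_group):  # sorted for determinism
        members = by_group[group]
        panel.extend(rng.sample(members, min(k_per_group, len(members))))
    return panel
```

Equal per-group quotas are one defensible choice; a platform might instead sample proportionally and re-weight votes, but the point is the same: the panel’s composition is chosen, not accidental.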
Applied to the problem of Facebook ads which promote deceptive news, this review process would work something like this:
A news site pays to advertise an article or video on Facebook
Facebook holds this payment in escrow
Facebook publishes the ad to a select number of Facebook users who’ve volunteered to rate news items as Reliable or Unreliable
If a supermajority of these Facebook reviewers (60% or more) rate the news to be Reliable, the ad is automatically published, and Facebook takes the advertising money
If the news item is flagged as Unreliable by 60% or more reviewers, it’s sent to Facebook’s internal review board
If the review board determines the news to be Reliable, the ad for the article is published on Facebook
If the review board deems it to be Unreliable, the ad for the article is not published, and Facebook returns most of the ad payment to the media site, keeping 10-20% to reimburse the social network’s review process
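The steps above can be sketched as a single decision function. The vote labels, the 60% supermajority, and the 15% retained fee mirror the proposal; how to treat a split verdict that reaches neither supermajority is not specified in the proposal, so this sketch defaults it to publishing.

```python
def process_ad(reviewer_votes, board_says_reliable=None,
               supermajority=0.6, review_fee=0.15):
    """Sketch of the proposed escrow-and-review flow.

    reviewer_votes: list of "reliable" / "unreliable" votes from
    volunteer reviewers (the ad payment is assumed to be held in
    escrow before this runs). board_says_reliable: the internal
    review board's verdict, consulted only if reviewers flag the ad.
    Returns (decision, fraction_of_escrowed_payment_facebook_keeps).
    """
    flagged = sum(v == "unreliable" for v in reviewer_votes)
    if flagged / len(reviewer_votes) < supermajority:
        # no supermajority against it: publish, keep full payment
        return ("published", 1.0)
    # supermajority flagged it: escalate to the internal review board
    if board_says_reliable:
        return ("published", 1.0)
    # rejected: refund most of the payment, keep a fee for the review
    return ("rejected", review_fee)
```

Note the economics this encodes: rejection is not free for the publisher, so promoting dubious items carries a direct cost rather than just an opportunity cost.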
I’m confident a diverse array of users would consistently identify deceptive news items, saving Facebook countless hours in labor costs. And in the system I am describing, the company immunizes itself from accusations of political bias. “Sorry, Alex Jones,” Mark Zuckerberg can honestly say. “We didn’t reject your ad for promoting fake news — our users did.” Perhaps more important, not only would the social network save on labor costs, it would actually make money for removing fake news.
This strategy could also be adapted by other social media platforms, especially Twitter and YouTube. To make real headway against this epidemic, the leading Internet advertisers, chief among them Google, would also need to implement similar review processes. This filter system of consensus layers should also be applied to suspect content that’s voluntarily shared by individuals and groups, and the bot networks that amplify them.
To be sure, this would only put us somewhat ahead in the escalating arms race against forces still striving to erode our confidence in democratic institutions. Seemingly every week, a new headline reveals the challenge to be greater than we ever imagined. So my purpose in writing this is to confront the excuse Silicon Valley usually offers for not taking action: “But this won’t scale.” Because in this case, scale is precisely the power social networks have to best defend us.
Facebook announced late Friday that it had suspended the accounts of Strategic Communication Laboratories and its political data analytics firm Cambridge Analytica, which used Facebook data to target voters for President Donald Trump’s campaign in the 2016 election.
In a statement released by Paul Grewal, the company’s vice president and deputy general counsel, Facebook explained that the suspension was the result of a violation of its platform policies.
Cambridge Analytica apparently obtained Facebook user information without approval from the social network through work the company did with a University of Cambridge psychology professor named Dr. Aleksandr Kogan. Kogan developed an app called “thisisyourdigitallife” that purported to offer a personality prediction and billed itself as “a research app used by psychologists”.
Apparently around 270,000 people downloaded the app, giving Kogan access to their geographic information, content they had liked, and limited information about their friends.
That information was then passed on to Cambridge Analytica and Christopher Wylie of Eunoia Technologies.
Facebook said it first identified the violation in 2015 and took action — apparently without informing users of the violation. The company demanded that Kogan, Cambridge Analytica and Wylie certify that they had destroyed the information.
Over the past few days, Facebook said it received reports (from sources it would not identify) that not all of the data Cambridge Analytica, Kogan, and Wylie collected had been deleted. While Facebook investigates the matter further, the company said it had taken the step to suspend the Cambridge Analytica account.
In an interview, Cambridge Analytica’s chief executive Alexander Nix said that his company had compiled hundreds of thousands of detailed psychographic profiles of Americans throughout 2014 and 2015 (the period when the company was working with Sen. Ted Cruz on his campaign).
…We used psychographics all through the 2014 midterms. We used psychographics all through the Cruz and Carson primaries. But when we got to Trump’s campaign in June 2016, whenever it was, there was five and a half months till the elections. We just didn’t have the time to roll out that survey. I mean, Christ, we had to build all the IT, all the infrastructure. There was nothing. There were 30 people on his campaign. Thirty. Even Walker had 160 (it’s probably why he went bust). And he was the first to crash out. So as I’ve said to other of your [journalist] colleagues, clearly there’s psychographic data that’s baked in to legacy models that we built before, because we’re not reinventing the wheel. [We’ve been] using models that are based on models, that are based on models, and we’ve been building these models for nearly four years. And all of those models had psychographics in them. But did we go out and roll out a long-form quantitative psychographic survey specifically for Trump supporters? No. We just didn’t have time. We just couldn’t do that.
It’s likely that some of that psychographic data came from information culled by Kogan. The tools that Cambridge Analytica deployed have been at the heart of recent criticism of Facebook’s approach to handling advertising and promoted posts on the social media platform.
Nix, from Cambridge Analytica, acknowledged that advertising was ahead of most political messaging and that the tools used for creating campaigns could be effective in the political arena as well.
There’s no question that the marketing and advertising world is ahead of the political marketing and political communications world. And there are some things that I would definitely [say] I’m very proud of that we’re doing, which are innovative. And there are some things which are best-practice digital advertising, best-practice communications, which we’re taking from the commercial world and bringing into politics.
Advertising agencies are using some of these techniques on a national scale. For us it’s been very refreshing, really breaking into the commercial and brand space… walking into a campaign where you’re basically trying to educate the market on stuff they simply don’t understand. You walk into a sophisticated brand or into an advertising agency, and the conversation [is sophisticated]. You go straight down to: “Ah, so you’re doing a programmatic campaign, you can augment that with some linear optimized data”… they understand it. They know it’s their world, and now it comes down to the nuances. “So what exactly are you doing that’s going to be a bit more effective and give us an extra 3 percent or 4 percent there?” It’s a delight. You know, these are professionals who really get this world, and that’s where we want to be operating.