Monetizing computing resources on the blockchain

A while back, a blockchain startup approached me with their pitch: a decentralized social media application in which users can earn money by simply doing what they already do on other platforms, such as posting updates, photos and videos.

I would have been intrigued had they sent me the message a couple of years ago. But not so much after observing the space for several years.

Several blockchain applications profess to enable users to monetize various resources, whether it’s their unused storage and CPU power, or the tons of data they generate every day.

Regardless of whether they deliver on their promises, these projects highlight one of the problems that haunts the centralized internet: users are seldom rewarded for the great value they bring to platforms such as Facebook, Google and Amazon.

Blockchain applications suggest that decentralized alternatives to current services will give users the chance to collect their fair share of the revenue they generate with their participation in online ecosystems. It’s an enticing proposition since it doesn’t require users to do much more than what they’re already doing: send emails, browse websites, watch ads, keep the computer on…

But what exactly do you earn from monetizing your resources on the internet, and how accessible and reliable are your earnings? Here’s what you need to know.

What can you sell?

A handful of blockchain platforms enable you to rent out your unused storage, idle CPU cycles, and internet bandwidth to those who need them. The premise is simple: you list your resources along with your payment terms on the application and get paid in the application’s proprietary crypto-token when others use them. Purchases are arranged, performed and paid for peer-to-peer through smart contracts, bits of code that run on the blockchain without the need for a centralized application server.
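To make the mechanic concrete, here is a hypothetical sketch in Python of the escrow logic such a smart contract encodes. It is illustrative only, not any platform’s actual contract code; the names and rates are invented.

```python
# Hypothetical sketch of the escrow logic a resource-rental smart contract
# encodes: tokens are locked when a deal is struck and released to the
# provider on delivery. Illustrative Python, not any platform's actual
# contract code; names and rates are invented.

class RentalContract:
    def __init__(self, provider, buyer, price_per_gb_hour, gb_hours):
        self.provider = provider
        self.buyer = buyer
        self.price_per_gb_hour = price_per_gb_hour
        self.gb_hours = gb_hours
        self.escrow = price_per_gb_hour * gb_hours  # tokens locked up front
        self.settled = False

    def settle(self, delivered_gb_hours):
        """Pay the provider for what was actually delivered; the remainder
        of the escrow returns to the buyer."""
        if self.settled:
            raise RuntimeError("contract already settled")
        self.settled = True
        payout = self.price_per_gb_hour * min(delivered_gb_hours, self.gb_hours)
        return {"to_provider": payout, "refund_to_buyer": self.escrow - payout}


deal = RentalContract("alice", "bob", price_per_gb_hour=0.002, gb_hours=500)
print(deal.settle(450))  # {'to_provider': 0.9, 'refund_to_buyer': 0.1}
```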

Examples include Golem and iExec, two decentralized marketplaces for computing power. Users can earn the platforms’ proprietary cryptocurrencies, GNT and RLC tokens respectively, by renting their CPU cycles to developers and users who want to run applications on the network. Golem and iExec aim to replace centralized cloud providers such as Amazon and Google, in which the service provider sets the rates and rakes in all the profits.

Storj and Filecoin are two distributed storage networks where users can earn cryptotokens for sharing their free hard drive space with the network. Both platforms are designed to provide infrastructure for various applications such as web hosting and streaming services. Gladius, a decentralized content delivery network (CDN) and DDoS mitigation solution, enables users to monetize their internet bandwidth to serve content from websites and services running on the network.

These applications provide a good opportunity to turn the hours your computer sits idle in the home or office into a side income.

Other blockchain platforms enable you to monetize your data. An example is Datum, a decentralized marketplace for user data, where users earn DAT tokens by choosing to share their data with other organizations. Other players in the domain include Streamr, a real-time data-sharing platform geared toward the Internet of Things (IoT). With Streamr, users can earn DATAcoin tokens by sharing the data their connected devices generate with other devices that need it to carry out their functions and companies that use them for analytics and research.

Data is a huge market that is currently dominated by a few big players such as Google and Facebook. These companies hoard user data in their walled-garden silos and use them to make huge profits. Blockchain platforms give users the choice and power to claim their share of that market by giving them back the ownership of their data.

Matchpool is a decentralized social network that enables users to monetize their groups and online communities. Matchpool provides the decentralized equivalent of Facebook groups and provides tools for administrators to earn GUP tokens by setting fees on membership and access to content. And there’s Brave, the blockchain-based browser developed by the former CEO of Mozilla. Brave removes ads from websites and instead gives users the choice to earn Basic Attention Tokens (BAT) by opting to view ads.

How much do you earn?

It’s difficult to measure earnings on blockchain applications because most of them either haven’t launched yet or are in their early stages. Few of the companies I reached out to could provide stable numbers or average figures.

Also, the value of the resource you share on these platforms is often subject to supply-and-demand dynamics. For instance, iExec leaves it to the users to determine the price of their computational resources and doesn’t take any cut from their earnings. If there’s a large demand for decentralized CPU power, you’ll earn more from participating in the network.

Storj, the decentralized storage network, had the most accurate information to share. The platform provides a formula to calculate the monthly earnings of “farmers,” the users who share their free storage space with the network. Storj charges $0.015 per gigabyte of data stored and $0.05 per gigabyte downloaded, 60 percent of which goes to the farmers.

Several factors affect the final earnings, including whether the farmer nodes store primary or mirror copies of data, how long they participate in the network, and how well they perform in terms of up-time, bandwidth and response times. “If someone stored 1TB of data for the entire month, and that entire TB of data was downloaded once that month, they could potentially make $39,” said Philip Hutchins, CTO at Storj Labs. But the current average monthly payment for a Storj farmer node is around $2, according to the network data the company shared.
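Plugging the published rates into that formula reproduces the $39 figure. A minimal sketch (Storj’s real payout calculation also weighs the reputation factors described above):

```python
# Storj's published rates: $0.015 per GB stored per month, $0.05 per GB
# downloaded, with 60% of revenue going to farmers. A quick check of the
# $39 example quoted above.

STORED_RATE = 0.015   # USD per GB stored per month
DOWNLOAD_RATE = 0.05  # USD per GB downloaded
FARMER_SHARE = 0.60   # farmers' cut of revenue

def farmer_monthly_earnings(stored_gb, downloaded_gb):
    revenue = stored_gb * STORED_RATE + downloaded_gb * DOWNLOAD_RATE
    return revenue * FARMER_SHARE

# 1 TB stored all month, and that entire TB downloaded once:
print(farmer_monthly_earnings(1000, 1000))  # -> 39.0
```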

Storj has also launched partnerships with FileZilla, Microsoft and other companies to build decentralized apps on top of its network, which could increase demand for Storj space.

On Datum, the decentralized data market, users earn between $0.50 and $5 in DAT tokens for each promotional email they opt to open, according to Roger Haenni, the company’s CEO, though he did not share the details of how earnings are calculated. Currently the network supports monetizing email inboxes, but in the future the company plans to give users the option to get paid for sharing various categories of data, such as the location data their phones collect; the apps, services and websites they use; and the data their smart gadgets collect.

That last bit sounds invasive of user privacy. “This [data] is currently widely tracked by cookies from various ad networks,” explains Haenni. “However, the user is not asked to explicitly opt in to share this data, nor does he get paid when this data is monetized.” Datum will give users the chance to claim the money that’s already being made from their data.

The Datum network currently has 80,000 users, and since the launch of the Datum App in late December, users have collected 1.5 million DAT tokens, amounting to around $75,000.

Gladius, the decentralized CDN, doles out $0.03 in GLA tokens per gigabyte of data streamed through a node (though the company’s website states that this is an estimate based on favorable market conditions). An internet connection with a 30 Mbps upload speed shared with the network for eight hours a day could earn its owner around $49 per month.
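The arithmetic behind that estimate is worth spelling out: at $0.03 per gigabyte, a fully saturated 30 Mbps uplink used eight hours a day would serve about 3,240 GB a month and earn roughly $97, so the quoted $49 appears to assume the link is only about half utilized. A sketch of the calculation, with that utilization assumption made explicit:

```python
# Back-of-the-envelope check of Gladius's estimate. The ~50% utilization
# factor is our assumption; it is what makes the numbers match the quoted $49.

RATE_PER_GB = 0.03   # USD per GB served (Gladius's stated estimate)
UPLOAD_MBPS = 30     # upload speed in megabits per second
HOURS_PER_DAY = 8
DAYS_PER_MONTH = 30
UTILIZATION = 0.5    # assumed fraction of capacity actually used

gb_per_month = (UPLOAD_MBPS / 8          # megabytes per second
                * 3600 * HOURS_PER_DAY   # megabytes per day
                * DAYS_PER_MONTH         # megabytes per month
                / 1000)                  # gigabytes per month

print(round(gb_per_month * UTILIZATION * RATE_PER_GB, 2))  # -> 48.6
```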

What are the costs and risks?

In most cases, you’ve already paid for the resources you’ll be sharing on the blockchain, whether it’s your hard drive space, your CPU or your bandwidth (unless you’re on a metered connection, in which case sharing it would be unwise). However, you’ll have to factor in the electricity cost of keeping your computer on, which varies depending on where you live.

Social and data-sharing platforms won’t have any extra costs, but you’ll be responsible for keeping the balance between sharing your data and preserving your privacy.

One of the real risks of earning cryptotokens is constant price fluctuation. The value of what you earn today could double overnight, or drop by half just as fast. This means you’ll have to choose between holding your tokens and cashing out.

And there are always the risks of scams and failed projects that will absorb users’ funds and resources only to disappear and leave them out in the cold.

“Resource-sharing projects on top of the blockchain that allow users to control and profit from their own data will be the most profitable and successful projects in the future,” says Jared Tate, blockchain expert and the founder of DigiByte. However, Tate also notes that many of the current resource sharing platforms are PR projects that will never scale. 

“The majority of projects out there won’t be around in 5 years. Most of the projects don’t even have working software, just a white paper and some fancy graphics on a website,” Tate says. Some users evaluate projects by examining the market cap alone, which Tate believes is the absolute worst way to gauge a project’s long-term viability. “So many market caps are artificially inflated by developer pre-mines or deceptive coin counts,” he warns.

 

How do you deal with the liquidity problem?

Another challenge users will have to overcome is what to do with the tokens they earn from monetizing their resources. For instance, if you earn Storj tokens from renting out your free hard disk space, the only thing you can do with your earnings is, well, rent storage from other users, which doesn’t make sense since you had an excess of it to begin with.

Some platforms have multi-faceted economies that enable users to put their earned tokens to various uses. For instance, in Flixxo, a decentralized streaming service, users can earn FLIXX tokens by sharing their free disk space and bandwidth to host content on the network. They can then spend their earned tokens to consume videos published on the platform. But that is still a limited use case, and watching videos might not be what users want to do with their earnings.

Digital currencies and tokens have a liquidity problem. There are very few retailers and online services that accept Bitcoin as a method of payment, and even fewer that accept other cryptocurrencies. Users often must go through an online exchange that matches buyers and sellers of various digital and fiat currencies. The process is slow and complicated and involves fees at several levels.

An alternative is Bancor, a decentralized liquidity network built on top of the Ethereum blockchain. Supported by its own token, BNT, Bancor enables users to convert between tokens supported on its network without the need to find a buyer or seller. So, for instance, if you’ve earned an amount of RLC tokens from renting your idle CPU time on iExec, you can instantly trade it on Bancor for, say, MANA, the token that will let you purchase VR experiences on Decentraland. 
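Mechanically, Bancor prices conversions with a formula rather than an order book: each listed token holds a reserve of a connector token (such as BNT), and the contract computes how many tokens a deposit buys from the reserve balance and a fixed reserve ratio. Here is a sketch of the purchase-return formula from Bancor’s whitepaper, simplified, with fees and fixed-point arithmetic omitted:

```python
# Sketch of Bancor's purchase-return formula (from the Bancor whitepaper,
# simplified: fees and fixed-point details omitted). A token's price emerges
# from its reserve balance and a constant reserve ratio, so conversions never
# need a matched counterparty.

def purchase_return(supply, reserve_balance, reserve_ratio, deposit):
    """Tokens issued for depositing `deposit` of the reserve currency."""
    return supply * ((1 + deposit / reserve_balance) ** reserve_ratio - 1)

# Example (invented numbers): a token with 1,000,000 supply, 250,000 BNT
# in reserve and a 50% reserve ratio; deposit 100 BNT:
print(purchase_return(1_000_000, 250_000, 0.5, 100))  # -> ~199.98 tokens
```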

Bancor already lists several dozen tokens on its network and plans to add more in the future.

“The aim of this mathematic liquidity solution is to allow the long tail of tokens to emerge, by allowing any user generated currency to be viable on day one without needing to achieve massive trade volume in order to be listed and thus become liquid,” says Galia Benartzi, the co-founder of Bancor. “Great tokens will still rise, bad ones will fail, but all will have a chance to try.”

Facebook, Google face first GDPR complaints over “forced consent”

After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and long-time Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless a consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response.

The European privacy campaigner most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation for defending data protection rights.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of GDPR, major violations of the law can attract fines as large as 4% of a company’s global revenue which, in the case of Facebook or Google, implies they could be on the hook for more than a billion euros apiece — if they are deemed to have violated the law, as the complaints argue.
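For a sense of scale, here is a rough check using the companies’ reported 2017 revenues (approximate figures; the GDPR caps fines at 4% of global annual turnover):

```python
# Rough check of the "more than a billion euros" claim. Revenue figures
# are the companies' reported 2017 totals, in USD, and are approximate.

GDPR_MAX_FINE_RATE = 0.04  # 4% of global annual revenue

revenues_2017_usd = {
    "Facebook": 40.7e9,   # ~$40.7B reported for 2017
    "Alphabet": 110.9e9,  # ~$110.9B reported for 2017
}

for company, revenue in revenues_2017_usd.items():
    print(company, round(revenue * GDPR_MAX_FINE_RATE / 1e9, 1), "billion USD")
# Facebook -> ~1.6, Alphabet -> ~4.4: comfortably over a billion apiece
```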

That said, given how freshly fixed in place the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to make amends to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an on stage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community, a lot of philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less likely to be able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech is able to use to extract consent on account of the reach and power of their platforms — arguing there’s a competition concern that GDPR should also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”


Facebook is still falling short on privacy, says German minister

Germany’s justice minister has written to Facebook calling for the platform to implement an internal “control and sanction mechanism” to ensure third-party developers and other external providers are not able to misuse Facebook data — calling for it to both monitor third party compliance with its platform policies and apply “harsh penalties” for any violations.

The letter, which has been published in full in local media, follows the privacy storm that has engulfed the company since mid March, when fresh revelations were published by the Observer of London and the New York Times — detailing how Cambridge Analytica had obtained and used personal information on up to 87 million Facebook users for political ad targeting purposes.

Writing to Facebook’s founder and CEO Mark Zuckerberg, justice minister Katarina Barley welcomes some recent changes the company has made around user privacy, describing its decision to limit collaboration with “data dealers” as “a good start”, for example.

However she says the company needs to do more — setting out a series of what she describes as “core requirements” in the area of data and consumer protection (bulleted below). 

She also writes that the Cambridge Analytica scandal confirms long-standing criticisms against Facebook made by data and consumer advocates in Germany and Europe, adding that it suggests various lawsuits filed against the company’s data practices have “good cause”.

“Unfortunately, Facebook has not responded to this criticism in all these years, or only insufficiently,” she continues (translated via Google Translate). “Facebook has rather expanded its data collection and use. This is at the expense of the privacy and self-determination of its users and third parties.”

“What is needed is that Facebook lives up to its corporate responsibility and makes a serious change,” she says at the end of the letter. “In interviews and advertisements, you have stated that the new EU data protection regulations are the standard worldwide for the social network. Whether Facebook consistently implements this view, unfortunately, seems questionable,” she continues, critically flagging Facebook’s decision to switch the data controller status of ~1.5BN international users this month so they will no longer be under the jurisdiction of EU law, before adding: “I will therefore keep a close eye on the further measures taken by Facebook.”

Since revelations about Cambridge Analytica’s use of Facebook data snowballed into a global privacy scandal this spring, the company has revealed a series of changes which it claims are intended to bolster data protection on its platform.

Although, in truth, many of the tweaks Facebook has announced were likely in train already — as it has been working for months (if not years) on its response to the EU’s incoming GDPR framework, which will apply from May 25.

Yet, even so, many of these measures have been roundly criticized by privacy experts, who argue they do not go far enough to comply with GDPR and will trigger legal challenges once the framework is being applied.

For example, a new consent flow, announced by Facebook last month, has been accused of being intentionally manipulative — and of going against the spirit of the new rules, at very least.

Barley picks up on these criticisms in her letter — calling specifically for Facebook to deliver:

  • More transparency for users
  • Real control of users’ data processing by Facebook
  • Strict compliance with privacy by default and consent in the entire ecosystem of Facebook
  • Objective, neutral, non-discriminatory and manipulation-free algorithms
  • More freedom of choice for users through various settings and uses

On consent, she emphasizes that under GDPR the company will need to obtain consent for each data use — and cannot bundle up uses to try to obtain a ‘lump-sum’ consent, as she puts it.

Yet this is pretty clearly exactly what Facebook is doing when it asks Europeans to opt into its face recognition technology, for example: suggesting it could help protect users against strangers using their photos and be an aid to visually impaired users on its platform, while offering absolutely no specific examples in the consent flow of the commercial uses to which Facebook will undoubtedly put the tech.

The minister also emphasizes that GDPR demands a privacy-by-default approach, and requires data collection to be minimized — saying Facebook will need to adapt all of its data processing operations in order to comply. 

Any data transfers from “friends” should also only take place with explicit consent in individual cases, she continues (consent that was of course entirely lacking in 2014 when Facebook APIs allowed a developer on its platform to harvest data on up to 87 million users — and pass the information to Cambridge Analytica).

Barley also warns explicitly that Facebook must not create shadow profiles, an especially awkward legal issue for Facebook which US lawmakers also questioned Zuckerberg closely about last month.

Facebook’s announcement this week, at its f8 conference, of an incoming Clear History button — which will give users the ability to clear past browsing data the company has gathered about them — merely underscores the discrepancies here, with tracked Facebook non-users not even getting this after-the-fact control, although tracked users also can’t ask Facebook never to track them in the first place.

Nor is it clear what Facebook does with any derivatives it gleans from this tracked personal data — i.e. whether those insights are also dissociated from an individual’s account.

Sure, Facebook might delete a web log of the sites you visited — like a gambling site or a health clinic — when you hit the button but that does not mean it’s going to remove all the inferences it’s gleaned from that data (and added to the unseen profile it holds of you and uses for ad targeting purposes).

Safe to say, the value of the Clear History button looks mostly like PR for Facebook — so the company can point to it and claim it’s offering users another ‘control’, as a strategy to deflect lawmakers’ awkward questions (just such disingenuousness was on ample show in Congress last month — and has also been publicly condemned by the UK parliament).

We asked Facebook our own series of questions about how Clear History operates, and why — for example — it is not offering users the ability to block tracking entirely. After multiple emails on this topic, over two days, we’re still waiting for the company to answer anything we asked.

Facebook’s processing of non-users’ data, collected via tracking pixels and social plugins across other popular web services, has already got Facebook into hot water with some European regulators. Under GDPR it will certainly face fresh challenges to any consent-less handling of people’s data — unless it radically rethinks its approach, and does so in less than a month. 

In her letter, Barley also raises concerns around the misuse of Facebook’s platform for political influence and opinion manipulation — saying it must take “all necessary technical and organizational measures to prevent abuse and manipulation possibilities (e.g. via fake accounts and social bots)”, and ensure the algorithms it uses are “objective, neutral and non-discriminatory”.

She says she also wants the company to disclose the actions it takes on this front in order to enable “independent review”.

Facebook’s huge sprawl and size — with its business consisting of multiple popular linked platforms (such as WhatsApp and Instagram), as well as the company deploying its offsite tracking infrastructure across the Internet to massively expand the reach of its ecosystem — “puts a special strain on the privacy and self-determination of German and European users”, she adds.

At the time of writing Facebook had not responded to multiple requests for comment about the letter.

How Facebook gives an asymmetric advantage to negative messaging

Few Facebook critics are as credible as Roger McNamee, the managing partner at Elevation Partners. As an early investor in Facebook, McNamee was not only a mentor to Mark Zuckerberg but also introduced him to Sheryl Sandberg.

So it’s hard to overestimate the significance of McNamee’s increasingly public criticism of Facebook over the last couple of years, particularly in light of the growing Cambridge Analytica storm.

According to McNamee, Facebook pioneered the building of a tech company on “human emotions”. Given that the social network knows all of our “emotional hot buttons”, McNamee believes, there is “something systemic” about the way that third parties can “destabilize” our democracies and economies. McNamee saw this in 2016 with both the Brexit referendum in the UK and the American Presidential election and concluded that Facebook does, indeed, give “asymmetric advantage” to negative messages.

McNamee still believes that Facebook can be fixed. But Zuckerberg and Sandberg, he insists, both have to be “honest” about what’s happened and recognize their “civic responsibility” in strengthening democracy. And tech can do its part too, McNamee believes, in acknowledging and confronting what he calls its “dark side”.

McNamee is certainly doing this. He has now teamed up with ex-Google ethicist Tristan Harris to create the Center for Humane Technology — an alliance of Silicon Valley notables dedicated to “realigning technology with humanity’s best interests.”

How Facebook Can Better Fight Fake News: Make Money Off the People Who Promote It

Facebook and other platforms are still struggling to combat the spread of misleading or deceptive “news” items promoted on social networks.

Recent revelations about Cambridge Analytica and Facebook’s slow corporate response have drawn attention away from this ongoing, equally serious problem: spend enough time on Facebook, and you are still sure to see dubious, sponsored headlines scrolling across your screen, especially during major news days when influence networks from inside and outside the United States rally to amplify their reach. And Facebook’s earlier announced plan to combat this crisis through simple user surveys does not inspire confidence.

As is often the case, the underlying problem is more about economics than ideology. Sites like Facebook depend on advertising for their revenue, while media companies depend on ads on Facebook to drive eyes to their websites, which in turn earns them revenue. Within this dynamic, even reputable media outlets have an implicit incentive to prioritize flash over substance in order to drive clicks.

Less scrupulous publishers sometimes take the next step, creating pseudo news stories rife with half-truths or outright lies that are tailor-made to emotionally target audiences already inclined to believe them. Indeed, much of the bogus US political content generated during the 2016 election didn’t emanate from Russian agents, but from fly-by-night operations churning out spurious fodder appealing to biases across the political spectrum. Compounding this problem is the high cost to Facebook as a corporation: it’s likely not feasible to hire massive teams of fact checkers to review every deceptive news item that’s advertised on its platform.

I believe there is a better, proven, cost-effective solution Facebook could implement: leverage the aggregate insights of its own users to root out false or deceptive news, and then remove the profit motive by charging publishers who try to promote it.

The first piece involves user-driven content review, a process that’s been successfully implemented by numerous Internet services. The dot-com era site Hot or Not, for instance, ran into a moderation problem when it debuted a dating service. Instead of hiring thousands of internal moderators, Hot or Not asked a selection of users whether an uploaded photo was inappropriate (pornography, spam, etc.).

Users worked in pairs to vote on photos until a consensus was reached. Photos flagged by a strong majority of users were removed, and users who made the right decision were awarded points. Only photos which garnered a mixed reaction would be reviewed by company employees, to make a final determination — typically, just a tiny percentage of the total.

Facebook is in an even better position to implement a system like this, since it has a truly massive user base which the company knows about in granular detail. They can easily select a small subset of users (several hundred thousand) to conduct content reviews, chosen for their demographic and ideological diversity. Perhaps users could opt in to be moderators, in exchange for rewards.

Applied to the problem of Facebook ads which promote deceptive news, this review process would work something like this (a minimal code sketch follows the list):

  • A news site pays to advertise an article or video on Facebook

  • Facebook holds this payment in escrow

  • Facebook publishes the ad to a select number of Facebook users who’ve volunteered to rate news items as Reliable or Unreliable

  • If a supermajority of these Facebook reviewers (60% or more) rate the news to be Reliable, the ad is automatically published, and Facebook takes the advertising money

  • If the news item is flagged as Unreliable by 60% or more reviewers, it’s sent to Facebook’s internal review board

  • If the review board determines the news to be Reliable, the ad for the article is published on Facebook

  • If the review board deems it to be Unreliable, the ad for the article is not published, and Facebook returns most of the ad payment to the media site — keeping 10-20% to cover the cost of the social network’s review process
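To make the flow concrete, here is a minimal sketch of the escrow-and-voting logic described above. The 60 percent supermajority and the 10-20 percent fee come from the proposal; the names are invented, and (since the proposal only specifies the two supermajority outcomes) this sketch escalates anything short of a Reliable supermajority to the internal review board.

```python
# Minimal sketch of the proposed escrow-and-review flow. The 60%
# supermajority threshold and the 10-20% processing fee come from the
# proposal above; everything else is invented for illustration.

SUPERMAJORITY = 0.60
PROCESSING_FEE = 0.15  # within the proposed 10-20% band


def review_ad(payment, votes_reliable, board_approves=False):
    """votes_reliable: list of booleans, True = volunteer rated the ad Reliable.
    `payment` is held in escrow until an outcome is reached."""
    reliable_share = sum(votes_reliable) / len(votes_reliable)

    if reliable_share >= SUPERMAJORITY:
        return f"published automatically; platform keeps ${payment:.2f}"

    # Anything short of a Reliable supermajority goes to the internal board.
    if board_approves:
        return f"published after board review; platform keeps ${payment:.2f}"

    fee = payment * PROCESSING_FEE
    return f"rejected; ${payment - fee:.2f} refunded, ${fee:.2f} fee kept"


# An ad rated Unreliable by 70% of volunteers and rejected by the board:
print(review_ad(500.0, [True] * 30 + [False] * 70, board_approves=False))
# -> rejected; $425.00 refunded, $75.00 fee kept
```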


I’m confident a diverse array of users would consistently identify deceptive news items, saving Facebook countless hours in labor costs. And in the system I am describing, the company immunizes itself from accusations of political bias. “Sorry, Alex Jones,” Mark Zuckerberg can honestly say, “We didn’t reject your ad for promoting fake news — our users did.” Perhaps more important, the social network will not only save on labor costs but actually make money for removing fake news.

This strategy could also be adapted by other social media platforms, especially Twitter and YouTube. To make real headway against this epidemic, the leading Internet advertisers, chief among them Google, would also need to implement similar review processes. This filter system of consensus layers should also be applied to suspect content that’s voluntarily shared by individuals and groups, and the bot networks that amplify them.

To be sure, this would only put us somewhat ahead in the escalating arms race against forces still striving to erode our confidence in democratic institutions. Seemingly every week, a new headline reveals the challenge to be greater than what we ever imagined. So my purpose in writing this is to confront the excuse Silicon Valley usually offers for not taking action: “But this won’t scale.” Because in this case, scale is precisely the power social networks have to defend us.

Facebook suspends Cambridge Analytica, the data analysis firm that worked for the Trump campaign

Facebook announced late Friday that it had suspended the account of Strategic Communication Laboratories and its political data analytics firm Cambridge Analytica — which used Facebook data to target voters for President Donald Trump’s campaign in the 2016 election.

In a statement released by Paul Grewal, the company’s vice president and deputy general counsel, Facebook explained that the suspension was the result of a violation of its platform policies.

Cambridge Analytica apparently obtained Facebook user information without approval from the social network through work the company did with a University of Cambridge psychology professor named Dr. Aleksandr Kogan. Kogan developed an app called “thisisyourdigitallife” that purported to offer a personality prediction, billing itself as “a research app used by psychologists”.

Apparently around 270,000 people downloaded the app, giving Kogan access to their geographic information, content they had liked, and limited information about their friends.

That information was then passed on to Cambridge Analytica and Christopher Wylie of Eunoia Technologies.

Facebook said it first identified the violation in 2015 and took action — apparently without informing users of the violation. The company demanded that Kogan, Cambridge Analytica and Wylie certify that they had destroyed the information.

Over the past few days, Facebook said it received reports (from sources it would not identify) that not all of the data Cambridge Analytica, Kogan, and Wylie collected had been deleted. While Facebook investigates the matter further, the company said it had taken the step to suspend the Cambridge Analytica account.

The UK-based Cambridge Analytica played a pivotal role in the U.S. presidential election, according to its own chief executive’s admission in an interview with TechCrunch late last year.

In the interview, Cambridge Analytica’s chief executive Alexander Nix said that his company had built hundreds of thousands of detailed psychographic profiles of Americans throughout 2014 and 2015 (the time when the company was working with Sen. Ted Cruz on his campaign).

…We used psychographics all through the 2014 midterms. We used psychographics all through the Cruz and Carson primaries. But when we got to Trump’s campaign in June 2016, whenever it was, there it was there was five and a half months till the elections. We just didn’t have the time to rollout that survey. I mean, Christ, we had to build all the IT, all the infrastructure. There was nothing. There was 30 people on his campaign. Thirty. Even Walker it had 160 (it’s probably why he went bust). And he was the first to crash out. So as I’ve said to other of your [journalist] colleagues, clearly there’s psychographic data that’s baked-in to legacy models that we built before, because we’re not reinventing the wheel. [We’ve been] using models that are based on models, that are based on models, and we’ve been building these models for nearly four years. And all of those models had psychographics in them. But did we go out and rollout a long form quantitive psychographics survey specifically for Trump supporters? No. We just didn’t have time. We just couldn’t do that.

It’s likely that some of that psychographic data came from information culled by Kogan. The tools that Cambridge Analytica deployed have been at the heart of recent criticism of Facebook’s approach to handling advertising and promoted posts on the social media platform.

Nix, from Cambridge Analytica, acknowledged that advertising was ahead of most political messaging and that the tools used for creating campaigns could be effective in the political arena as well.

There’s no question that the marketing and advertising world is ahead of the political marketing the political communications world. And there are some things that I would definitely [say] I’m very proud of that we’re doing which are innovative. And there are some things which is best practice digital advertising, best practice communications which we’re taking from the commercial world and are bringing into politics.

Advertising agencies are using some of these techniques on a national scale. For us it’s been very refreshing, really breaking into the commercial and brand space… walking into a campaign where you’re basically trying to educate the market on stuff they simply don’t understand. You walk into a sophisticated brand or into an advertising agency, and the conversation [is sophisticated] You go straight down to: “Ah, so you’re doing a programmatic campaign, you can augment that with some linear optimized data… they understand it.” They know it’s their world, and now it comes down to the nuances. “So what exactly are you doing that’s going to be a bit more effective and give us an extra 3 percent or 4 percent there.” It’s a delight. You know these are professionals who really get this world and that’s where we want to be operating.

 

 

Actress Maisie Williams to launch Daisie, a social app for talent discovery and collaboration

Actress Maisie Williams, best known for her role as Arya Stark on Game of Thrones, is the latest celeb to venture into tech entrepreneurship, with the launch of a new company aimed at connecting creatives, called Daisie. Available later this summer as a mobile app, Daisie will offer a platform where creators can network, like, share and collaborate on projects within a social networking…

Wattpad moves into video with personal storytelling app, Raccoon

Wattpad, the social publishing platform behind apps for sharing original stories and chat fiction, is today venturing into video with the launch of a new app called Raccoon. Unlike its predecessors, Raccoon will focus on non-fiction video-based storytelling, with the goal of connecting people who want to create and watch stories that either entertain or inspire. While Raccoon still fits…