3D printed guns are now legal… What’s next?

On Tuesday, July 10, the DOJ announced a landmark settlement with Austin-based Defense Distributed, a controversial startup led by a young, charismatic anarchist whom Wired once named one of the 15 most dangerous people in the world.

Hyper-loquacious and media-savvy, Cody Wilson is fond of telling any reporter who’ll listen that Defense Distributed’s main product, a gun fabricator called the Ghost Gunner, represents the endgame for gun control, not just in the US but everywhere in the world. With nothing but the Ghost Gunner, an internet connection, and some raw materials, anyone, anywhere can make an unmarked, untraceable gun in their home or garage. Even if Wilson is wrong that the gun control wars are effectively over (and I believe he is), Tuesday’s settlement has fundamentally changed them.

At about the time the settlement announcement was going out over the wires, I was pulling into the parking lot of LMT Defense in Milan, IL.

LMT Defense, formerly known as Lewis Machine & Tool, is as much the opposite of Defense Distributed as its quiet, publicity-shy founder, Karl Lewis, is the opposite of Cody Wilson. But LMT Defense’s story can be usefully placed alongside that of Defense Distributed, because together they can reveal much about the past, present, and future of the tools and technologies that we humans use for the age-old practice of making war.

The legacy machine

Karl Lewis got started in gunmaking back in the 1970s at Springfield Armory in Geneseo, IL, just a few exits up I-80 from the current LMT Defense headquarters. Lewis, who has a high school education but now knows as much about the engineering behind firearms manufacturing as almost anyone alive, was working on the Springfield Armory shop floor when he hit upon a better way to make a critical and failure-prone part of the AR-15, the bolt. He first took his idea to Springfield Armory management, but they took a pass, so he rented a small corner of a local auto repair shop in Milan, bought some equipment, and began making the bolts himself.

Lewis worked in his rented space on nights and weekends, bringing the newly fabricated bolts home for heat treatment in his kitchen oven. Not long after he made his first batch, he landed a small contract with the US military to supply some of the bolts for the M4 carbine. On the back of this initial success with M4 bolts, Lewis Machine & Tool expanded its offerings to include complete guns. Over the course of the next three decades, LMT grew into one of the top makers of AR-15-pattern rifles for the world’s militaries, and it’s now in a very small club of gunmakers, alongside a few old-world arms powerhouses like Germany’s Heckler & Koch and Belgium’s FN Herstal, that supplies rifles to US SOCOM’s most elite units.

The offices of LMT Defense, in Milan, Ill. (Image courtesy Jon Stokes)

LMT’s gun business is built on high-profile relationships, hard-to-win government contracts, and deep, almost monk-like know-how. The company lives or dies by the skill of its machinists and by the stuff of process engineering — tolerances and measurements and paper trails. Political connections are also key: the largest weapons contracts require congressional approval and months of waiting for political winds to blow in this or that direction, for countries to fall in and out of favor with each other, and for paperwork delayed by a political spat over some unrelated point of trade or security to finally be put through so that funds can be transferred and production can begin.

Selling these guns is as old-school a process as making them is. Success in LMT’s world isn’t about media buys and PR hits, but about dinners in foreign capitals, range sessions with the world’s top special forces units, booths at trade shows most of us have never heard of, and secret delegations of high-ranking officials to a machine shop in a small town surrounded by corn fields on the western border of Illinois.

The civilian gun market, with all of its politics- and event-driven gyrations of supply and demand, is woven into this stable core of the global military small arms market the way vines weave through a trellis. Innovations in gunmaking flow in both directions, though nowadays they more often flow from the civilian market into the military and law enforcement markets than vice versa. For the most part, civilians buy guns that come off the same production lines that feed the government and law enforcement markets.

All of this is how small arms get made and sold in the present world, and anyone who lived through the heyday of IBM and Oracle, before the PC, the cloud, and the smartphone tore through and upended everything, will recognize every detail of the above picture, down to the clean-cut guys in polos with the company logo and fat purchase orders bearing signatures and stamps and big numbers.

The author with LMT Defense hardware.

Guns, drugs, and a million Karl Lewises

This is the part of the story where I build on the IBM PC analogy I hinted at above, and tell you that Defense Distributed’s Ghost Gunner, along with its inevitable clones and successors, will kill dinosaurs like LMT Defense the way the PC and the cloud laid waste to the mainframe and minicomputer businesses of yesteryear.

Except this isn’t what will happen.

Defense Distributed isn’t going to destroy gun control, and it’s certainly not going to decimate the gun industry. All of the legacy gun industry apparatus described above will still be there in the decades to come, mainly because governments will still buy their arms from established makers like LMT. But surrounding the government and civilian arms markets will be a brand new, homebrew, underground gun market where enthusiasts swap files on the dark web and test new firearms in their back yards.

The homebrew gun revolution won’t create a million untraceable guns so much as it’ll create hundreds of thousands of Karl Lewises — solitary geniuses who have a good idea, prototype it, begin making and selling it in small batches, and end up supplying a global arms market with new technology and products.

In this respect, the future of guns looks a lot like the present of drugs. The dark web hasn’t hurt Big Pharma, much less destroyed it. Rather, it has expanded the reach of hobbyist drugmakers and small labs, and enabled a shadow world of pharmaceutical R&D that feeds transnational black and gray markets for everything from penis enlargement pills to synthetic opioids.

Gun control efforts in this new reality will initially focus more on ammunition. Background checks for ammo purchases will move to more states, as policy makers try to limit civilian access to weapons in a world where controlling the guns themselves is impossible.

Ammunition has long been the crack in the rampart that Wilson is building. Bullets and casings are easy to fabricate and will always be easy to obtain or manufacture in bulk, but powder and primers are another story. Gunpowder and primers are the explosive chemical components of modern ammo, and they are difficult and dangerous to make at home. So gun controllers will seize on this and attempt to pivot to “bullet control” in the near-term.

Ammunition control is unlikely to work, mainly because rounds of ammunition are fungible, and there are untold billions of rounds already in civilian hands.

In addition to controls on ammunition, some governments will also try to force the manufacturers of 3D printers and desktop milling machines (the Ghost Gunner is the latter) to make their machines refuse to process files for gun parts.

This will be impossible to enforce, for two reasons. First, it will be hard for these machines to reliably tell what’s a gun-related file and what isn’t, especially if distributors of these files keep changing them to defeat any sort of detection. But the bigger problem will be that open-source firmware will quickly become available for the most popular printing and milling machines, so that determined users can “jailbreak” them and use them however they like. This already happens with products like routers and even cars, so it will definitely happen with home fabrication machines should the need arise.

Ammo control and fabrication device restrictions having failed, governments will over the longer term employ a two-pronged approach that consists of possession permits and digital censorship.

Photo courtesy of Getty Images: Jeremy Saltzer / EyeEm

First, governments will look to gun control schemes that treat guns like controlled substances (i.e. drugs and alcohol). The focus will shift to vetting and permits for simple possession, much like the gun owner licensing scheme I outlined in Politico. We’ll give up on trying to trace guns and ammunition, and focus more on authorizing people to possess guns, and on catching and prosecuting unauthorized possession. You’ll get the firearm equivalent of a marijuana card from the state, and then it won’t matter if you bought your gun from an authorized dealer or made it yourself at home.

The second component of future gun control regimes will be online suppression, of the type that’s already taking place on most major tech platforms across the developed world. I don’t think DefCad.com is long for the open web, and it will ultimately have as hard a time staying online as extremist sites like stormfront.org.

Gun CAD files will join child porn and pirated movies on the list of content it’s nearly impossible to find on big tech platforms like Facebook, Twitter, Reddit, and YouTube. If you want to trade these files, you’ll find yourself on sites with really intrusive advertising, where you worry a lot about viruses. Or, you’ll end up on the dark web, where you may end up paying for a hot new gun design with a cryptocurrency. This may be an ancap dream, but won’t be mainstream or user-friendly in any respect.

As for what comes after that, this is the same question as the question of what comes next for politically disfavored speech online. The gun control wars have now become a subset of the online free speech wars, so whatever happens with online speech in places like the US, UK, or China will happen with guns.

NHS Unveils Mobile App to Let Patients Book GP Visits Online

The British government has announced plans to launch a new NHS mobile app that will let patients in England make appointments with their doctor.

The app will also allow users to order repeat prescriptions, manage their long-term healthcare, see their medical records, and quickly access 111 for urgent queries.



In addition, users will have access to patient preferences related to data sharing, organ donation, and end-of-life care.

Health Secretary Jeremy Hunt described the app as a “birthday present from the NHS to the British people”, 70 years after the service was founded.

The NHS app is a world-first which will put patients firmly in the driving seat and revolutionize the way we access health services.

I want this innovation to mark the death-knell of the 8am scramble for GP appointments that infuriates so many patients.

Technology has transformed everyday life when it comes to banking, travel and shopping. Health matters much more to all of us, and the prize of that same digital revolution in healthcare isn’t just convenience but lives improved, extended and saved.

As the NHS turns 70 and we draw up a long-term plan for the NHS on the back of our £394 million a week funding boost, it’s time to catch up and unleash the power of technology to transform everyday life for patients.

“The new app will put the NHS into the pocket of everyone in England but it is just one step on the journey,” said Matthew Swindells, NHS England National Director of Operations and Information. “We are also developing an NHS Apps Library and putting free NHS Wi-Fi in GP surgeries and hospitals.”



Developed by NHS Digital and NHS England, the app will enter its testing phase in September and then roll out officially in December. It will be available on iOS devices through the App Store, as well as on Android phones via Google Play.


F-Secure to buy MWR InfoSecurity for ~$106M+ to offer better threat hunting

The ongoing shift of emphasis in the cyber security industry from defensive, reactive actions towards pro-active detection and response has fueled veteran Finnish security company F-Secure’s acquisition of MWR InfoSecurity, announced today.

F-Secure is paying £80 million (€91.6M) in cash to purchase all outstanding shares in MWR InfoSecurity, funding the transaction with its own cash reserves and a five-year bank loan. In addition, the terms include an earn-out of up to £25M (€28.6M) in cash, payable 18 months after completion subject to the achievement of agreed business targets for the period from 1 July, 2018, until 31 December, 2019.
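A quick sanity check on the figures quoted above, using the exchange rate implied by the article’s own pound-to-euro conversions (the headline’s dollar figure presumably reflects the upfront £80M at the mid-2018 rate of roughly $1.32 to the pound, with the “+” covering the earn-out):

```python
# Deal figures as stated in the article.
cash_gbp = 80e6       # upfront cash for all outstanding shares
earnout_gbp = 25e6    # maximum earn-out, payable 18 months after completion

# EUR/GBP rate implied by the article's own conversion: £80M = €91.6M
eur_per_gbp = 91.6e6 / cash_gbp          # 1.145

# The quoted earn-out conversion should agree at the same rate
earnout_eur = earnout_gbp * eur_per_gbp  # ≈ €28.6M, matching the article

# Maximum total consideration if all business targets are hit
max_total_gbp = cash_gbp + earnout_gbp   # £105M
```

At roughly $1.32 per pound, the £80M cash portion alone comes to about $106M, which is consistent with the “~$106M+” in the headline.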

F-Secure says the acquisition will enable it to offer its customers access to the more offensive skillsets needed to combat targeted attacks — specialist capabilities that most companies are not likely to have in-house.

It points to endpoint detection and response (EDR) solutions and managed detection and response (MDR) services as among the fastest-growing segments of the security market. And it says the acquisition makes it the largest single European source of cyber security services and detection and response solutions, positioning it to cater to both mid-market companies and large enterprises globally.

“The acquisition brings MWR InfoSecurity’s industry-renowned technologies to F-Secure making our detection and response offering unrivaled,” said F-Secure CEO Samu Konttinen in a statement. “Their threat hunting platform (Countercept) is one of the most advanced in the market and is an excellent complement to our existing technologies.”

As well as having experts in-house skilled in offensive techniques, MWR InfoSecurity — a UK company that was founded in 2002 — is well known for its technical expertise and research.

And F-Secure says it expects learnings from major incident investigations and targeted attack simulations to provide insights that can be fed directly back into product creation, as well as be used to upgrade its offerings to reflect the latest security threats.

MWR InfoSecurity also has a suite of managed phishing protection services (phishd), which F-Secure says will further enhance its offering.

The acquisition is expected to close in early July, and will add around 400 employees to F-Secure’s headcount. MWR InfoSecurity’s main offices are located in the UK, the US, South Africa and Singapore.

“I’m thrilled to welcome MWR InfoSecurity’s employees to F-Secure. With their vast experience and hundreds of experts performing cyber security services on four continents, we will have unparalleled visibility into real-life cyber attacks 24/7,” added Konttinen. “This enables us to detect indicators across an incredible breadth of attacks so we can protect our customers effectively. As most companies currently lack these capabilities, this represents a significant opportunity to accelerate F-Secure’s growth.”

“We’ve always relied on research-driven innovations executed by the best people and technology. This approach has earned MWR InfoSecurity the trust of some of the largest organizations in the world,” added MWR InfoSecurity CEO, Ian Shaw, who will be joining F-Secure’s leadership team after the transaction closes. “We see this approach thriving at F-Secure, and we look forward to working together so that we can break new ground in the cyber security industry.”

The companies will be holding a webcast to provide more detail on the news for investors and analysts later today, at 13:30 EEST.

Celebrity funds from Jay Z, Will Smith and Robert Downey Jr. are backing a life insurance startup

Ethos, the company that bills itself as making life insurance accessible, affordable and simple, has officially come out of stealth with an $11.5 million investment led by one of the world’s top venture firms, Sequoia Capital, and additional participation from the family offices of Hollywood’s biggest stars and an NBA all-star.

Jay Z’s Roc Nation and the family funds of Kevin Durant, Robert Downey Jr. and Will Smith all participated in the new round for Ethos, and Sequoia Partner Roelof Botha is taking a seat on the company’s board. Because nothing says star power like a life insurance startup.

The life insurance market is one that’s been attracting interest from venture investors for a little over a year now. Companies like England’s Anorak, HealthIQ, Ladder, Mira Financial, France’s Alan, which is backed by Partech Investments (among others), Fabric, and Quilt, are all pitching life insurance products as well.

Ethos is licensed in 49 states, which is pretty comparable to the offering from providers like Haven Life, the Mass Mutual-backed life insurance product.

What has made the life insurance market interesting for investors is the fact that consumers’ interest in it continues to decline. Whether it’s because no one trusts insurers to actually pay out, or because Americans are putting their faith in the anti-aging technologies from funds like the Longevity Fund, folks just aren’t buying insurance products the way they used to.

So when investors see the numbers of users of a formerly ubiquitous product decline from 77% in 1989 to below 60% in 2018, the assumption is that there’s room for new companies to come in and provide better service.

Scads of investors have taken the same bet, which makes Ethos a marketing play as much as anything else. In the company’s press release it touts the fast, easy, and inexpensive process for getting a quote.

The initial process requires only four questions to get a quote and a ten-minute survey to get a policy (in most cases). The company says 99% of its applicants don’t need a medical exam or blood test to get a policy.

What may have been most interesting to investors is the pedigree of the company’s co-founders. Peter Colis and Lingke Wang have both worked in the insurance industry before; they previously co-founded a life insurance marketplace called Ovid Life.

“Life insurance is critical for families, but the process is broken for those who want and need it,” said Peter Colis. “We are consumer advocates, intensely focused on expanding life insurance accessibility to the millions of US families who have college debt, mortgages​, spouses and children​ to care for, and who want to be financially empowered to live their lives without worry.”

Audit of NHS Trust’s app project with DeepMind raises more questions than it answers

A third party audit of a controversial patient data-sharing arrangement between a London NHS Trust and Google DeepMind appears to have skirted over the core issues that generated the controversy in the first place.

The audit — conducted by law firm Linklaters — of the Royal Free NHS Foundation Trust’s acute kidney injury detection app system, Streams, which was co-developed with Google DeepMind (using an existing NHS algorithm for early detection of the condition), does not examine the problematic 2015 information-sharing agreement inked between the pair which allowed data to start flowing.

“This Report contains an assessment of the data protection and confidentiality issues associated with the data protection arrangements between the Royal Free and DeepMind. It is limited to the current use of Streams, and any further development, functional testing or clinical testing, that is either planned or in progress. It is not a historical review,” writes Linklaters, adding that: “It includes consideration as to whether the transparency, fair processing, proportionality and information sharing concerns outlined in the Undertakings are being met.”

Yet it was the original 2015 contract that triggered the controversy, after it was obtained and published by New Scientist. The wide-ranging document raised questions over the broad scope of the data transfer and the legal bases for sharing patients’ information, and over whether regulatory processes intended to safeguard patients and patient data had been sidelined by the two main parties involved in the project.

In November 2016 the pair scrapped and replaced the initial five-year contract with a different one — which put in place additional information governance steps.

They also went on to roll out the Streams app for use on patients in multiple NHS hospitals — despite the UK’s data protection regulator, the ICO, having instigated an investigation into the original data-sharing arrangement.

And just over a year ago the ICO concluded that the Royal Free NHS Foundation Trust had failed to comply with Data Protection Law in its dealings with Google’s DeepMind.

The audit of the Streams project was a requirement of the ICO.

Though, notably, the regulator has not endorsed Linklaters’ report. On the contrary, it warns that it’s seeking legal advice and could take further action.

In a statement on its website, the ICO’s deputy commissioner for policy, Steve Wood, writes: “We cannot endorse a report from a third party audit but we have provided feedback to the Royal Free. We also reserve our position in relation to their position on medical confidentiality and the equitable duty of confidence. We are seeking legal advice on this issue and may require further action.”

In a section of the report listing exclusions, Linklaters confirms the audit does not consider: “The data protection and confidentiality issues associated with the processing of personal data about the clinicians at the Royal Free using the Streams App.”

So essentially the core controversy, related to the legal basis for the Royal Free to pass personally identifiable information on 1.6M patients to DeepMind when the app was being developed, and without people’s knowledge or consent, is going unaddressed here.

And Wood’s statement pointedly reiterates that the ICO’s investigation “found a number of shortcomings in the way patient records were shared for this trial”.

“[P]art of the undertaking committed Royal Free to commission a third party audit. They have now done this and shared the results with the ICO. What’s important now is that they use the findings to address the compliance issues addressed in the audit swiftly and robustly. We’ll be continuing to liaise with them in the coming months to ensure this is happening,” he adds.

“It’s important that other NHS Trusts considering using similar new technologies pay regard to the recommendations we gave to Royal Free, and ensure data protection risks are fully addressed using a Data Protection Impact Assessment before deployment.”

While the report is something of a frustration, given the glaring historical omissions, it does raise some points of interest — including suggesting that the Royal Free should probably scrap a Memorandum of Understanding it also inked with DeepMind, in which the pair set out their ambition to apply AI to NHS data.

This is recommended because the pair have apparently abandoned their AI research plans.

On this Linklaters writes: “DeepMind has informed us that they have abandoned their potential research project into the use of AI to develop better algorithms, and their processing is limited to execution of the NHS AKI algorithm… In addition, the majority of the provisions in the Memorandum of Understanding are non-binding. The limited provisions that are binding are superseded by the Services Agreement and the Information Processing Agreement discussed above, hence we think the Memorandum of Understanding has very limited relevance to Streams. We recommend that the Royal Free considers if the Memorandum of Understanding continues to be relevant to its relationship with DeepMind and, if it is not relevant, terminates that agreement.”

In another section, discussing the NHS algorithm that underpins the Streams app, the law firm also points out that DeepMind’s role in the project is little more than helping provide a glorified app wrapper (on the app design front the project also utilized the UK app studio ustwo, so DeepMind can’t claim app design credit either).

“Without intending any disrespect to DeepMind, we do not think the concepts underpinning Streams are particularly ground-breaking. It does not, by any measure, involve artificial intelligence or machine learning or other advanced technology. The benefits of the Streams App instead come from a very well-designed and user-friendly interface, backed up by solid infrastructure and data management that provides AKI alerts and contextual clinical information in a reliable, timely and secure manner,” Linklaters writes.

What DeepMind did bring to the project, and to its other NHS collaborations, is money and resources — providing its development resources free for the NHS at the point of use, and stating (when asked about its business model) that it would determine how much to charge the NHS for these app ‘innovations’ later.

Yet the commercial services the tech giant is providing to what are public sector organizations do not appear to have been put out to open tender.

Also notably excluded from the Linklaters audit: any scrutiny of the project with regard to competition law, compliance with public procurement rules, and any concerns relating to possible anticompetitive behavior.

The report does highlight one potentially problematic data retention issue for the current deployment of Streams, saying there is “currently no retention period for patient information on Streams” — meaning there is no process for deleting a patient’s medical history once it reaches a certain age.

“This means the information on Streams currently dates back eight years,” it notes, suggesting the Royal Free should set an upper limit on the age of information contained in the system.

While Linklaters largely glosses over the chequered origins of the Streams project, the law firm does make a point of agreeing with the ICO that the original privacy impact assessment for the project “should have been completed in a more timely manner”.

It also describes it as “relatively thin given the scale of the project”.

Giving its response to the audit, health data privacy advocacy group MedConfidential — an early critic of the DeepMind data-sharing arrangement — is roundly unimpressed, writing: “The biggest question raised by the Information Commissioner and the National Data Guardian appears to be missing — instead, the report excludes a “historical review of issues arising prior to the date of our appointment”.

“The report claims the ‘vital interests’ (i.e. remaining alive) of patients is justification to protect against an “event [that] might only occur in the future or not occur at all”… The only ‘vital interest’ protected here is Google’s, and its desire to hoard medical records it was told were unlawfully collected. The vital interests of a hypothetical patient are not vital interests of an actual data subject (and the GDPR tests are demonstrably unmet).

“The ICO and NDG asked the Royal Free to justify the collection of 1.6 million patient records, and this legal opinion explicitly provides no answer to that question.”

Cambridge Analytica’s Nix said it licensed ‘millions of data points’ from Acxiom, Experian, Infogroup to target US voters

The repeat grilling by the U.K. parliament’s DCMS committee today of Alexander Nix, the former CEO of the now-defunct Cambridge Analytica — aka the controversial political and commercial ad agency at the center of a Facebook data misuse scandal — did not shed much new light on what may or may not have been going on inside the company.

But one nugget of information Nix let slip was the names of specific data aggregators he said Cambridge Analytica had bought “consumer and lifestyle” information on U.S. voters from, to link to voter registration data it also paid to acquire — apparently using that combined database to build models to target American voters in the 2016 presidential election, rather than using data improperly obtained from Facebook.

This is more information than Cambridge Analytica has thus far disclosed to one U.S. voter, professor David Carroll, who in January last year lodged a subject access request with the U.K.-based company after learning it had processed his personal information — only to be fobbed off with a partial disclosure.

Carroll persisted, and made a complaint to the U.K.’s data protection watchdog, and last month the ICO ordered Cambridge Analytica to provide him with all the data it held on him. The deadline for that passed yesterday — with no response.

The committee questioned Nix closely over responses he had given it at his earlier appearance in February, when he denied that Cambridge Analytica used Facebook data as the foundational data set for its political ad targeting business.

He had instead said that the work Dr. Aleksandr Kogan did for the company was “fruitless” and thus that the Facebook data Kogan had harvested and supplied to it had not been used.

“It wasn’t the foundational data set on which we built our company,” said Nix today. “Because we went out and we licensed millions of data points on American individuals from very large reputable data aggregators and data vendors such as Acxiom, Experian, Infogroup. That was the cornerstone of our data base together with political data — voter file data, I beg your pardon — which again is commercially available in the United States. That was the cornerstone of our company and on which we continued to build the company after we realized that the GSR data was fruitless.”

“The data that Dr. Kogan gave to us was modeled data and building a model on top of a model proved to be less statistically accurate… than actually just using Facebook’s own algorithms for placing advertising communications. And that was what we found out,” he added. “So I stand by that statement that I made to you before — and that was echoed and amplified in much more technical detail by Dr. Kogan.”

And Kogan did indeed play down the utility of the work he did for Cambridge Analytica — claiming it was essentially useless when he appeared before the committee back in April.

Asked about the exact type of data Cambridge Analytica/SCL acquired and processed from data brokers, Nix told the committee: “This is largely — largely — consumer and lifestyle data. So this is data on, for instance, loyalty card data, transaction data, this is data that pertains to lifestyle choices, such as what car you drive or what magazines you read. It could be data on consumer habits. And together with some demographic and geographic data — and obviously the voter data, which is very important for U.S. politics.”

We’ve asked the three data brokers named by Nix to confirm Cambridge Analytica was a client of theirs, and the types of data it licensed from them, and will update this report with any response.

Fake news committee told it’s been told fake news

What was most notable on this, Nix’s second appearance in front of the DCMS committee — which is investigating the role and impact of fake news/online disinformation on the political process — were his attempts to shift the spotlight via a string of defiant denials that there was much of a scandal to see here.

He followed a Trumpian strategy of trying to cast himself (and his former company) as victims — framing the story as a liberal media conspiracy and claiming no evidence of wrongdoing or unethical behavior had been produced.

Cambridge Analytica whistleblower Chris Wylie, who Nix had almost certainly caught sight of sitting in the public gallery, was described as a “bitter and jealous” individual who had acted out of resentment and spite on account of the company’s success.

Though the committee pushed back against that characterization, pointing out that Wylie has provided ample documents backing up his testimony, and that it has also taken evidence from multiple sources — not just from one former employee.

Nix did not dispute that the Facebook data-harvesting element of the scandal had been a “debacle,” as he put it.

Though he reiterated Cambridge Analytica’s previous denial that it was ever the recipient of the full data set Kogan acquired from Facebook — which Facebook confirmed in April consisted of information on as many as 87 million of its users — saying it “only received data on about 26 million-27 million individuals in the USA.”

He also admitted to personally being “foolish” in what he had been caught saying to an undercover Channel 4 reporter — when he had appeared to suggest Cambridge Analytica used tactics such as honeytraps and infiltration to gain leverage against clients’ political opponents (comments that got him suspended as CEO), saying he had only been talking in hypotheticals in his “overzealousness to secure a contract” — and once again painting himself as the victim of the “skillful manipulation of a journalist.”

He also claimed the broadcaster had taken his remarks out of context, claiming too that they had heavily edited the footage to make it look worse (a claim Channel 4 phoned in to the committee to “heavily” refute during the session).

But those few apologetic notes did little to soften the tone of profound indignation Nix struck throughout almost the entire session.

He came across as poised and well-versed in his channeled outrage. Though he has of course had plenty of time since his earlier appearance — when the story had not yet become a major scandal — to construct a version of events that could best serve to set the dial to maximum outrage.

Nix also shut down several lines of the committee’s questions, refusing to answer whether Cambridge Analytica/SCL had gone on to repeat the Facebook data-harvesting method at the heart of the scandal themselves, for example.

Nor would he disclose who the owners and shareholders of Cambridge Analytica and SCL Group are — claiming in both cases that ongoing investigations prevented him from doing so.

Though, in the case of the Information Commissioner’s Office’s ongoing investigation into social media analytics and political campaigning — which resulted in the watchdog raiding the offices of Cambridge Analytica in March — committee chair Damian Collins made a point of stating that the ICO had assured the committee it had no objection to Nix answering its questions.

Nonetheless Nix declined.

He also refused to comment on fresh allegations printed in the FT suggesting he had personally withdrawn $8 million from Cambridge Analytica before the company collapsed into administration.

Some answers were forthcoming when the committee pressed him on whether Aggregate IQ, a Canadian data company that has been linked to Cambridge Analytica, and which Nix described today as a “subcontractor” for certain pieces of work, had ever had access to raw data or modeled data that Cambridge Analytica held.

The committee’s likely interest in pursuing that line of questioning was to try to determine whether AIQ could have gained access to the cache of Facebook user data that found its way (via Kogan) to Cambridge Analytica — and thus whether it could have used it for its own political ad targeting purposes.

AIQ received £3.5 million from leave campaign groups in the run up to the U.K.’s 2016 EU referendum campaign, and has been described by leave campaigners as instrumental in securing their win, though exactly where it obtained data for targeting referendum ads has been a key question for the enquiry.

On this Nix said: “It wouldn’t be unusual for AIQ or Cambridge Analytica to work on a client’s data sets… And to have access to the data whilst we were working on them. But that didn’t entitle us to have any privileges over that data or any wherewithal to make a copy or retain any of that data ourselves.

“The relationship with AIQ would not have been dissimilar to that — as a subcontractor who was brought in to assist us on projects, they would have had, possibly, access to some of the data… whether that was modeled data or otherwise. But again that would be covered by the contract relationship that we have with them.”

Though he also said he couldn’t give a concrete answer on whether or not AIQ had had access to any raw data, adding: “I did speak to my data team prior to this hearing and they assured me there was no raw data that went into the Rippon platform [voter engagement platform AIQ built for Cambridge Analytica]. I can only defer to their expertise.”

Also on this, in prior evidence to the committee Facebook said it did not believe AIQ had used the Facebook user data obtained via Kogan’s apps for targeting referendum ads because the company had used email address uploads to Facebook’s ad platform for targeting “many” of its ads during the referendum — and it said Kogan’s app had not gathered the email addresses of app installers or their friends.

(And in its evidence to the committee, AIQ’s COO Jeff Silvester also claimed: “The only personal information we use in our work is that which is provided to us by our clients for specific purposes. In doing so, we believe we comply with all applicable privacy laws in each jurisdiction where we work.”)

Today Nix flatly denied that Cambridge Analytica had played any role in the U.K.’s referendum campaign, despite the fact it was already known to have done some “scoping work” for UKIP — work it invoiced the party for (but claims it was never paid), and which Nix did not deny had taken place, though he downplayed it.

“We undertook some scoping work to look at these data. Unfortunately, whilst this work was being undertaken, we did not agree on the terms of a contract, as a consequence the deliverables from this work were not handed over, and the invoice was not paid. And therefore the Electoral Commission was absolutely satisfied that we did not do any work for Leave.EU and that includes for UKIP,” he said.

“At times we undertake eight, nine, 10 national elections a year somewhere around the world. We’ve never undertaken an election in the U.K. so I stand by my statement that the U.K. was not a target country of interest to us. Obviously the referendum was a unique moment in international campaigning and for that reason it was more significant than perhaps other opportunities to work on political campaigns might have been which was why we explored it. But we didn’t work on that campaign either.”

In a less comfortable moment for Nix, committee member Christian Matheson referred to a Cambridge Analytica document that the committee had obtained — described as a “digital overview” — and which listed “denial of service attacks” among the “digital interventions” apparently being offered by it as services.

Did you ever undertake any denial of service attacks, Nix was asked?

“So this was a company that we looked at forming, and we never formed. And that company never undertook any work whatsoever,” he responded. “In answer to your question, no we didn’t.”

Why did you consider it, wondered Matheson?

“Uh, at the time we were looking at, uh, different technologies, expanding into different technological areas and, uh, this seemed like, uh, an interesting, uh, uh, business, but we didn’t have the capability was probably the truth to be able to deliver meaningfully in this business,” said Nix. “So.”

Matheson: “Was it illegal at that time?”

Nix: “I really don’t know. I can’t speak to technology like that.”

Matheson: “Right. Because it’s illegal now.”

Nix: “Right. I don’t know. It’s not something that we ever built. It’s not something that we ever undertook. Uh, it’s a company that was never realized.”

Matheson: “The only reason I ask is because it would give me concern that you have the mens rea to undertake activities which are, perhaps, outside the law. But if you never went ahead and did it, fair enough.”

Another moment of discomfort for Nix was when the committee pressed him about money transfers between Cambridge Analytica/SCL’s various entities in the U.S. and U.K. — pointing out that if funds were being shifted across the Atlantic for political work and not being declared that could be legally problematic.

Though he fended this off by declining to answer — again citing ongoing investigations.

He was also asked where the various people had been based when Cambridge Analytica had been doing work for U.S. campaigns and processing U.S. voters’ data — with Collins pointing out that if that had been taking place outside the U.S. it could be illegal under U.S. law. But again he declined to answer.

“I’d love to explain this to you. But this again touches on some of these investigations — I simply can’t do that,” he said.

Amazon latest to face UK complaint over ‘bogus self-employment’

Amazon is the latest tech giant to be targeted by a legal challenge in the UK related to gig economy working practices.

The UK’s GMB Union is filing suit on behalf of couriers for three delivery companies used by Amazon — accusing the suppliers of making bogus claims that the delivery drivers were self-employed, and thus denying them employment rights such as the national minimum wage and holiday pay.

The three Amazon suppliers in question are: Prospect Commercials Limited, Box Group Limited and Lloyd Link Logistics Limited.

The GMB Union says one of the drivers involved in the case recounted his experience of leaving the house at 6am, not returning from work until 11pm — and still having £1 per undelivered parcel deducted from his wages.

On more than one occasion the driver was also told he would not be paid if he did not complete a route — and the union said he had sometimes driven while “half asleep at the wheel” in order to ensure he got paid.

Two of the three claimants in the lawsuit are also claiming whistleblower status, saying they were dismissed after they raised concerns about working practices. Among their claims are that —

  • the number of parcels allocated to drivers resulted in excessive hours and/or driving unsafely to meet targets;
  • drivers were expected to wait a significant time to load their vans, extending their working hours;
  • drivers were driving whilst tired, which posed a threat to their safety and other road users; and
  • drivers were being underpaid and not being paid amounts that they were contractually entitled to.

The GMB Union says these whistleblowing claims are also being brought directly against Amazon on the basis that it was the company that determined the way the drivers should work.

In a statement, Tim Roache, GMB general secretary, told us: “Amazon is a global company that makes billions. It’s absolutely galling that they refuse to afford the people who make that money for them even the most basic rights, pay and respect. The day to day reality for many of our members who deliver packages for Amazon, is unrealistic targets, slogging their guts out only to have deductions made from their pay when those targets aren’t met and being told they’re self-employed without the freedom that affords.

“Companies like Amazon and their delivery companies can’t have it both ways — they can’t decide they want all of the benefits of having an employee, but refuse to give those employees the pay and rights they’re entitled to. Guaranteed hours, holiday pay, sick pay, pension contributions are not privileges companies can dish out when they fancy. They are the legal right of all UK workers, and that’s what we’re asking the courts to rule on.”

Amazon UK declined to answer any specific questions but a spokesperson sent us this statement:

Our delivery providers are contractually obligated to ensure drivers they engage receive the National Living Wage and are expected to pay a minimum of £12 per hour, follow all applicable laws and driving regulations and drive safely. Allegations to the contrary do not represent the great work done by around 100 small businesses generating thousands of work opportunities for delivery drivers across the UK.

Amazon is proud to offer a wide variety of work opportunities across Britain—full-time or part-time employment, or be your own boss. Last year we created 5,000 new permanent jobs on top of thousands of opportunities for people to work independently with the choice and flexibility of being their own boss—either through Amazon Logistics, Amazon Flex, or Amazon Marketplace.

The legal challenge is just the latest in the UK related to gig economy employment classifications. The most high-profile to date involves Uber — which in October 2016 lost an employment tribunal case challenging the self-employed status of a group of Uber drivers, with judges deeming them to be workers.

Uber has since lost an appeal against the ruling but is continuing to challenge it. Yet at the same time the company has announced personal injury and illness insurance products for drivers and riders in the region — in what looks very much like an effort to shrink its legal liabilities as gig economy conditions come under increased legal and political scrutiny in Europe.

Complaints related to gig economy working conditions — and including delivery companies specifically — have been facing parliamentary scrutiny in the UK for many months now.

In parallel, the UK government has been reviewing employment law, including to take account of technology-driven changes to work and working patterns. And in February it announced a package of labor market reforms intended to “build an economy that works for everyone” — with the government making itself accountable for what it dubbed “good quality work”, not just the quantity of jobs that are available.

The reforms were billed as expanding workers rights — with the government claiming that “millions” of workers would get new day-one rights, as well as having their rights bolstered by tougher enforcement for sick and holiday pay.

Although it also announced four consultations to help feed the reforms, so their full and final shape isn’t yet clear. And court decisions flowing from gig economy legal challenges are likely to be influential in shaping future employment law.

Amazon has faced other concerns related to its working practices in the UK. Earlier this month the FT reported on a separate GMB Union investigation related to working practices inside Amazon’s UK warehouses — which have been the focus of long-standing concerns over pay and working conditions.

The union filed Freedom of Information requests with ambulance services near the warehouses and said it found that ambulances had been called to the centers 600 times in the last three years. According to its investigation there were 115 call-outs to just one Amazon center, in Rugeley, near Birmingham, which employs more than 1,800 people. By contrast, it said it found just eight ambulance calls over the same period from a nearby Tesco warehouse — where 1,300 people work.

However Amazon told the newspaper that most of the call-outs were associated with “personal health events”, rather than being work related, adding: “It is simply not correct to suggest that we have unsafe working conditions based on this data or on unsubstantiated anecdotes.”

Brexit blow for UK’s hopes of helping set AI rules in Europe

The UK’s hopes of retaining an influential role for its data protection agency in shaping European Union regulations post-Brexit — including helping to set any new Europe-wide rules around artificial intelligence — look well and truly dashed.

In a speech at the weekend in front of the International Federation for European Law, the EU’s chief Brexit negotiator, Michel Barnier, shot down the notion of anything other than a so-called ‘adequacy decision’ being on the table for the UK after it exits the bloc.

If granted, an adequacy decision is an EU mechanism for enabling citizens’ personal data to more easily flow from the bloc to third countries — as the UK will be after Brexit.

Such decisions are only granted by the European Commission after a review of a third country’s privacy standards that’s intended to determine that they offer essentially equivalent protections as EU rules.

But the mechanism does not allow for the third country to be involved, in any shape or form, in discussions around forming and shaping the EU’s rules themselves. So, in the UK’s case, the country would be going from having a seat at the rule-making table to being shut out of the process entirely — at a time when the EU is really setting the global agenda on digital regulations.

“The United Kingdom decided to leave our harmonised system of decision-making and enforcement. It must respect the fact that the European Union will continue to work on the basis of this system, which has allowed us to build a single market, and which allows us to deepen our single market in response to new challenges,” said Barnier in Lisbon on Saturday.

“And, as indicated in the European Council guidelines, the UK must understand that the only possibility for the EU to protect personal data is through an adequacy decision. It is one thing to be inside the Union, and another to be outside.”

“Brexit is not, and never will be, in the interest of EU businesses,” he added. “And it will especially run counter to the interests of our businesses if we abandon our decision-making autonomy. This autonomy allows us to set standards for the whole of the EU, but also to see these standards being replicated around the world. This is the normative power of the Union, or what is often called ‘the Brussels effect’.

“And we cannot, and will not, share this decision-making autonomy with a third country, including a former Member State who does not want to be part of the same legal ecosystem as us.”

Earlier this month the UK’s Information Commissioner, Elizabeth Denham, told MPs on the UK parliament’s committee for exiting the European Union that a bespoke data agreement that gave the ICO a continued role after Brexit would be a far superior option to an adequacy agreement — pointing out that the UK stands to lose influence at a time when the EU is setting global privacy standards via the General Data Protection Regulation (GDPR), which came into full force last Friday.

“At this time when the GDPR is in its infancy, participating in shaping and interpreting the law I think is really important. And the group of regulators that sit around the table at the EU are the most influential blocs of regulators — and if we’re outside of that group and we’re an observer we’re not going to have the kind of effect that we need to have with big tech companies. Because that’s all going to be decided by that group of regulators,” she warned.

“The European Data Protection Board will set the weather when it comes to standards for artificial intelligence, for technologies, for regulating big tech. So we will be a less influential regulator, we will continue to regulate the law and protect UK citizens as we do now, but we won’t be at the leading edge of interpreting the GDPR — and we won’t be bringing British values to that table if we’re not at the table.”

She also pointed out that without a bespoke arrangement to accommodate the ICO her office would also be shut out of participating in the GDPR’s one-stop shop, which allows EU data protection agencies to work together and co-ordinate regulatory actions, and which she said “would bring huge advantages to both sides and also to British businesses”.

Huge advantages that the UK stands to lose as a result of Brexit.

With the ICO excluded from the GDPR’s one-stop-shop mechanism, UK businesses will have to choose an alternative data protection agency within the EU to act as their lead regulator after Brexit — putting yet another burden on startups, which will need to build new relationships with a regulator in the EU.

The Irish Data Protection Commission seems the likely candidate for UK companies to look to after Brexit, when the ICO is on the sidelines of GDPR, given shared language and proximity. (And Ireland’s DPC has been ramping up its headcount in anticipation of handling more investigations as a result of the new regulation.)

But UK businesses would clearly prefer to be able to continue working with their domestic regulator. Unfortunately, though, Brexit closes the door on that option.

We’ve reached out to the ICO for comment and will update this story with any response.

The UK government has committed to aligning the country with GDPR regardless of Brexit — as it seeks to avoid the economic threat of EU-UK data flows being cut off if it’s not judged to be providing adequate data protection.

Looking ahead, that also essentially means the UK will need to keep its regulatory regime aligned with the EU’s in perpetuity — or risk being deemed inadequate, with, once again, the risk of data flows being cut off (or at the very least businesses scrambling to put in place alternative legal arrangements to authorize their data flows, and saddled with the expense of doing so, as happened when Safe Harbor was struck down in 2015).

So, thanks to Brexit, it will be the rest of Europe setting the agenda on regulating AI — with the UK bound to follow.