Facebook prototypes tool to show how many minutes you spend on it

Are you ready for some scary numbers? After months of Mark Zuckerberg talking about how “Protecting our community is more important than maximizing our profits,” Facebook is preparing to turn that commitment into a Time Well Spent product.

Buried in Facebook’s Android app is an unreleased “Your Time on Facebook” feature. It shows the tally of how much time you spent on the Facebook app on your phone on each of the last seven days, and your average time spent per day. It lets you set a daily reminder that alerts you when you’ve reached your self-imposed limit, plus a shortcut to change your Facebook notification settings.

Facebook confirmed the feature development to TechCrunch, with a spokesperson telling us, “We’re always working on new ways to help make sure people’s time on Facebook is time well spent.”

The feature could help Facebook users stay mindful of how long they’re staring at the social network. This self-policing could be important since both iOS and Android are launching their own screen time monitoring dashboards that reveal which apps are dominating your attention and can alert you or lock you out of apps when you hit your time limit. When Apple demoed the feature at WWDC, it used Facebook as an example of an app you might use too much.

Images of Facebook’s digital wellbeing tool come courtesy of our favorite tipster and app investigator Jane Manchun Wong. She previously helped TechCrunch scoop the development of features like Facebook Avatars, Twitter encrypted DMs and Instagram Usage Insights — a Time Well Spent feature that looks very similar to this one on Facebook.

Our report on Instagram Usage Insights led the sub-company’s CEO Kevin Systrom to confirm the upcoming feature, saying “It’s true . . . We’re building tools that will help the IG community know more about the time they spend on Instagram – any time should be positive and intentional . . . Understanding how time online impacts people is important, and it’s the responsibility of all companies to be honest about this. We want to be part of the solution. I take that responsibility seriously.”

Facebook has already made changes to its News Feed algorithm designed to reduce the presence of low-quality but eye-catching viral videos. That led to Facebook’s first-ever usage decline in North America in Q4 2017, with a loss of 700,000 daily active users in the region. Zuckerberg said on an earnings call that this change “reduced time spent on Facebook by roughly 50 million hours every day.”

Zuckerberg has been adamant that not all time spent on Facebook is bad. Instead, as we argued in our piece “The difference between good and bad Facebooking,” it’s asocial, zombie-like passive browsing and video watching that’s harmful to people’s wellbeing, while active sharing, commenting and chatting can make users feel more connected and supported.

But that distinction isn’t visible in this prototype of the “Your Time on Facebook” tool, which appears to treat all time spent the same. If Facebook were able to measure our active versus passive time on its app and convey the difference to our health, it could start to encourage us to either put down the app or use it to communicate directly with friends when we find ourselves mindlessly scrolling the feed or enviously viewing people’s photos.

‘Gaming disorder’ is officially recognized by the World Health Organization

Honestly, “gaming disorder” sounds like a phrase tossed around by irritated parents and significant others. After much back and forth, however, the term was just granted validity, as the World Health Organization opted to include it in the latest edition of its International Classification of Diseases.

The volume, out this week, diagnoses the newly minted disorder with three key telltale signs:

  1. Impaired control over gaming (e.g. onset, frequency, intensity, duration, termination, context)
  2. Increasing priority given to gaming to the extent that gaming takes precedence over other life interests and daily activities
  3. Continuation or escalation of gaming despite the occurrence of negative consequences

I can hear the collective sound of many of my friends gulping at the sound of eerily familiar symptoms. Of course, the disorder has been criticized from a number of corners, including health professionals who have written it off as being overly broad and subjective. And, of course, the potential impact greatly differs from person to person and game to game.

The effects as specified above share common ground with other similar addictive activities defined by the WHO, including gambling disorder:

“Disorders due to addictive behaviours are recognizable and clinically significant syndromes associated with distress or interference with personal functions that develop as a result of repetitive rewarding behaviours other than the use of dependence-producing substances,” writes the WHO. “Disorders due to addictive behaviors include gambling disorder and gaming disorder, which may involve both online and offline behaviour.”

In spite of what may appear to be universal symptoms, however, the organization is quick to note that the prevalence of gaming disorder, as defined by the WHO, is actually “very low.” WHO member Dr. Vladimir Poznyak tells CNN, “Millions of gamers around the world, even when it comes to the intense gaming, would never qualify as people suffering from gaming disorder.”

Bet money on yourself with Proveit, the 1-vs-1 trivia app

Pick a category, wager a few dollars and double your money in 60 seconds if you’re smarter and faster than your opponent. Proveit offers a fresh take on trivia and game show apps by letting you win or lose cash on quick 10-question, multiple choice quizzes. Sick of waiting to battle a million people on HQ for a chance at a fraction of the jackpot? Play one-on-one anytime you want or enter into scheduled tournaments with $1,000 or more in prize money, while Proveit takes around 10 percent to 15 percent of the stakes.

“I’d play Jeopardy all the time with my family and wondered ‘why can’t I do this for money?’ ” says co-founder Prem Thomas.

Remarkably, it’s all legal. The Proveit team spent two years getting approved as “skill-based gaming” that exempts it from some laws that have hindered fantasy sports betting apps. And for those at risk of addiction, Proveit offers players and their loved ones a way to cut them off.

The scrappy Florida-based startup has raised $2.3 million so far. With fun games and a snackable format, Proveit lets you enjoy the thrill of betting at a moment’s notice. That could make it a favorite amongst players and investors in a world of mobile games without consequences.

“I could spend $50 for a three-hour experience in a movie theater, or I could spend $2 to enter a Proveit Movies tournament that gives me the opportunity to compete for several thousand dollars in prize money,” says co-founder Nathan Lehoux. “That could pay for a lot of movies tickets!”

Proving it as outsiders

St. Petersburg, Fla. isn’t exactly known as an innovation hub. But there, across the bay from Tampa and far from the distractions, copycatting and astronomical rent of Silicon Valley, the founders of Proveit built something different. “What if people could play trivia for money just like fantasy sports?” Thomas asked his friend Lehoux.

That’s the same pitch that got me interested when Lehoux tracked me down at TechCrunch’s SXSW party earlier this year. Lehoux is a jolly, outgoing fella who became interested in startups while managing some angel investments for a family office. Thomas had worked in banking and health before starting a yoga-inspired sandals brand. Neither had computer science backgrounds, and they’d raised just a $300,000 seed round from childhood friend Hilt Tatum who’d co-founded beleaguered real money gambling site Absolute Poker.

Yet when Lehoux thrust the Proveit app into my hand, even on a clogged mobile network at SXSW, it ran smoothly and I immediately felt the adrenaline rush of matching wits for money. They’d initially outsourced development to an NYC firm that burned much of their initial $300,000 seed funding without delivering. Luckily, the Ukrainian developer they’d hired to help review that shop’s code helped them spin up a whole team there that built an impressive v1 of Proveit.

Meanwhile, the founders worked with a gaming lawyer to secure approvals in 33 states including California, New York, and Texas. “This is a highly regulated and highly controversial space due to all the negative press that fantasy sports drummed up,” says Lehoux. “We talked to 100 banks and processors before finding one who’d work with us.”

Proveit founders (from left): Nathan Lehoux, Prem Thomas

Proveit was finally legal for three-fourths of the U.S. population, and had a regulatory moat to deter competitors. To raise launch capital, the duo tapped their Florida connections to find John Morgan, a high-profile lawyer and medical marijuana advocate, who footed a $2 million angel round. A team of grad students in Tampa Bay was assembled to concoct the trivia questions, while a third-party AI company assists with weeding out fraud.

Proveit launched early this year, but beyond a SXSW promotion, it has stayed under the radar as it tinkers with tournaments and retention tactics. The app has now reached 80,000 registered users, 6,000 multi-deposit hardcore loyalists and has paid out $750,000 total. But watching HQ Trivia climb to more than 1 million players per game has proven there’s a much bigger market for Proveit to tap.

Quiz for cash

“We’re actually fans of HQ. We play. We think they’ve revolutionized the game show,” Lehoux tells me. “What we want to do is provide something very different. With HQ, you can’t pick your category. You can’t pick the time you want to play. We want to offer a much more customized experience.”

To play Proveit, you download its iOS-only app and fund your account with a buy-in of $20 to $100, earning more bonus cash with bigger packages (no minors allowed). Then you play a practice round to get the hang of it — something HQ sorely lacks. Once you’re ready, you pick from a list of game categories, each with a fixed wager of about $1 to $5 to play (choose your own bet is in the works). You can test your knowledge of superheroes, the ’90s, quotes, current events, rock ‘n roll, Seinfeld, tech and a rotating selection of other topics.

In each Proveit game you get 10 questions, one at a time, with up to 15 seconds to answer each. Most games are head-to-head, with options to be matched with a stranger, or a friend via phone contacts. You score more for quick answers, discouraging cheating via Google, and get penalized for errors. At the end, your score is tallied up and compared to your opponent’s, with the winner keeping both players’ wagers minus Proveit’s cut. In a minute or so, you could lose $3 or win $5.28. Afterwards you can demand a rematch, go double-or-nothing, head back to the category list or cash out if you have more than $20.
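
The speed-weighted scoring described above can be sketched roughly as follows. Proveit hasn’t published its actual formula, so every constant here (base points, penalty size, linear speed bonus) is an assumption for illustration only:

```python
# Hypothetical sketch of Proveit-style scoring: faster correct answers
# earn more points, wrong answers cost points. Constants are invented.

def question_score(correct: bool, seconds_taken: float,
                   time_limit: float = 15.0,
                   base_points: int = 100,
                   wrong_penalty: int = 25) -> int:
    """Score a single question."""
    if not correct:
        return -wrong_penalty
    # Linear speed bonus: an instant answer earns 2x base points,
    # an answer right at the buzzer earns 1x base points.
    speed_bonus = max(0.0, (time_limit - seconds_taken) / time_limit)
    return round(base_points * (1 + speed_bonus))

def match_winner(scores_a: list, scores_b: list) -> str:
    """Compare two players' 10-question totals; winner takes the pot."""
    total_a, total_b = sum(scores_a), sum(scores_b)
    if total_a == total_b:
        return "tie"
    return "A" if total_a > total_b else "B"
```

Under a scheme like this, two players can both answer every question correctly and still split on speed alone, which matches the dynamic the game is built around.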

The speed element creates intense, white-knuckled urgency. You can get every question right and still lose if your opponent is faster. So instead of second-guessing until locking in your choice just before the buzzer like on HQ, where one error knocks you out, you race to convert your instincts into answers on Proveit. The near instant gratification of a win or humiliation of a defeat nudge you to play again rather than having to wait for tomorrow’s game.

Proveit will have to compete with free apps like Trivia Crack, prize games like student loan repayer Givling and virtual currency-based Fleetwit, and the juggernaut HQ.

“The large tournaments are the big draw,” Lehoux believes. Instead of playing one-on-one, you can register and ante up for a scheduled tournament where you compete in a single round against hundreds of players for a grand prize. Right now, the players with the top 20 percent of scores win at least their entry fee back or more, with a few geniuses collecting the cash of the rest of the losers.

Just like how DraftKings and FanDuel built their user base with big jackpot tournaments, Proveit hopes to do the same… then get people playing little one-on-one games in-between as they wait for their coffee or commute home from work.

Gaming or gambling?

Thankfully, Proveit understands just how addictive it can be. The startup offers a “self-exclusion” option: “If you feel that you need to take greater control of your life as it relates to skill-gaming,” users can email it to say they shouldn’t play any more, and it will freeze or close their account. Family members and others can also request you be frozen if you share a bank account, they’re your dependent, they’re obligated for your debts or you owe unpaid child support.

“We want Proveit to be a fun, intelligent entertainment option for our players. It’s impossible for us to know who might have an issue with real-money gaming,” Lehoux tells me. “Every responsible real-money game provides this type of option for its users.”

That isn’t necessarily enough to thwart addiction, because dopamine can turn people into dopes. Just because the outcome is determined by your answers rather than someone else’s touchdown pass doesn’t change that.

Skill-based betting from home could be much more ripe for abuse than having to drag yourself to a casino, while giving people an excuse that they’re not gambling on chance. Zynga’s titles like Farmville have been turning people into micro-transaction zombies for a decade, and you can’t even win money from them. Simultaneously, sharks could study up on a category and let Proveit’s random matching deliver them willing rookies to strip cash from all day. “This is actually one of the few forms of entertainment that rewards players financially for using their brain,” Lehoux defends.

With so much content to consume and consequence-free games to play, there’s an edgy appeal to the danger of Proveit and apps like it. Its moral stance hinges on how much autonomy you think adults should be afforded. From Coca-Cola to Harley-Davidson to Caesar’s Palace, society has allowed businesses to profit off questionably safe products that some enjoy.

For better and worse, Proveit is one of the most exciting mobile games I’ve ever played.

First look at Instagram’s self-policing Time Well Spent tool

Are you Overgramming? Instagram is stepping up to help you manage overuse rather than leaving it to iOS and Android’s new screen time dashboards. Last month, after TechCrunch first reported Instagram was prototyping a Usage Insights feature, the Facebook sub-company’s CEO Kevin Systrom confirmed its forthcoming launch.

Tweeting our article, Systrom wrote “It’s true . . . We’re building tools that will help the IG community know more about the time they spend on Instagram – any time should be positive and intentional . . . Understanding how time online impacts people is important, and it’s the responsibility of all companies to be honest about this. We want to be part of the solution. I take that responsibility seriously.”

Now we have our first look at the tool via Jane Manchun Wong, who’s recently become one of TechCrunch’s favorite sources thanks to her skills at digging new features out of apps’ Android APK code. Though Usage Insights might change before an official launch, these screenshots give us an idea of what Instagram will include. Instagram declined to comment, saying it didn’t have any more to share about the feature at this time.

This unlaunched version of Instagram’s Usage Insights tool offers users a daily tally of their minutes spent on the app. They’ll be able to set a time spent daily limit, and get a reminder once they exceed that. There’s also a shortcut to manage Instagram’s notifications so the app is less interruptive. Instagram has been spotted testing a new hamburger button that opens a slide-out navigation menu on the profile. That might be where the link for Usage Insights shows up, judging by this screenshot.

Instagram doesn’t appear to be going so far as to lock you out of the app after your limit, or fading it to grayscale which might annoy advertisers and businesses. But offering a handy way to monitor your usage that isn’t buried in your operating system’s settings could make users more mindful.

Instagram has an opportunity to be a role model here, especially if it gives its Usage Insights feature sharper teeth. For example, rather than a single notification when you hit your daily limit, it could remind you every 15 minutes thereafter, or create some persistent visual flag so you know you’ve broken your self-imposed rule.

Instagram has already started to push users towards healthier behavior with a “You’re all caught up” notice when you’ve seen everything in your feed and should stop scrolling.

I expect more apps to attempt to self-police with tools like these rather than leaving themselves at the mercy of iOS’s Screen Time and Android’s Digital Wellbeing features that offer more drastic ways to enforce your own good intentions.

Both let you see overall usage of your phone and stats about individual apps. iOS lets you easily dismiss alerts about hitting your daily limit in an app but delivers a weekly usage report (ironically via notification), while Android will gray out an app’s icon and force you to go to your settings to unlock an app once you exceed your limit.

For Android users especially, Instagram wants to avoid looking like such a time sink that you put one of those hard limits on your use. In that sense, self-policing shows both empathy for its users’ mental health, but is also a self-preservation strategy. With Instagram slated to launch a long-form video hub that could drive even longer session times this week, Usage Insights could be seen as either hypocritical or more necessary than ever.

New time management tools coming to iOS (left) and Android (right). Images via The Verge

Instagram is one of the world’s most beloved apps, but also one of the most easily abused. From envy spiraling as you watch the highlights of your friends’ lives to body image issues propelled by its endless legions of models, there are plenty of ways to make yourself feel bad scrolling the Insta feed. And since there’s so little text, no links, and few calls for participation, it’s easy to zombie-browse in the passive way research shows is most dangerous.

We’re in a crisis of attention. Mobile app business models often rely on maximizing our time spent to maximize their ad or in-app purchase revenue. But carrying the bottomless temptation of the Internet in our pockets threatens to leave us distracted, less educated, and depressed. We’ve evolved to crave dopamine hits from blinking lights and novel information, but never had such an endless supply.

There’s value to connecting with friends by watching their days unfold through Instagram and other apps. But tech giants are thankfully starting to be held responsible for helping us balance that with living our own lives.

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (Fast Healthcare Interoperability Resources) API deployed by DeepMind for Streams is an open standard, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers, and prohibits connections to other FHIR servers. It’s a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
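
To see why an open FHIR API matters for interoperability, consider how simple it is for any third party to consume FHIR resources. The sketch below parses a hand-written FHIR Observation (a creatinine lab result, the kind of data relevant to kidney-injury alerting) rather than calling a real server; the resource content is illustrative, though the field structure follows the published FHIR standard:

```python
# Minimal illustration of consuming a FHIR Observation resource.
# The sample data is hand-written for this sketch; field names
# (resourceType, code.coding, valueQuantity) follow the FHIR spec.
import json

sample_observation = json.loads("""
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org",
                       "code": "2160-0",
                       "display": "Creatinine [Mass/volume] in Serum"}]},
  "valueQuantity": {"value": 1.8, "unit": "mg/dL"}
}
""")

def summarize(obs: dict) -> str:
    """Pull the human-readable test name and value out of a FHIR Observation."""
    coding = obs["code"]["coding"][0]
    qty = obs["valueQuantity"]
    return f'{coding["display"]}: {qty["value"]} {qty["unit"]}'

print(summarize(sample_observation))
```

Because any developer can write a client like this against any standards-compliant FHIR server, the restriction in this case comes from the contract routing all connections through DeepMind’s servers, not from the technology.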

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

Though they point to DeepMind’s “stated commitment to interoperability of systems,” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add.

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust where the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend the Royal Free terminates its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are, if anything, of greater concern.”

At the same time politicians are also gazing rather more critically on the works and social impacts of tech giants.

The U.K. government, though, has also been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come” — including specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when those technologies involve no AI at all — is already presenting major challenges, putting pressure on existing information governance rules and structures and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resources to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 

Elizabeth Holmes reportedly steps down at Theranos after criminal indictment

Elizabeth Holmes has left her role as CEO of Theranos and has been charged with wire fraud, CNBC and others report. The company’s former president, Ramesh “Sunny” Balwani, was also indicted today by a grand jury.

These criminal charges are separate from the civil ones filed in March by the SEC and already settled. There are 11 charges; two are conspiracy to commit wire fraud (against investors, and against doctors and patients) and the remaining nine are actual wire fraud, with amounts ranging from the cost of a lab test to $100 million.

Theranos’s general counsel, David Taylor, has been appointed CEO. What the role actually entails at the crumbling enterprise is unclear. Holmes, meanwhile, remains chairman of the board.

The FBI Special Agent in Charge of the case against Theranos, John Bennett, said the company engaged in “a corporate conspiracy to defraud financial investors,” and “misled doctors and patients about the reliability of medical tests that endangered health and lives.”

This story is developing. I’ve asked Theranos for comment and will update if I hear back; indeed I’m not even sure anyone is there to respond.

Audit of NHS Trust’s app project with DeepMind raises more questions than it answers

A third-party audit of a controversial patient data-sharing arrangement between a London NHS Trust and Google DeepMind appears to have skirted over the core issues that generated the controversy in the first place.

The audit (full report here) — conducted by law firm Linklaters — of the Royal Free NHS Foundation Trust’s acute kidney injury detection app system, Streams, which was co-developed with Google DeepMind (using an existing NHS algorithm for early detection of the condition), does not examine the problematic 2015 information-sharing agreement inked between the pair which allowed data to start flowing.

“This Report contains an assessment of the data protection and confidentiality issues associated with the data protection arrangements between the Royal Free and DeepMind. It is limited to the current use of Streams, and any further development, functional testing or clinical testing, that is either planned or in progress. It is not a historical review,” writes Linklaters, adding that: “It includes consideration as to whether the transparency, fair processing, proportionality and information sharing concerns outlined in the Undertakings are being met.”

Yet it was the original 2015 contract that triggered the controversy, after it was obtained and published by New Scientist. The wide-ranging document raised questions over the broad scope of the data transfer and the legal bases for patients’ information to be shared, and led to questions over whether regulatory processes intended to safeguard patients and patient data had been sidelined by the two main parties involved in the project.

In November 2016 the pair scrapped and replaced the initial five-year contract with a different one — which put in place additional information governance steps.

They also went on to roll out the Streams app for use on patients in multiple NHS hospitals — despite the U.K.’s data protection regulator, the ICO, having instigated an investigation into the original data-sharing arrangement.

And just over a year ago the ICO concluded that the Royal Free NHS Foundation Trust had failed to comply with Data Protection Law in its dealings with Google’s DeepMind.

The audit of the Streams project was a requirement of the ICO.

Though, notably, the regulator has not endorsed Linklaters’ report. On the contrary, it warns that it’s seeking legal advice and could take further action.

In a statement on its website, the ICO’s deputy commissioner for policy, Steve Wood, writes: “We cannot endorse a report from a third party audit but we have provided feedback to the Royal Free. We also reserve our position in relation to their position on medical confidentiality and the equitable duty of confidence. We are seeking legal advice on this issue and may require further action.”

In a section of the report listing exclusions, Linklaters confirms the audit does not consider: “The data protection and confidentiality issues associated with the processing of personal data about the clinicians at the Royal Free using the Streams App.”

So essentially the core controversy, related to the legal basis for the Royal Free to pass personally identifiable information on 1.6 million patients to DeepMind when the app was being developed, and without people’s knowledge or consent, is going unaddressed here.

And Wood’s statement pointedly reiterates that the ICO’s investigation “found a number of shortcomings in the way patient records were shared for this trial”.

“[P]art of the undertaking committed Royal Free to commission a third party audit. They have now done this and shared the results with the ICO. What’s important now is that they use the findings to address the compliance issues addressed in the audit swiftly and robustly. We’ll be continuing to liaise with them in the coming months to ensure this is happening,” he adds.

“It’s important that other NHS Trusts considering using similar new technologies pay regard to the recommendations we gave to Royal Free, and ensure data protection risks are fully addressed using a Data Protection Impact Assessment before deployment.”

While the report is something of a frustration, given the glaring historical omissions, it does raise some points of interest — including suggesting that the Royal Free should probably scrap a Memorandum of Understanding it also inked with DeepMind, in which the pair set out their ambition to apply AI to NHS data.

This is recommended because the pair have apparently abandoned their AI research plans.

On this Linklaters writes: “DeepMind has informed us that they have abandoned their potential research project into the use of AI to develop better algorithms, and their processing is limited to execution of the NHS AKI algorithm… In addition, the majority of the provisions in the Memorandum of Understanding are non-binding. The limited provisions that are binding are superseded by the Services Agreement and the Information Processing Agreement discussed above, hence we think the Memorandum of Understanding has very limited relevance to Streams. We recommend that the Royal Free considers if the Memorandum of Understanding continues to be relevant to its relationship with DeepMind and, if it is not relevant, terminates that agreement.”

In another section, discussing the NHS algorithm that underpins the Streams app, the law firm also points out that DeepMind’s role in the project is little more than helping provide a glorified app wrapper (on the app design front the project also utilized UK app studio, ustwo, so DeepMind can’t claim app design credit either).

“Without intending any disrespect to DeepMind, we do not think the concepts underpinning Streams are particularly ground-breaking. It does not, by any measure, involve artificial intelligence or machine learning or other advanced technology. The benefits of the Streams App instead come from a very well-designed and user-friendly interface, backed up by solid infrastructure and data management that provides AKI alerts and contextual clinical information in a reliable, timely and secure manner,” Linklaters writes.

What DeepMind did bring to the project, and to its other NHS collaborations, is money and resources — providing its development resources free for the NHS at the point of use, and stating (when asked about its business model) that it would determine how much to charge the NHS for these app ‘innovations’ later.

Yet the commercial services the tech giant is providing to what are public sector organizations do not appear to have been put out to open tender.

Also notably excluded from Linklaters’ audit: any scrutiny of the project vis-a-vis competition law, compliance with public procurement rules, and any concerns relating to possible anticompetitive behavior.

The report does highlight one potentially problematic data retention issue for the current deployment of Streams, saying there is “currently no retention period for patient information on Streams” — meaning there is no process for deleting a patient’s medical history once it reaches a certain age.

“This means the information on Streams currently dates back eight years,” it notes, suggesting the Royal Free should probably set an upper limit on the age of information retained in the system.

While Linklaters largely glosses over the checkered origins of the Streams project, the law firm does make a point of agreeing with the ICO that the original privacy impact assessment for the project “should have been completed in a more timely manner”.

It also describes it as “relatively thin given the scale of the project”.

Giving its response to the audit, health data privacy advocacy group MedConfidential — an early critic of the DeepMind data-sharing arrangement — is roundly unimpressed, writing: “The biggest question raised by the Information Commissioner and the National Data Guardian appears to be missing — instead, the report excludes a ‘historical review of issues arising prior to the date of our appointment’.”

“The report claims the ‘vital interests’ (i.e. remaining alive) of patients is justification to protect against an ‘event [that] might only occur in the future or not occur at all’… The only ‘vital interest’ protected here is Google’s, and its desire to hoard medical records it was told were unlawfully collected. The vital interests of a hypothetical patient are not vital interests of an actual data subject (and the GDPR tests are demonstrably unmet).

“The ICO and NDG asked the Royal Free to justify the collection of 1.6 million patient records, and this legal opinion explicitly provides no answer to that question.”

AI detects movement through walls using wireless signals

You don't need exotic radar, infrared or elaborate mesh networks to spot people through walls — all you need are some easily detectable wireless signals and a dash of AI. Researchers at MIT CSAIL have developed a system (RF-Pose) that uses a neural…