Vice goes to the big screen with Motherboard science documentary

Vice-owned outlet Motherboard's documentary The Most Unknown, about "the biggest questions in science," will debut on Netflix after its theatrical run. The publication sent nine scientists around the world to get answers to big topics like the defini…

Technique to beam HD video with 99 percent less power could sharpen the eyes of smart homes

Everyone seems to be insisting on installing cameras all over their homes these days, which seems incongruous with the ongoing privacy crisis — but that’s a post for another time. Today, we’re talking about enabling those cameras to send high-definition video signals wirelessly without killing their little batteries. A new technique makes beaming video out more than 99 percent more efficient, possibly making batteries unnecessary altogether.

Cameras found in smart homes or wearables need to transmit HD video, but it takes a lot of power to process that video and then transmit the encoded data over Wi-Fi. Small devices leave little room for batteries, and they’ll have to be recharged frequently if they’re constantly streaming. Who’s got time for that?

The idea behind this new system, created by a University of Washington team led by prolific researcher Shyam Gollakota, isn’t fundamentally different from some others that are out there right now. Devices with low data rates, like a digital thermometer or motion sensor, can use something called backscatter to send a low-power signal consisting of a couple of bytes.

Backscatter is a way of sending a signal that requires very little power, because what’s actually transmitting the power is not the device that’s transmitting the data. A signal is sent out from one source, say a router or phone, and another antenna essentially reflects that signal, but modifies it. By having it blink on and off you could indicate 1s and 0s, for instance.
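
To make that concrete, here is a minimal sketch in Python (purely illustrative, not code from the paper) of on-off backscatter: the tag toggles between reflecting and absorbing an ambient carrier to signal bits, and the receiver recovers them by thresholding what bounces back.

```python
import random

SAMPLES_PER_BIT = 8  # how long the tag holds each reflect/absorb state

def ambient_carrier(n_samples):
    """A stand-in for the RF signal already in the air (e.g. from a router)."""
    return [1.0] * n_samples  # constant amplitude keeps the example simple

def backscatter_transmit(bits, carrier):
    """The tag generates no signal of its own; it only toggles its antenna
    between reflecting (bit 1) and absorbing (bit 0) the ambient carrier."""
    return [sample if bits[i // SAMPLES_PER_BIT] else 0.0
            for i, sample in enumerate(carrier)]

def receive(reflected):
    """The receiver averages each bit period and thresholds it."""
    out = []
    for i in range(0, len(reflected), SAMPLES_PER_BIT):
        chunk = reflected[i:i + SAMPLES_PER_BIT]
        out.append(1 if sum(chunk) / len(chunk) > 0.5 else 0)
    return out

message = [random.randint(0, 1) for _ in range(16)]
carrier = ambient_carrier(len(message) * SAMPLES_PER_BIT)
assert receive(backscatter_transmit(message, carrier)) == message
```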

UW’s system connects the camera’s output directly to the antenna, so the brightness of a pixel directly correlates to the length of the signal reflected. A short pulse means a dark pixel, a longer one is lighter, and the longest length indicates white.

Some clever manipulation of the video data by the team reduced the number of pulses necessary to send a full video frame, from sharing some data between pixels to using a “zigzag” scan (left to right, then right to left) pattern. To get color, each pixel needs to have its color channels sent in succession, but this too can be optimized.
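
A rough sketch of the two ideas (mapping pixel brightness to pulse length, and walking the frame in a zigzag order) might look like this in Python; it is purely illustrative, and the pulse durations are made-up numbers rather than values from the paper.

```python
MIN_PULSE_US = 1.0   # assumed duration for a black pixel, in microseconds
MAX_PULSE_US = 10.0  # assumed duration for a white pixel

def pixel_to_pulse(value):
    """Map an 8-bit brightness (0-255) to a reflected-pulse duration:
    short pulse for a dark pixel, longest pulse for white."""
    return MIN_PULSE_US + (value / 255.0) * (MAX_PULSE_US - MIN_PULSE_US)

def zigzag_scan(frame):
    """Visit rows alternately left-to-right and right-to-left, so consecutive
    pixels in the output stream are always spatial neighbors."""
    for y, row in enumerate(frame):
        yield from (row if y % 2 == 0 else reversed(row))

def frame_to_pulses(frame):
    return [pixel_to_pulse(p) for p in zigzag_scan(frame)]

# A tiny 2x3 grayscale "frame" just to show the ordering and the mapping.
frame = [[0, 128, 255],
         [255, 128, 0]]
print(frame_to_pulses(frame))  # second row is emitted right-to-left
```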

Assembly and rendering of the video is accomplished on the receiving end, for example on a phone or monitor, where power is more plentiful.

In the end, a full-color HD signal at 60FPS can be sent with less than a watt of power, and a more modest but still very useful signal — say, 720p at 10FPS — can be sent for under 80 microwatts. That’s a huge reduction in power draw, mainly achieved by eliminating the entire analog-to-digital converter and on-chip compression. At those levels, you can essentially pull all the power you need straight out of the air.
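
For a sense of what that power budget means per frame, here is a quick back-of-the-envelope calculation using only the figures above:

```python
power_w = 80e-6        # 80 microwatts, the reported draw for 720p at 10FPS
frames_per_second = 10
energy_per_frame_j = power_w / frames_per_second
print(f"{energy_per_frame_j * 1e6:.0f} microjoules per frame")  # prints 8
```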

They put together a demonstration device with off-the-shelf components, though without custom chips it won’t reach those microwatt power levels; still, the technique works as described. The prototype helped them determine what type of sensor and chip package would be necessary in a dedicated device.

A frame sent during one of the tests. This transmission was going at about 10FPS.

Of course, it would be a bad idea to just blast video frames into the ether without any encryption; luckily, the way the data is coded and transmitted can easily be modified to be meaningless to an observer. Essentially you’d just add an interfering signal known to both devices before transmission, and the receiver can subtract it.
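
As a toy illustration of that idea (a sketch of the general approach, not the scheme described in the paper), both ends could seed the same pseudorandom generator, add its output to each transmitted symbol and subtract it again on receipt:

```python
import random

def scramble(symbols, shared_seed, max_offset=256):
    """Add a pseudorandom offset (known to both ends) to each symbol."""
    rng = random.Random(shared_seed)
    return [(s + rng.randrange(max_offset)) % max_offset for s in symbols]

def descramble(symbols, shared_seed, max_offset=256):
    """Regenerate the same offsets from the shared seed and subtract them."""
    rng = random.Random(shared_seed)
    return [(s - rng.randrange(max_offset)) % max_offset for s in symbols]

pixels = [0, 64, 128, 255]
masked = scramble(pixels, shared_seed=42)   # looks like noise to an observer
assert descramble(masked, shared_seed=42) == pixels
```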

Video is the first application the team thought of, but there’s no reason their technique for efficient, quick backscatter transmission couldn’t be used for non-video data.

The tech is already licensed to Jeeva Wireless, a startup founded by UW researchers (including Gollakota) a while back that’s already working on commercializing another low-power wireless device. You can read the details about the new system in their paper, presented last week at the Symposium on Networked Systems Design and Implementation.

Genetics testing startup Prenetics buys UK’s DNAFit to move into consumer services

Prenetics, a Hong Kong-based startup that offers genetic testing services for patients, is expanding outside of Asia and into the consumer space after it acquired London-based company DNAFit.

The deal — which a source told TechCrunch is worth $10 million — not only sees Prenetics enter new geographies, but also expand the scope of its services. Prenetics, which includes Chinese e-commerce giant Alibaba among its backers, works directly with insurance firms and physicians who use its testing service for their customers and patients, but DNAFit goes straight to consumers themselves.

Five-year-old DNAFit sells a test that profiles an individual’s DNA to help them to figure out the fitness and nutrition setup that is best suited to them. DNAFit’s kits — which cost up to £249 ($350) and take 10 days for results — are sold online and via employee packages.

The company said it has sold its product to around 100,000 people, with companies including LinkedIn, TalkTalk and Channel 4 among its corporate clients. High-profile backers include Olympic gold medal-winning British athlete Greg Rutherford, who said the results helped him make “clear, informed decisions” on his training regime.

Prenetics has been considering global expansion options for some time, and this acquisition gets its foot in the door in new markets while also moving it into the consumer health business.

“We definitely plan on investing and growing our reach in Europe for the DNAFit business. In addition, Prenetics International will be focused on a B2B with insurers and for corporates,” Prenetics CEO Danny Yeung told TechCrunch via email.

“At the same time, DNAFit is a partner for [fitness company] Helix in the U.S., thus we plan on investing further on customer acquisition and growing our reach in the U.S.,” Yeung added. “We are extremely excited at the potential to bring DNA testing to a global market, making an impact on the lives of many.”

In another tie-up primarily targeted at the U.S., an offer lets 23andMe customers use their existing results and pay $79 for DNAFit’s analysis.

The deal sees DNAFit CEO Avi Lasarow become CEO of Prenetics International, a newly formed business unit, with Yeung as CEO of parent company Prenetics Group. DNAFit itself will continue to run under its existing brand, both companies confirmed.

This marks the first acquisition for Prenetics, which last year closed a $40 million Series B funding round led by Beyond Ventures and Alibaba Hong Kong Entrepreneurs Fund. Yeung told us at the time that a portion of that capital would be reserved for meaningful acquisitions as the startup aims to go beyond its early focus on China, Hong Kong and Southeast Asia. At the time of that funding, which happened in October, Yeung said Prenetics had processed 200,000 DNA samples.

Prenetics started out as ‘Multigene’ in 2009 when it spun out from Hong Kong’s City University. Yeung joined the firm as CEO in 2014, after leaving Groupon following its acquisition of his Hong Kong startup uBuyiBuy, and it has been in startup mode since then. Prenetics has raised over $52 million from investors which, aside from Alibaba, include 500 Startups, Venturra Capital and Chinese insurance giant Ping An.

SpaceX brings NASA’s TESS to space and successfully lands its Falcon 9 rocket

SpaceX has successfully deployed NASA’s new exoplanet-hunting telescope into high Earth orbit. From there, it will get a gravity assist from the moon and enter a wide orbit, beginning its mission. Meanwhile, back on the surface, the Falcon 9’s first stage landed successfully on the drone ship Of Course I Still Love You.

This is the eighth launch this year, and the 24th time SpaceX has landed a Falcon 9 first stage — that is, the part of the rocket that accelerates it out of the atmosphere. Although the plan is eventually to catch the falling payload fairing (the nose cone that protects the satellite on ascent) in a “giant catcher’s mitt,” as Elon Musk once described it, the boat-borne mitt is currently in the Pacific Ocean and this launch was over the Atlantic.

The rocket shortly after landing on Of Course I Still Love You. The ship’s feed cut out when the rocket landed.

This rocket, after being inspected and refurbished, of course, is planned to be reused for the next ISS resupply mission SpaceX is performing, in June. This generation of Falcon 9s will soon be exhausted, though: SpaceX is about to start launching its fifth generation of Falcon 9 (“Block 5”), which has a variety of upgrades meant to extend reusability beyond the two or three flights previous boosters could manage. The first such launch is planned for early next week — look for a separate post on that soon.

A second burn went off nominally and TESS successfully deployed; it’s now up to NASA to adjust the orbit further so it gets the necessary lunar boost. That will take some time, but we can expect data from the satellite to start rolling in soon after, within a few weeks or months.

ReviveMed turns drug discovery into a big data problem and raises $1.5M to solve it

What if there’s a drug that already exists that could treat a disease with no known therapies, but we just haven’t made the connection? Finding that connection by exhaustively analyzing complex biomechanics within the body — with the help of machine learning, naturally — is the goal of ReviveMed, a new biotech startup out of MIT that just raised $1.5 million in seed funding.

Around the turn of the century, genomics was the big thing. Then, as the power to investigate complex biological processes improved, proteomics became the next frontier. We may have moved on again, this time to the yet more complex field of metabolomics, which is where ReviveMed comes in.

Leila Pirhaji, ReviveMed’s founder and CEO, began work on the topic during her time as a postgrad at MIT. The problem she and her colleagues saw was the immense complexity of interactions between proteins, which are encoded in DNA and RNA, and metabolites, a class of biomolecules with even greater variety. Hidden in these innumerable interactions somewhere are clues to how and why biological processes are going wrong, and perhaps how to address that.

“The interaction of proteins and metabolites tells us exactly what’s happening in the disease,” Pirhaji told me. “But there are over 40,000 metabolites in the human body. DNA and RNA are easy to measure, but metabolites have tremendous diversity in mass. Each one requires its own experiment to detect.”

As you can imagine, the time and money that would be involved in such an extensive battery of testing have made metabolomics difficult to study. But what Pirhaji and her collaborators at MIT decided was that it was similar enough to other “big noisy data set” problems that the nascent approach of machine learning could be effective.

“Instead of doing experiments,” Pirhaji said, “why don’t we use AI and our database?” ReviveMed, which she founded along with data scientist Demarcus Briers and biotech veteran Richard Howell, is the embodiment of that proposal.

Pharmaceutical companies and research organizations already have a mess of metabolite masses, known interactions, suspected but unproven effects, and disease states and outcomes. Plenty of experimentation is done, but the results are frustratingly vague owing to the inability to be sure about the metabolites themselves or what they’re doing. Most experimentation has resulted in partial understanding of a small proportion of known metabolites.

That data isn’t just a few drives’ worth of spreadsheets and charts, either. Not only does the data comprise drug-protein, protein-protein, protein-metabolite, and metabolite-disease interactions, but it also includes data that’s essentially never been analyzed: “We’re looking at metabolites that no one has looked at before.”

The information is sitting in an archive somewhere, gathering dust. “We actually have to go physically pick up the mass spectrometry files,” Pirhaji said. (“They’re huge,” she added.)

Once they got the data all in one place (Pirhaji described it as “a big hairball with millions of interactions,” in a presentation in March), they developed a model to evaluate and characterize everything in it, producing the kind of insights machine learning systems are known for.

The “hairball.”
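
To give a flavor of what wrangling that hairball can involve, here is a deliberately tiny sketch (not ReviveMed’s actual model, and the node names are made up) that stores drug-protein, protein-metabolite and metabolite-disease interactions as a graph and walks it to surface candidate drug-disease links:

```python
from collections import defaultdict

# Toy interaction data: (source, target) edges. The real thing would be
# millions of weighted, noisy edges from experiments and literature.
edges = [
    ("drug:A", "protein:P1"),
    ("protein:P1", "metabolite:M1"),
    ("metabolite:M1", "disease:fatty_liver"),
    ("drug:B", "protein:P2"),
    ("protein:P2", "metabolite:M2"),
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)

def candidate_diseases(drug, max_hops=3):
    """Breadth-first walk from a drug node, collecting any disease nodes
    reachable within a few hops through proteins and metabolites."""
    frontier, seen, hits = {drug}, {drug}, set()
    for _ in range(max_hops):
        frontier = {n for node in frontier for n in graph[node]} - seen
        seen |= frontier
        hits |= {n for n in frontier if n.startswith("disease:")}
    return hits

print(candidate_diseases("drug:A"))  # {'disease:fatty_liver'}
```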

The results were more than a little promising. In a trial run, they identified new disease mechanisms for Huntington’s, new therapeutic targets (i.e. biomolecules or processes that could be affected by drugs), and existing drugs that may affect those targets.

The secret sauce, or one ingredient anyway, is the ability to distinguish metabolites with similar masses (sugars or fats with different molecular configurations but the same number and type of atoms, for instance) and correlate those metabolites with both drug and protein effects and disease outcomes. The metabolome fills in the missing piece between disease and drug without any tests establishing it directly.

At that point the drug will, of course, require real-world testing. But although ReviveMed does do some verification on its own, this is when the company would hand back the results to its clients, pharmaceutical companies, which then take the drug and its new effect to market.

In effect, the business model is offering low-cost, high-reward R&D as a service to pharma, which can hand over reams of data it has no particular use for, potentially resulting in practical applications for drugs that already have millions invested in their testing and manufacture. What wouldn’t Pfizer pay to determine that Robitussin also prevents Alzheimer’s? That knowledge is worth billions, and ReviveMed is offering a new, powerful way to check for such things with little in the way of new investment.

This is the kind of web of molecules and effects that the system sorts through.

ReviveMed, for its part, is being a bit more choosy than that — its focus is on untreatable diseases with a good chance that existing drugs affect them. The first target is fatty liver disease, which affects millions, causing great suffering and cost. And something like Huntington’s, in which genetic triggers and disease effects are known but not the intermediate mechanisms, is also a good candidate for which the company’s models can fill the gap.

The company isn’t reliant on Big Pharma for its data, though. The original training data was all public (though “very fragmented”) and it’s that on which the system is primarily based. “We have a patent on our process for getting this metabolome data and translating it into insights,” Pirhaji notes, although the work they did at MIT is available for anyone to access (it was published in Nature Methods, in case you were wondering).

But compared with genomics and proteomics, not much metabolomic data is public — so although ReviveMed can augment its database with data from clients, its researchers are also conducting hundreds of human tests on their own to improve the model.

The business model is a bit complicated as well — “It’s very case by case,” Pirhaji told me. An arrangement with a research hospital that wants to collaborate, share data and publish any results openly or as shared intellectual property, for instance, wouldn’t involve much cash changing hands. But a top-5 pharma company — two of which ReviveMed already has dealings with — that wants to keep all the results for itself and has limitless coffers would pay a higher price.

I’m oversimplifying, but you get the idea. In many cases, however, ReviveMed will aim to be a part of any intellectual property it contributes to. And of course the data provided by the clients goes into the model and improves it, which is its own form of payment. So you can see that negotiations might get complicated. But the company already has several revenue-generating pilots in place, so even at this early stage those complications are far from insurmountable.

Lastly there’s the matter of the seed round: $1.5 million, led by Rivas Capital along with TechU, Team Builder Ventures, and WorldQuant. This should allow them to hire the engineers and data scientists they need and expand in other practical ways. Placing well in a recent Google machine learning competition got them $200K worth of cloud computing credit, so that should keep them crunching for a while.

ReviveMed’s approach is a fundamentally modern one that wouldn’t be possible just a few years ago, such is the scale of the data involved. It may prove to be a powerful example of data-driven biotech as lucrative as it is beneficial. Even the early proof-of-concept and pilot work may provide help to millions or save lives — it’s not every day a company is founded that can say that.

Google’s latest do-it-yourself AI kits include everything you need

Google's AIY kits have been helpful for do-it-yourselfers who want to explore AI concepts like computer vision, but they weren't really meant for newcomers when you had to supply your own Raspberry Pi and other must-haves. It'll be much easier to ge…

Want to fool a computer vision system? Just tweak some colors

Research into machine learning and the interesting AI models created as a consequence are popular topics these days. But there’s a sort of shadow world of scientists working to undermine these systems — not to show they’re worthless but to shore up their weaknesses. A new paper demonstrates this by showing how vulnerable image recognition models are to the simplest color manipulations of the pictures they’re meant to identify.

It’s not some deep indictment of computer vision — techniques to “beat” image recognition systems might just as easily be characterized as situations in which they perform particularly poorly. Sometimes this is something surprisingly simple: rotating an image, for example, or adding a crazy sticker. Unless a system has been trained specifically on a given manipulation or has orders to check common variations like that, it’s pretty much just going to fail.

In this case it’s research from the University of Washington led by grad student Hossein Hosseini. Their “adversarial” imagery was similarly simple: switch up the colors.

Probably many of you have tried something similar to this when fiddling around in an image manipulation program: by changing the “hue” and “saturation” values on a picture, you can make someone have green skin, a banana appear blue and so on. That’s exactly what the researchers did: twiddled the knobs so a dog looked a bit yellow, a deer looked purplish, etc.

The original images are at left; color-shifted versions and the systems’ best guesses at right.

Critically, however, the “value” of the pixels, meaning how light or dark they are, wasn’t changed, so the images still look like what they are — just in weird colors.
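
For anyone who wants to try the same trick, here is a minimal sketch of that kind of hue/saturation shift in Python, assuming the image is a nested list of (r, g, b) tuples; it is the standard HSV manipulation, not the researchers’ code:

```python
import colorsys

def shift_hue(pixel, hue_offset=0.3, saturation_scale=1.0):
    """Rotate hue and rescale saturation while leaving 'value' (how light or
    dark the pixel is) untouched, so shapes and shading are preserved."""
    r, g, b = (c / 255.0 for c in pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + hue_offset) % 1.0
    s = min(1.0, s * saturation_scale)
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(int(round(c * 255)) for c in (r, g, b))

def color_shift_image(image, hue_offset=0.3):
    """image: nested list of (r, g, b) tuples."""
    return [[shift_hue(px, hue_offset) for px in row] for row in image]

# A brownish pixel comes out distinctly green, brightness unchanged.
print(shift_hue((150, 100, 60), hue_offset=0.3))
```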

But while a cat looks like a cat no matter if it’s grey or pink to us, one can’t really say the same for a deep neural network. The accuracy of the model they tested was reduced by 90 percent on sets of color-tweaked images that it would normally identify easily. Its best guesses are pretty random, as you can see in the figure at right. Changing the colors totally changes the system’s guess.

The team tested several models and they all broke down on the color-shifted set, so it wasn’t just a consequence of this specific system.

It’s not too hard to fix — in this case, all you really need to do is add some labeled, color-shifted images into the training data so the system is exposed to them beforehand. This addition brought success rates back up to reasonable (if still fairly poor) levels.
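
In training-pipeline terms, that fix amounts to adding randomly color-shifted copies of each labeled image. A rough sketch (again an illustration, not the paper’s exact procedure), where shift_fn would be something like the color_shift_image function sketched above:

```python
import random

def augment_with_color_shifts(dataset, shift_fn, copies_per_image=3):
    """dataset: list of (image, label) pairs. shift_fn: e.g. the
    color_shift_image sketch above. Returns the original pairs plus
    color-shifted duplicates so the model sees 'weird' colors in training."""
    augmented = list(dataset)
    for image, label in dataset:
        for _ in range(copies_per_image):
            hue_offset = random.uniform(0.0, 1.0)  # random hue rotation
            augmented.append((shift_fn(image, hue_offset), label))
    return augmented
```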

But the point isn’t that computer vision systems are fundamentally bad at color or something. It’s that there are lots of ways of subtly or not-so-subtly manipulating an image or video that will devastate its accuracy or subvert it.

“Deep networks are very good at learning (or better memorizing) the distribution of training data,” wrote Hosseini in an email to TechCrunch. “They, however, hardly generalize beyond that. So, even if models are trained with augmented data, it’s likely that we can come up with a new type of adversarial images that can fool the model.”

A model trained to catch color variations might still be vulnerable to attention-based adversarial images and vice versa. The way these systems are created and encoded right now simply isn’t robust enough to prevent such attacks. But by cataloguing them and devising improvements that protect against some but not all, we can advance the state of the art.

“I think we need to find a way for the model to learn the concepts, such as being invariant to color or rotation,” Hosseini suggested. “That can save the algorithm a lot of training data and is more similar to how humans learn.”

You can read the full pre-print paper on arXiv (PDF).

NASA’s planet-hunting TESS telescope launches Monday aboard a SpaceX rocket

Some of the most exciting space news of the past few years has been about Earth-like exoplanets that could one day (or perhaps already do) support life. TESS, a space telescope set to launch Monday aboard a SpaceX Falcon 9 rocket, will scan the sky for exoplanets faster and better than any existing platforms, expanding our knowledge of the universe and perhaps finding a friendly neighborhood to move to.

The Transiting Exoplanet Survey Satellite has been in the works for years and could be considered a sort of direct successor to Kepler, the incredibly fruitful mission that has located thousands of exoplanets over nearly a decade.

But if Kepler was a telephoto aimed at dim targets far in the distance, TESS is an ultra-wide-angle lens that will watch nearly the entire visible sky.

They both work on the same principle, which is really quite simple: when a planet (or anything else) passes between us and a star (a “transit”), the brightness of that star temporarily dims. By tracking how much dimmer and for how long over multiple transits, scientists can determine the size, speed, and other characteristics of the body that passed by.
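
The “how much dimmer” part follows a simple rule of thumb: the fractional dip in brightness is roughly the square of the planet-to-star radius ratio. A quick worked example in Python, using standard radii rather than figures from the mission:

```python
R_EARTH_KM = 6_371
R_SUN_KM = 695_700

# Fractional dimming during a transit is roughly (planet radius / star radius)^2
depth = (R_EARTH_KM / R_SUN_KM) ** 2
print(f"An Earth-size planet crossing a Sun-like star dims it by "
      f"~{depth * 1e6:.0f} parts per million")  # roughly 84 ppm
```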

It may seem like looking for a needle in a haystack, watching the sky hoping a planet will pass by at just the right moment. But when you think about the sheer number of stars in the sky — and by the way, planets outnumber them — it’s not so crazy. As evidence of this fact, in 2016 Kepler confirmed the presence of 1,284 new planets just in the tiny patch of sky it was looking at.

TESS will watch for the same thing with a much, much broader perspective.

Its camera array has four 16.4-megapixel imaging units, each covering a square of sky 24 degrees across, making for a tall “segment” of the sky like a long Tetris block. The satellite will spend two full 13.7-day orbits observing a segment, then move on to the next one. There are 13 such segments in the sky’s northern hemisphere and 13 in the southern; by the time TESS has focused on them all, it will have checked 85 percent of the visible sky.

The little yellow patches are Kepler’s various fields of view.

It will be focusing on the brightest stars in our neighborhood: less than 300 light-years away and 30 to 100 times as bright as the ones Kepler was looking at. The more light, the more data, and often the less noise — researchers will be able to tell more about stars that are observed, and if necessary dedicate other ground or space resources towards observing them.

“TESS is opening a door for a whole new kind of study,” said Stephen Rinehart, one of the TESS project scientists, in a NASA release. “We’re going to be able to study individual planets and start talking about the differences between planets. The targets TESS finds are going to be fantastic subjects for research for decades to come. It’s the beginning of a new era of exoplanet research.”

TESS being checked pre-launch; engineers included for scale.

Of course, with such close and continuous scrutiny of hundreds of thousands of stars, other interesting behaviors may be observed and passed on to the right mission or observatory. Stars flaring or going supernova, bursts of interesting radiation, and other events could very well occur.

In fact, an overlapping area of observation above each of Earth’s poles will be seen for a whole year straight, increasing the likelihood of catching some rare phenomenon.

But the best part of all may be that many of the stars observed will be visible to the naked eye. As Rinehart puts it: “The cool thing about TESS is that one of these days I’ll be able to go out in the country with my daughter and point to a star and say ‘there’s a planet around that one.’”

The launch on Monday will take place at Space Launch Complex 40 in Cape Canaveral. Once it safely enters space, the craft will receive a timely gravitational assist from the moon, which will insert it into a highly eccentric orbit that brings it close to Earth about every two weeks.

“The moon and the satellite are in a sort of dance,” said Joel Villasenor, an MIT researcher and instrument scientist for TESS. “The moon pulls the satellite on one side, and by the time TESS completes one orbit, the moon is on the other side tugging in the opposite direction. The overall effect is the moon’s pull is evened out, and it’s a very stable configuration over many years. Nobody’s done this before, and I suspect other programs will try to use this orbit later on.”

This unique orbit maximizes the sky that TESS can see, minimizes the effect of the moon’s pull, and regularly brings it close enough to send data home for a short period. So we can expect a sort of biweekly drip-drip of exoplanet news for quite some time.

SpaceX is the launch partner, and the Falcon 9 rocket on which it will ride into orbit has already been test fired. TESS is packaged up and ready to go, as you see at right. Currently the launch is planned for a 30-second window at 6:32 Florida time; if for some reason they miss that window, they’ll have to wait until the moon comes round again — a March 20 launch was already canceled.

You’ll be able to watch the launch live, of course; I’ll add a link as soon as it’s available.