Official near-earth object plan will look into nuking asteroids and other ‘planetary defense missions’

Space is a big place, and mostly empty — but there’s no shortage of objects which, should they float our direction, could end life as we know it. A new national plan for detecting and handling such objects was proposed today, and it includes the possibility of nuclear strikes on the incoming asteroids and other “planetary defense missions.”

The plan, revealed and discussed this morning, is far from a joke — it’s just that the scales these threats operate at necessarily elevate the discourse to Hollywood levels.

It’s not so much “let’s do this” as “let’s figure out what we can do.” As such it has five major goals.

First, improve our ability to detect and track near-earth objects, or NEOs. We’ve been doing it for years, and projects like NEOWISE have cataloged an incredible number of these objects, ranging in size from the kind that will safely burn up in the atmosphere, to those that might cause serious damage (like the Chelyabinsk one), to proper planet-killers.

But we often hear about NEOs being detected for the first time on near-collision courses just days before approach, or even afterwards. So the report recommends looking at how existing and new programs can be utilized to better catch these objects before they become a problem.

Second, improve our knowledge of what these objects can do and have done by studying and modeling them. Not just so that we know more in general, but so that in the case of a serious incoming object we know that our predictions are sound.

Third, and this is where things go a little off the rails, we need to assess and develop NEO “deflection and disruption” technologies. After all, if a planet-killer is coming our direction, we should be able to do something, right? And perhaps it shouldn’t be the very first time we’ve tried it.

The list of proposed methods sounds like it was sourced from science fiction:

This assessment should include the most mature in-space concepts — kinetic impactors, nuclear devices, and gravity tractors for deflection, and nuclear devices for disruption — as well as less mature NEO impact prevention methods.

I wasn’t aware that space nukes and gravity tractors were our most mature concepts for this kind of thing! But again, the fact is that a city-sized object approaching at tens of kilometers per second is an outlandish problem that demands outlandish solutions.

And I don’t know about you, but I’d rather we tried a space nuke once or twice on a dry run rather than do it live while Armageddon looms.

At first these assessments will be purely theoretical, of course. But in the medium and long term NASA and others are tasked with designing actual “planetary defense missions”:

This action includes preliminary designs for a gravity tractor NEO deflection mission campaign, and for a kinetic impactor mission campaign in which the spacecraft is capable of either functioning as a kinetic impactor or delivering a nuclear explosive device. For the latter case, the spacecraft would contain all systems necessary to carry and safely employ a nuclear explosive device, but would carry a mass simulator with appropriate interfaces in place of an actual nuclear device. Designs should include reconnaissance spacecraft and methods to measure the achieved deflection.

Actual flight tests “would not incorporate an actual nuclear device, or involve any nuclear explosive testing.” Not yet, anyway. It’d just be a dry run, which serves its own purposes: “Thorough flight testing of a deflection/disruption system prior to an actual planetary defense mission would substantially decrease the risk of mission failure.”

Fourth, the report says that we need to collaborate on the world stage, since of course NEO strikes don’t exactly discriminate by country. So in the first place we need to strengthen our existing partnerships with countries sharing NEO-related data or studies along these lines. We should all be looking into how a potential impact could affect our country specifically, of course, since we’re the ones here — but that data should be shared and analyzed globally.

Last, “Strengthen and Routinely Exercise NEO Impact Emergency Procedures and Action Protocols.”

In other words, asteroid drills.

But it isn’t just stuff like “here’s where Boulder residents should evacuate to in case of impact.” As the document points out, NEO impacts are a unique sort of emergency event.

Response and mitigation actions cannot be made routine to the same degree that they are for other natural disasters such as hurricanes. Rather, establishing and exercising thresholds and protocols will aid agencies in preparing options and recommending courses of action.

The report recommends exploring some realistic scenarios based on objects or situations we know to exist and seeing how they might play out — who will need to get involved? How will data be shared? Who is in charge of coordinating the agencies if it’s a domestic impact versus a foreign one? (See Shin Godzilla for a surprisingly good example of bureaucratic paralysis in the face of an unknown threat.)

It’s strange to think that we’re really contemplating these issues, but it’s a lot better than sitting on our hands waiting for the Big One to hit. You can read the rest of the recommendations here.

Truepic raises $8M to expose Deepfakes, verify photos for Reddit

How can you be sure an image wasn’t Photoshopped? Make sure it was shot with Truepic. This startup makes a camera feature that shoots photos and adds a watermark URL leading to a copy of the image it saves, so viewers can compare them to ensure the version they’re seeing hasn’t been altered.

Now Truepic’s technology is getting its most important deployment yet as the way Reddit will verify that Ask Me Anything Q&As are being conducted live by the actual person advertised — oftentimes a celebrity.

But beyond its utility for verifying AMAs, dating profiles and peer-to-peer e-commerce listings, Truepic is tackling its biggest challenge yet: identifying artificial intelligence-generated Deepfakes. These are where AI convincingly replaces the face of a person in a video with someone else’s. Right now the technology is being used to create fake pornography combining an adult film star’s body with an innocent celebrity’s face without their consent. But the big concern is that it could be used to impersonate politicians and make them appear to say or do things they haven’t.

The need for ways to weed out Deepfakes has attracted a new $8 million round for Truepic. The cash comes from nontraditional startup investors, including Dowling Capital Partners, former Thomson Financial (which became Reuters) CEO Jeffrey Parker, Harvard Business School professor William Sahlman and more. The Series A brings Truepic to $10.5 million in funding.

“We started Truepic long before manipulated images impacted democratic elections across the globe, digital evidence of atrocities and human rights abuses were regularly undermined, or online identities were fabricated to advance political agendas — but now we fully recognize its impact on society,” says Truepic founder and COO Craig Stack. “The world needs the Truepic technology to help right the wrongs that have been created by the abuse of digital imagery.”

Here’s how Truepic works:

  1. Snap a photo in Truepic’s iOS or Android app, or in a partner app that has paid to embed Truepic’s SDK
  2. Truepic verifies the image hasn’t been altered already, and watermarks it with a time stamp, geocode, URL and other metadata
  3. Truepic’s secure servers store a version of the photo, assigning it a six-digit code and a URL, plus a spot on an immutable blockchain
  4. Users can post their Truepic in apps to prove they’re not catfishing someone on a dating site, selling something broken on an e-commerce site, or elsewhere
  5. Viewers can visit the URL watermarked onto the photo to compare it to the vault-saved version to ensure it hasn’t been modified after the fact
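Truepic hasn’t published its internals, but the comparison in steps 3 and 5 above can be sketched with a plain cryptographic hash: if the digest of the copy you’re viewing matches the digest of the vault-stored original, the bytes haven’t been altered. Everything below, function names included, is a hypothetical illustration of that idea, not Truepic’s actual implementation.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return the SHA-256 digest of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify(viewed_copy: bytes, vault_copy: bytes) -> bool:
    """A photo checks out only if its digest matches the vault-stored original."""
    return fingerprint(viewed_copy) == fingerprint(vault_copy)

original = b"...raw JPEG bytes as captured..."
print(verify(original, original))            # → True: untouched copy matches
print(verify(original + b"\x00", original))  # → False: any edit breaks the match
```

Any change to even one byte of the image produces a completely different digest, which is what makes the vault comparison meaningful.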

For example, Reddit’s own Wiki recommends that AMA creators use the Truepic app to snap a photo of them holding a handwritten sign with their name and the date on it. “Truepic’s technology allows us to quickly and safely verify the identity and claims for some of our most eccentric guests,” says Reddit AMA moderator and Lynch LLP intellectual property attorney Brian Lynch. “Truepic is a perfect tool for the ever-evolving geography of privacy laws and social constructs across the internet.”

The abuses of image manipulation are evolving, too. Deepfakes could embarrass celebrities… or start a war. “We will be investing in offline image and video analysis and already have identified some subtle forensic techniques we can use to detect forgeries like deepfakes,” Truepic CEO Jeff McGregor tells me. “In particular, one can analyze hair, ears, reflectivity of eyes and other details that are nearly impossible to render true-to-life across the thousands of frames of a typical video. Identifying even a few frames that are fake is enough to declare a video fake.”
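McGregor’s “even a few fake frames is enough” rule amounts to thresholding per-frame forgery scores. A hypothetical sketch of that aggregation step (the per-frame detector itself is the hard part, and is only a stand-in here):

```python
def video_is_fake(frame_scores, threshold=0.9, min_fake_frames=3):
    """frame_scores: per-frame forgery probabilities from some upstream detector."""
    suspicious = sum(1 for s in frame_scores if s >= threshold)
    return suspicious >= min_fake_frames

# Three frames score above the threshold, so the whole video is flagged.
print(video_is_fake([0.1, 0.95, 0.2, 0.97, 0.92, 0.3]))  # → True
```

The asymmetry is the point: a forger has to make thousands of frames consistent, while a detector only has to catch a handful of slips.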

This will always be a cat and mouse game, but from newsrooms to video platforms, Truepic’s technology could keep content creators honest. The startup has also begun partnering with NGOs like the Syrian American Medical Society to help it deliver verified documentation of atrocities in the country’s conflict zone. The Human Rights Foundation also trained humanitarian leaders on how to use Truepic at the 2018 Freedom Forum in Oslo.

Throwing shade at Facebook, McGregor concludes that “The internet has quickly become a dumpster fire of disinformation. Fraudsters have taken full advantage of unsuspecting consumers and social platforms facilitate the swift spread of false narratives, leaving over 3.2 billion people on the internet to make self-determinations over what’s trustworthy vs. fake online… we intend to fix that by bringing a layer of trust back to the internet.”

What’s under those clothes? This system tracks body shapes in real time

With augmented reality coming in hot and depth tracking cameras due to arrive on flagship phones, the time is right to improve how computers track the motions of people they see — even if that means virtually stripping them of their clothes. A new computer vision system that does just that may sound a little creepy, but it definitely has its uses.

The basic problem is that if you’re going to capture a human being in motion, say for a movie or for an augmented reality game, there’s a frustrating vagueness to them caused by clothes. Why do you think motion capture actors have to wear those skintight suits? Because their JNCO jeans make it hard for the system to tell exactly where their legs are. Leave them in the trailer.

Same for anyone wearing a dress, a backpack, a jacket — pretty much anything other than the bare minimum will interfere with the computer getting a good idea of how your body is positioned.

The multi-institutional project (PDF), due to be presented at CVPR in Salt Lake City, combines depth data with smart assumptions about how a body is shaped and what it can do. The result is a sort of X-ray vision, revealing the shape and position of a person’s body underneath their clothes, that works in real time even during quick movements like dancing.

The paper builds on two previous methods, DynamicFusion and BodyFusion. The first uses single-camera depth data to estimate a body’s pose, but doesn’t work well with quick movements or occlusion; the second uses a skeleton to estimate pose but similarly loses track during fast motion. The researchers combined the two approaches into “DoubleFusion,” essentially creating a plausible skeleton from the depth data and then sort of shrink-wrapping it with skin at an appropriate distance from the core.

In the paper’s figures, depth data from the camera is combined with some basic reference imagery of the person to produce a skeleton and track the joints and extremities of the body. A comparison figure shows the results of DynamicFusion alone (b), BodyFusion alone (c) and the combined method (d).

The results are much better than either method alone, seemingly producing excellent body models from a variety of poses and outfits.

Hoodies, headphones, baggy clothes, nothing gets in the way of the all-seeing eye of DoubleFusion.

One shortcoming, however, is that it tends to overestimate a person’s body size if they’re wearing a lot of clothes — there’s no easy way for it to tell whether someone is broad or just wearing a chunky sweater. And it doesn’t work well when the person interacts with a separate object, like a table or game controller — it would likely try to interpret those as weird extensions of limbs. Handling these exceptions is planned for future work.

The paper’s first author is Tao Yu of Tsinghua University in China, but researchers from Beihang University, Google, USC, and the Max Planck Institute were also involved.

“We believe the robustness and accuracy of our approach will enable many applications, especially in AR/VR, gaming, entertainment and even virtual try-on as we also reconstruct the underlying body shape,” write the authors in the paper’s conclusion. “For the first time, with DoubleFusion, users can easily digitize themselves.”

There’s no use denying that there are lots of interesting applications of this technology. But there’s also no use denying that this technology is basically X-ray Spex.

Facebook’s new AI research is a real eye-opener

There are plenty of ways to manipulate photos to make you look better, remove red eye or lens flare, and so on. But so far the blink has proven a tenacious opponent of good snapshots. That may change with research from Facebook that replaces closed eyes with open ones in a remarkably convincing manner.

It’s far from the only example of intelligent “in-painting,” as the technique is called when a program fills in a space with what it thinks belongs there. Adobe in particular has made good use of it with its “Content-Aware Fill,” allowing users to seamlessly replace undesired features, for example a protruding branch or a cloud, with a pretty good guess at what would be there if it weren’t.

But some features are beyond the tools’ capacity to replace, one of which is eyes. Their detailed and highly variable nature makes it particularly difficult for a system to change or create them realistically.

Facebook, which probably has more pictures of people blinking than any other entity in history, decided to take a crack at this problem.

It does so with a Generative Adversarial Network, essentially a pair of machine learning systems in which one tries to fool the other into thinking its creations are real. In a GAN, one part of the system learns to recognize, say, faces, and another part repeatedly creates images that, based on feedback from the recognition part, gradually grow in realism.
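The adversarial loop can be sketched in toy form. The snippet below is my own illustrative simplification, not Facebook’s model: the “generator” is just two numbers mapping Gaussian noise toward a data distribution centered at 4.0, the “discriminator” is a logistic scorer, and each is nudged by hand-computed gradients of the standard GAN objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def real_batch(n):
    # "Real" data: samples from N(4, 1)
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = w*z + b; discriminator D(x) = sigmoid(a*x + c)
w, b = 1.0, 0.0
a, c = 0.1, 0.0
lr, n = 0.05, 64

for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    g = w * z + b
    s_real, s_fake = sigmoid(a * x + c), sigmoid(a * g + c)
    a += lr * (np.mean((1 - s_real) * x) - np.mean(s_fake * g))
    c += lr * (np.mean(1 - s_real) - np.mean(s_fake))

    # Generator update (non-saturating): push D(fake) toward 1.
    z = rng.normal(0.0, 1.0, n)
    g = w * z + b
    s_fake = sigmoid(a * g + c)
    w += lr * np.mean((1 - s_fake) * a * z)
    b += lr * np.mean((1 - s_fake) * a)

samples = w * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ≈ {samples.mean():.2f} (data mean is 4.0)")
```

Real systems replace both sides with deep networks and feed the generator images rather than scalars, but the back-and-forth pressure between the two parts is the same.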

From left to right: “Exemplar” images, source images, Photoshop’s eye-opening algorithm, and Facebook’s method.

In this case the network is trained to both recognize and replicate convincing open eyes. This could be done already, but as you can see in the examples at right, existing methods left something to be desired. They seem to paste in the eyes of the people without much consideration for consistency with the rest of the image.

Machines are naive that way: they have no intuitive understanding that opening one’s eyes does not also change the color of the skin around them. (For that matter, they have no intuitive understanding of eyes, color, or anything at all.)

What Facebook’s researchers did was to include “exemplar” data showing the target person with their eyes open, from which the GAN learns not just what eyes should go on the person, but how the eyes of this particular person are shaped, colored, and so on.

The results are quite realistic: there’s no color mismatch or obvious stitching because the recognition part of the network knows that that’s not how the person looks.

In testing, people mistook the fake eyes-opened photos for real ones, or said they couldn’t be sure which was which, more than half the time. And unless I knew a photo was definitely tampered with, I probably wouldn’t notice if I was scrolling past it in my newsfeed. Gandhi looks a little weird, though.

It still fails in some situations, creating weird artifacts if a person’s eye is partially covered by a lock of hair, or sometimes failing to recreate the color correctly. But those are fixable problems.

You can imagine the usefulness of an automatic eye-opening utility on Facebook that checks a person’s other photos and uses them as reference to replace a blink in the latest one. It would be a little creepy, but that’s pretty standard for Facebook, and at least it might save a group photo or two.

Elizabeth Holmes reportedly steps down at Theranos after criminal indictment

Elizabeth Holmes has left her role as CEO of Theranos and has been charged with wire fraud, CNBC and others report. The company’s former president, Ramesh “Sunny” Balwani, was also indicted today by a grand jury.

These criminal charges are separate from the civil ones filed in March by the SEC and already settled. There are 11 charges; two are conspiracy to commit wire fraud (against investors, and against doctors and patients) and the remaining nine are actual wire fraud, with amounts ranging from the cost of a lab test to $100 million.

Theranos’s general counsel, David Taylor, has been appointed CEO. What the position actually entails at the crumbling enterprise is unclear. Holmes, meanwhile, remains chairman of the board.

The FBI Special Agent in Charge of the case against Theranos, John Bennett, said the company engaged in “a corporate conspiracy to defraud financial investors,” and “misled doctors and patients about the reliability of medical tests that endangered health and lives.”

This story is developing. I’ve asked Theranos for comment and will update if I hear back; indeed I’m not even sure anyone is there to respond.