The future of AI relies on a code of ethics

Facebook has recently come under intense scrutiny for sharing the data of millions of users without their knowledge. We’ve also learned that Facebook is using AI to predict users’ future behavior and selling that data to advertisers. Not surprisingly, Facebook’s business model and its handling of users’ data have sparked a long-awaited conversation — and controversy — about data privacy. These revelations will undoubtedly force the company to evolve its data-sharing and protection strategy and policy.

More importantly, it’s a call to action: We need a code of ethics.

As the AI revolution continues to accelerate, new technology is being developed to solve key problems faced by consumers, businesses and the world at large. It is the next stage of evolution for countless industries, from security and enterprise to retail and healthcare. I believe that in the near future, almost all new technology will incorporate some form of AI or machine learning, enabling humans to interact with data and devices in ways we can’t yet imagine.

Moving forward, our reliance on AI will deepen, inevitably causing many ethical issues to arise as humans turn over to algorithms their cars, homes and businesses. These issues and their consequences will not discriminate, and the impact will be far-reaching — affecting everyone, including public citizens, small businesses utilizing AI or entrepreneurs developing the latest tech. No one will be left untouched. I am aware of a few existing initiatives focused on more research, best practices and collaboration; however, it’s clear that there’s much more work to be done. 

For the future of AI to become as responsible as possible, we’ll need to answer some tough ethical questions.

Researchers, entrepreneurs and global organizations must lay the groundwork for a code of AI ethics to guide us through these upcoming breakthroughs and inevitable dilemmas. I should clarify that this won’t be a single code of ethics — each company and industry will have to come up with their own unique guidelines.

I do not have the answers to these questions right now, but my goal is to bring more awareness to this topic, along with simple common sense, and to work toward a solution. Here are some of the issues related to AI and automation that keep me up at night.

The ethics of driverless cars

With the invention of the car came the invention of the car accident. Similarly, an AI-augmented car will bring with it ethical and business implications that we must be prepared to face. Researchers and programmers will have to ask themselves what safety and mobility trade-offs are inherent in autonomous vehicles.

Ethical challenges will unfold as algorithms are developed that shape how humans and autonomous vehicles interact. Should these algorithms be transparent? For example, will a car rear-end an abruptly stopped car, or swerve and hit a dog on the side of the street? Key decisions will be made in split seconds by a fusion processor running AI and connected to a car’s vast array of sensors. Will entrepreneurs and small businesses be kept in the dark while these algorithms dominate the market?

Driverless cars will also transform the way consumers behave. Companies will need to anticipate this behavior and offer solutions to fill those gaps. Now is the time to start predicting how this technology will change consumer needs and what products and services can be created to meet them.

The battle against fake news

As our news media and social platforms become increasingly AI-driven, businesses from startups to global powerhouses must be aware of their ethical implications and choose wisely when working this technology into their products.

We’re already seeing AI being used to create and defend against political propaganda and fake news. Meanwhile, dark money has been used for social media ads that can target incredibly specific populations in an attempt to influence public opinion or even political elections. What happens when we can no longer trust our news sources and social media feeds?

AI will continue to give algorithms significant influence over what we see and read in our daily lives. We have to ask ourselves how much trust we can put in the systems that we’re creating and how much power we can give them. I think it’s up to companies like Facebook, Google and Twitter — and future platforms — to put safeguards in place to prevent them from being misused. We need the equivalent of Underwriters Laboratories (UL) for news!

The future of the automated workplace

Companies large and small must begin preparing for the future of work in the age of automation. Automation will replace some labor and enhance other jobs. Many workers will be empowered with these new tools, enabling them to work more quickly and efficiently. However, many companies will have to account for the jobs lost to automation.

Businesses should begin thinking about what labor may soon be automated and how their workforce can be utilized in other areas. A large portion of the workforce will have to be trained for new jobs created by automation, in what is becoming commonly referred to as collaborative automation. The challenge will be deciding who is responsible for retraining and redistributing employees whose jobs have been automated or augmented: the government, employers or automation companies? In the end, these sectors will need to work together as automation changes the landscape of work.

No one will be left untouched.

It’s true that AI is the next stage of tech evolution, and that it’s everywhere. It has become portable, accessible and economical. We have now, finally, reached the AI tipping point. But that point is on a precarious edge, see-sawing somewhere between an AI dreamland and an AI nightmare.

In order to surpass the AI hype and take advantage of its transformative powers, it’s essential that we get AI right, starting with the ethics. As entrepreneurs rush to develop the latest AI tech or use it to solve key business problems, each has a responsibility to consider the ethics of this technology. Researchers, governments and businesses must cooperatively develop ethical guidelines that help to ensure a responsible use of AI to the benefit of all.

From driverless cars to media platforms to the workplace, AI is going to have a significant impact on how we live our lives. But as AI thought leaders and experts, we shouldn’t just deliver the technology — we need to closely monitor it and ask the right questions as the industry evolves.

There has never been a more exciting time to be an entrepreneur than now, amid the rise of AI, but there’s a lot of work to be done now and in the future to ensure we’re using the technology responsibly.

Breaking down France’s new $76M Africa startup fund

Weeks after French President Emmanuel Macron unveiled a $76M African startup fund at VivaTech 2018, TechCrunch paid a visit to the French Development Agency (AFD) — who will administer the new fund — to get more details on how le nouveau fonds will work.

The $76M (or €65M) will be divvied up into three parts, according to AFD Digital Task Team Leader Christine Ha.

“There are €10M [$11.7M] for technical assistance to support the African ecosystem… €5M will be available as interest free loans to high potential, pre seed startups…and…€50M [$58M] will be for equity-based investments in series A to C startups,” explained Ha during a meeting in Paris.

The technical assistance will be distributed in the form of grants to accelerators, hubs, incubators and coding programs. The pre-seed startup loans will be issued in amounts up to $100K “as early, early funding to allow entrepreneurs to prototype, launch, and experiment,” said Ha.

The $58M in VC startup funding will be administered through Proparco, a development finance institution—or DFI—partially owned by the AFD. The money will come “from Proparco’s balance sheet”…and a portion “will be invested in VC funds active on the continent,” said Ha.

Proparco already invests in Africa-focused funds such as TLcom Capital and Partech Ventures. “Proparco will take equity stakes, and will be a limited partner when investing in VC funds,” said Ha.

Startups from all African countries can apply for a piece of the $58M by contacting any of Proparco’s Africa offices (including in Casablanca, Abidjan, Douala, Lagos, Nairobi, Johannesburg).

And what will AFD (and Proparco) look for in African startup candidates? “We are targeting young and innovative companies able to solve problems in terms of job creation, access to financial services, energy, health, education and affordable goods and services…[and] able to scale up their venture on the continent,” said Ha.

The $11.7M technical-assistance and $5.8M loan portions of France’s new fund will be available starting in 2019. On implementation, AFD is still “reviewing several options…such as relying on local actors through [France’s] Digital Africa platform,” said Ha.

Digital Africa — a broader French government initiative to support the African tech ecosystem — will launch a new online platform in November 2018 with resources for startup entrepreneurs.

So that’s the skinny on France’s new Africa fund. It adds to a wave of VC funding announced for the continent in less than 15 months, including $70M for Partech Ventures, TPG Growth’s $2BN Rise Fund and $40M at TLcom Capital.

Though $76M (and these other amounts) may pale compared to Silicon Valley VC values, it’s a lot for a startup scene that — at rough estimate — attracted only $400M four years ago. African tech entrepreneurs, you now have a lot more global funding options, including from France.

Blockchain technology could be the great equalizer for American cities

The city of Austin is currently piloting a program in which its 2,000 homeless residents will be given a unique identifier that’s safely and securely recorded on the blockchain. This identifier will help individuals consolidate their records and seek out crucial services. Service providers will also be able to access the information. If successful, we’ll have a new, more efficient way to communicate and ensure that the right people are at the table to help the homeless.

In Austin and around the country, blockchain technology seems to be opening a range of opportunities for city service delivery and operations.

At its core, blockchain is a secure, inalterable electronic register. Serving as a shared database or distributed ledger, it provides a permanent online record for anything that can be represented digitally, such as rights, goods and property. Through enhanced trust, consensus and autonomy, blockchain brings widespread decentralization to transactions.
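The hash-chaining that makes such a register effectively inalterable can be sketched in a few lines of Python. This is a toy illustration, not a real blockchain (it omits consensus and decentralization entirely), and the record fields are invented:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: dict) -> list:
    """Append a record; each block commits to everything before it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": block_hash(record, prev_hash)})
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block["hash"] != block_hash(block["record"], prev_hash):
            return False
        prev_hash = block["hash"]
    return True

chain = []
append_block(chain, {"id": "resident-001", "service": "intake"})
append_block(chain, {"id": "resident-001", "service": "housing referral"})
assert verify(chain)

# Changing any earlier record invalidates every hash after it.
chain[0]["record"]["service"] = "altered"
assert not verify(chain)
```

Because each block’s hash covers the previous block’s hash, rewriting history requires recomputing every subsequent block, which is exactly what a distributed network of verifiers makes infeasible.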

At the municipal level, blockchain has the potential to create countless smart networks and grids, altering how we do everything from voting and building credit to receiving energy. In many ways, it could be a crucial component of what is needed to circumvent outdated systems and build long-lasting solutions for cities.

So far, as Motherboard has previously reported, blockchain has largely been a “rich getting richer” situation. But if it’s good enough for the wealthy, why can’t it also help the poorer, more vulnerable members of the population?

Consider, for a moment, that it might be a major player in the more inclusive future we’ve always wanted.

Arguably, we have a lot of work to do. According to new research, 43 percent of families struggle to afford basics like food and housing. These populations are perhaps the ones who stand to gain the most from blockchain, the Internet of Things (IoT) and the advent of smart cities — if done right.

Smart city technology is growing ever more common here in the US and around the world. Our research shows that 66 percent of cities have invested in some sort of smart city infrastructure that enables them to collect, aggregate and analyze real-time data, and these investments are already showing great promise for improving residents’ lives.

Take, for instance, electricity. With the help of blockchain, we can turn microgrids into a reality on a macro scale, enabling communities to more easily embrace solar power and other more sustainable sources, which in turn will result in fewer emissions and lower healthcare costs and rates of disease. But in the more immediate future, blockchain-enabled microgrids would allow consumers to join a power “exchange” in which they can sell their surplus energy. In many scenarios, the consumers’ bills would either significantly drop, or they’d earn money.
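The settlement logic of such an exchange can be illustrated with a toy calculation. All rates and meter readings below are invented for illustration; a real exchange would involve dynamic pricing and metering infrastructure:

```python
# Toy settlement for a neighborhood energy exchange: households with
# surplus solar sell to those with deficits at a cheaper local rate.
LOCAL_RATE = 0.10   # assumed $ per kWh traded peer-to-peer
GRID_RATE = 0.15    # assumed $ per kWh bought from the utility

# Net kWh per household over a billing period (surplus > 0, deficit < 0).
households = {"A": +5.0, "B": -3.0, "C": -4.0}

surplus = sum(v for v in households.values() if v > 0)
deficit = -sum(v for v in households.values() if v < 0)
traded = min(surplus, deficit)

# Buyers cover as much demand as possible locally, the rest from the grid.
local_cost = traded * LOCAL_RATE
grid_cost = (deficit - traded) * GRID_RATE
seller_revenue = traded * LOCAL_RATE

print(f"traded {traded} kWh locally")
print(f"buyers pay ${local_cost + grid_cost:.2f} vs ${deficit * GRID_RATE:.2f} grid-only")
print(f"sellers earn ${seller_revenue:.2f}")
```

In this sketch the buyers’ combined bill drops (here from $1.05 to $0.80) while the surplus producer earns money; the blockchain’s role would be recording these peer-to-peer trades without a central intermediary.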

Then there’s the question of building credit. It should be no surprise that the poor are the most likely to have debt and unpaid bills and, therefore, bad credit. They are also the most likely to be “unbanked,” meaning they don’t use banks at all; in fact, seven percent of Americans don’t use banks. But with blockchain, we can design an alternate way to build credit and track transactions.

And, of course, there is voting — an issue that, more than ever, is vital to a thriving democracy. The US has lower voter turnout than just about every other developed country. In fact, just over half of voting-age Americans voted in 2016. We don’t talk enough about how important civic engagement — and holding politicians accountable — is for making the playing field fairer. We do, however, talk about what it would be like to be able to email our votes from the comfort of our home computer or smartphone. While email isn’t nearly secure enough for selecting our leaders, being able to vote from home is something we could — and should — aim to do.

Blockchain is proving to be a secure enough system to make this a reality. The result could be more youth, communities of color and disabled voters “showing up” to the polls. These online polls would be more “hack proof” — another contemporary concern — and votes could be counted in real time. Imagine never again going to bed thinking one candidate had won a race but waking up to find it was actually someone else.

Where will we go next with blockchain and what can this powerful new tool do for cities? Our latest National League of Cities report, Blockchain in Cities, provides mayors and other local officials with some clues. The research not only explores how cities can use blockchain now, but also how it will be used in the future to enable technology like autonomous vehicles that can “talk” to each other. These types of use cases — plus existing opportunities from blockchain — could potentially be transformative for municipal operations.

Blockchain is far more than just cryptocurrency. In time, blockchain could turn American society on its head, and at the same time make our major institutions, and the places we live, more inclusive. Cities — and in some cases states — are the places where this will be piloted. By developing smarter cities and utilizing blockchain as a secure resource, city leaders can provide community members with the tools they need for success.

After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect back on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition of GitHub by Microsoft, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?

The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, and the customer was then obligated to pay an additional 20 percent per year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
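The economics behind reason 1 are easy to see with back-of-the-envelope numbers. The license figure and support rate come from the passage above; the per-seat subscription price is an assumption chosen for illustration, and the seat count borrows the 200-user Siebel example:

```python
# Perpetual license (from the text): $1M upfront plus 20% annual support.
# Subscription: an assumed $150 per user per month for 200 users.
license_cost = 1_000_000
support_rate = 0.20
subscription_per_user_month = 150   # hypothetical figure
users = 200                          # Benioff's Siebel example size

def perpetual_total(years: int) -> int:
    """Upfront license plus cumulative annual support fees."""
    return int(license_cost + license_cost * support_rate * years)

def subscription_total(years: int) -> int:
    """Pay-as-you-go, no upfront commitment."""
    return int(subscription_per_user_month * users * 12 * years)

for years in (1, 3, 5):
    print(f"{years}y: perpetual ${perpetual_total(years):,} "
          f"vs subscription ${subscription_total(years):,}")
```

Under these assumed numbers the subscription costs $360K in year one against $1.2M for the license, and the totals only converge after several years, which is exactly the lowered barrier to adoption (and the “land and expand” dynamic) Benioff was after.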

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ‘00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.

It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe Suite successfully made the switch from the upfront model to thriving subscription businesses years ago. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or through thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has carried habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage has been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center — with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packed as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes.

What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than was previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.

The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic about where their applications are deployed and about who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.

VCs serve up a large helping of cash to startups disrupting food

Here is what your daily menu might look like if recently funded startups have their way.

You’ll start the day with a nice, lightly caffeinated cup of cheese tea. Chase away your hangover with a cold bottle of liver-boosting supplement. Then slice up a few strawberries, fresh-picked from the corner shipping container.

Lunch is full of options. Perhaps a tuna sandwich made with a plant-based, tuna-free fish. Or, if you’re feeling more carnivorous, grab a grilled chicken breast fresh from the lab that cultured its cells, while crunching on a side of mushroom chips. And for extra protein, how about a brownie?

Dinner might be a pizza so good you send your compliments to the chef — only to discover the chef is a robot. For dessert, have some gummy bears. They’re high in fiber with almost no sugar.

Sound terrifying? Tasty? Intriguing? If you checked tasty and intriguing, then here is some good news: The concoctions highlighted above are all products available (or under development) at food and beverage startups that have raised venture and seed funding this past year.

These aren’t small servings of capital, either. A Crunchbase News analysis of venture funding for the food and beverage category found that startups in the space gobbled up more than $3 billion globally in disclosed investment over the past 12 months. That includes a broad mix of supersize deals, tiny seed rounds and everything in-between.

Spending several hours looking at all these funding rounds leaves one with a distinct sense that eating habits are undergoing a great deal of flux. And while we can’t predict what the menu of the future will really hold, we can highlight some of the trends. For this initial installment in our two-part series, we’ll start with foods. Next week, we’ll zero in on beverages.

Chickenless nuggets and fishless tuna

For protein lovers disenchanted with commercial livestock farming, the future looks good. At least eight startups developing plant-based and alternative proteins closed rounds in the past year, focused on everything from lab meat to fishless fish to fast-food nuggets.

New investments add momentum to what was already a pretty hot space. To date, more than $600 million in known funding has gone to what we’ve dubbed the “alt-meat” sector, according to Crunchbase data. Actual investment levels may be quite a bit higher since strategic investors don’t always reveal round size.

In recent months, we’ve seen particularly strong interest in the lab-grown meat space. At least three startups in this area — Memphis Meats, SuperMeat and Wild Type — raised multi-million dollar rounds this year. That could be a signal that investors have grown comfortable with the concept, and now it’s more a matter of who will be early to market with a tasty and affordable finished product.

Makers of meatless versions of common meat dishes are also attracting capital. Two of the top funding recipients in our data set include Seattle Food Tech, which is working to cost-effectively mass-produce meatless chicken nuggets, and Good Catch, which wants to hook consumers on fishless seafoods. While we haven’t sampled their wares, it does seem like they have chosen some suitable dishes to riff on. After all, in terms of taste, both chicken nuggets and tuna salad are somewhat removed from their original animal protein sources, making it seemingly easier to sneak in a veggie substitute.

Robot chefs

Another trend we saw catching on with investors is robot chefs. Modern cooking is already a gadget-driven process, so it’s not surprising investors see this as an area ripe for broad adoption.

Pizza, the perennial takeout favorite, seems to be a popular area for future takeover by robots, with at least two companies securing rounds in recent months. Silicon Valley-based Zume, which raised $48 million last year, uses robots for tasks like spreading sauce and moving pies in and out of the oven. France’s EKIM, meanwhile, recently opened what it describes as a fully autonomous restaurant staffed by pizza robots cooking as customers watch.

Salad, pizza’s healthier companion, is also getting roboticized. Just this week, Chowbotics, a developer of robots for food service whose lineup includes Sally the salad robot, announced an $11 million Series A round.

Those aren’t the only players. We’ve put together a more complete list of recently launched or funded robot food startups here.

Beyond sugar

Sugar substitutes aren’t exactly a new area of innovation. Diet Rite, often credited as the original diet soda, hit the market in 1958. Since then, we’ve had 60 years of mass-marketing for low-calorie sweeteners, from aspartame to stevia.

It’s not over. In recent quarters, we’ve seen a raft of funding rounds for startups developing new ways to reduce or eliminate sugar in many of the foods we’ve come to love. On the dessert and candy front, Siren Snacks and SmartSweets are looking to turn favorite indulgences like brownies and gummy bears into healthy snack options.

The quest for good-for-you sugar also continues. The latest funding recipient in this space appears to be Bonumuse, which is working to commercialize two rare sugars, tagatose and allulose, as lower-calorie and potentially healthier substitutes for table sugar. We’ve compiled a list of more sugar-reduction-related startups here.

Where is it all headed?

It’s tough to tell which early-stage food startups will take off and which will wind up in the scrap bin. But taken in aggregate, what they’re cooking up suggests the meal of the future will be high in protein, low in sugar and prepared by a robot.

The problem with ‘explainable AI’

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Companies should disclose where and how they got the data they used to fuel their AI systems’ decisions. Consumers should own their data and should be privy to the myriad ways that businesses use and sell such information, which is often done without clear and conscious consumer consent. Because data is the foundation for all AI, it is valid to want to know where the data comes from and how it might explain biases and counterintuitive decisions that AI systems make.

On the algorithmic side, grandstanding by IBM and other tech giants around the idea of “explainable AI” is nothing but virtue signaling that has no basis in reality. I am not aware, for instance, of any place where IBM has laid bare the inner workings of Watson — how do those algorithms work? Why do they make the recommendations/predictions they do?

There are two issues with the idea of explainable AI. The first is one of definition: What do we mean by explainability? What do we want to know? The algorithms or statistical models used? How learning has changed parameters over time? What a model looked like for a certain prediction? A cause-and-effect relationship expressed in human-intelligible concepts?

Each of these entails a different level of complexity. Some are pretty easy — someone had to design the algorithms and data models, so they know what they used and why. What these models are is also pretty transparent. In fact, one of the refreshing facets of the current AI wave is that most of the advancements are published in peer-reviewed papers — open and available to everyone.

What these models mean, however, is a different story. How these models change and how they work for a specific prediction can be checked, but what they mean is unintelligible to most of us. It would be like buying an iPad with a label on the back explaining how its microprocessor and touchscreen work — good luck! As for the further layer of human-intelligible causal relationships, that’s a different problem altogether.
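The gap between disclosure and understanding shows up even in a toy model. The sketch below (illustrative only; the data, labels and training loop are invented for this example) trains a tiny logistic regression from scratch and prints its learned weights. Every number is fully “transparent,” yet the weights explain a decision only in the sense that a circuit diagram explains an iPad:

```python
import math
import random

random.seed(0)

# Synthetic data: 200 examples with 3 features; the label depends on a
# hidden linear rule the model must recover.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
y = [1 if (0.8 * x[0] - 0.5 * x[1] + 0.3 * x[2]) > 0 else 0 for x in X]

# Logistic regression trained by plain stochastic gradient descent.
w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(500):
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]

# The parameters are fully disclosed -- but the raw numbers do not, by
# themselves, constitute a human-intelligible explanation.
print("learned weights:", [round(wj, 2) for wj in w])
```

And this is the simplest possible case: a three-parameter linear model. A deep network has millions of such numbers, with no individual meaning at all.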

Part of the advantage of some current approaches (most notably deep learning) is that the model identifies (some) relevant variables that are better than the ones we can define. Part of the reason their performance is better is precisely that complexity, which is hard to explain: the system finds variables and relationships that humans have not identified or articulated. If we could, we would program it and call it software.

The second overarching factor when considering explainable AI is assessing the trade-offs of “true explainable and transparent AI.” For some tasks, there is currently a trade-off between performance and explainability, and there are business ramifications as well. If all the inner workings of an AI-powered platform were publicly available, then intellectual property as a differentiator would be gone.

Imagine if a startup created a proprietary AI system, for instance, and was compelled to explain exactly how it worked, to the point of laying it all out — it would be akin to asking that a company disclose its source code. If the IP had any value, the company would be finished soon after it hit “send.” That’s why, generally, a push for such requirements favors incumbents that have big budgets and market dominance, and would stifle innovation in the startup ecosystem.

Please don’t misread this to mean that I’m in favor of “black box” AI. Companies should be transparent about their data and offer an explanation of their AI systems to those who are interested, but we need to think through the societal implications, both in terms of what we can do and of the business environment we create. I am all for open source and transparency, and I see AI as a transformative technology with a positive impact. But by putting such a premium on transparency, we set a very high burden for what is an infant but high-potential industry.

US startups off to a strong M&A run in 2018

With Microsoft’s $7.5 billion acquisition of GitHub this week, we can now decisively declare a trend: 2018 is shaping up as a darn good year for U.S. venture-backed M&A.

So far this year, acquirers have spent just over $20 billion in disclosed-price purchases of U.S. VC-funded companies, according to Crunchbase data. That’s about 80 percent of the 2017 full-year total, which is pretty impressive, considering we’re barely five months into 2018.

If one included unreported purchase prices, the totals would be quite a bit higher. Fewer than 20 percent of acquisitions in our data set came with reported prices.1 Undisclosed prices are mostly for smaller deals, but not always. We put together a list of a dozen undisclosed price M&A transactions this year involving companies snapped up by large-cap acquirers after raising more than $20 million in venture funding.

The big deals

The deals that everyone talks about, however, are the ones with the big and disclosed price tags. And we’ve seen quite a few of those lately.

As we approach the half-year mark, nothing comes close to topping the GitHub deal, which ranks as one of the biggest acquisitions of a private, U.S. venture-backed company ever. The last deal to top it was Facebook’s $19 billion purchase of WhatsApp in 2014, according to Crunchbase.

Of course, GitHub is a unique story with an astounding growth trajectory. Its platform for code development, most popular among programmers, has drawn 28 million users. For context, that’s more than the entire population of Australia.

Still, let’s not forget about the other big deals announced in 2018. We list the top six below:

Flatiron Health, a provider of software used by cancer care providers and researchers, ranks as the second-biggest VC-backed acquisition of 2018. Its purchaser, Roche, was an existing stakeholder who apparently liked what it saw enough to buy up all remaining shares.

Next up is job and employer review site Glassdoor, a company familiar to many of those who’ve looked for a new post or handled hiring in the past decade. The 11-year-old company found a fan in Tokyo-based Recruit Holdings, a provider of recruitment and human resources services that also owns leading job site Indeed.com.

Meanwhile, Impact Biomedicines, a cancer therapy developer that sold to Celgene for $1.1 billion, could end up delivering an even larger exit. The acquisition deal includes potential milestone payments approaching $6 billion.

Deal counts look flat

Not all metrics are trending up, however. While acquirers are doing bigger deals, they don’t appear to be buying a larger number of startups.

Crunchbase shows 216 startups in our data set that sold this year. That’s roughly on par with the pace of dealmaking in the year-ago period, which had 222 M&A exits using similar parameters. (For all of 2017, there were 508 startup acquisitions that met our parameters.2)

Below, we look at M&A counts for the past five calendar years:

Looking at prior years for comparison, the takeaway seems to be that M&A deal counts for 2018 look just fine, but we’re not seeing a big spike.

What’s changed?

The more notable shift from 2017 seems to be buyers’ bigger appetite for unicorn-scale deals. Last year, we saw just one acquisition of a software company for more than a billion dollars — Cisco’s $3.7 billion purchase of AppDynamics — and that was only after the performance management software provider filed to go public. The only other billion-plus deal was PetSmart’s $3.4 billion acquisition of pet food delivery service Chewy, which previously raised early venture funding and later private equity backing.

There are plenty of reasons why acquirers could be spending more freely this year. Some that come to mind: Stock indexes are chugging along, and U.S. legislators have slashed corporate tax rates. U.S. companies with large cash hoards held overseas, like Apple and Microsoft, also received new financial incentives to repatriate that money.

That’s not to say companies are doing acquisitions for these reasons. There’s no obligation to spend repatriated cash in any particular way, and many prefer share buybacks or sitting on piles of money. Nonetheless, the combination of these two things — more money and less uncertainty around tax reform — is certainly not a bad thing for M&A.

High public valuations, particularly for tech, also help. Microsoft shares, for instance, have risen by more than 44 percent in the past year. That means that it took about a third fewer shares to buy GitHub this month than it would have a year ago. (Of course, GitHub’s valuation probably rose as well, but we’ll ignore that for now.)
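That “about a third” figure follows directly from the price move. A quick back-of-envelope sketch, using the article’s 44 percent gain and $7.5 billion deal size (the baseline share price is arbitrary and only for illustration):

```python
# If a stock rises 44%, a fixed-dollar acquisition needs ~31% fewer shares.
price_then = 100.0              # arbitrary baseline share price
price_now = price_then * 1.44   # after a 44% rise
deal_value = 7.5e9              # GitHub purchase price, in dollars

shares_then = deal_value / price_then
shares_now = deal_value / price_now
reduction = 1 - shares_now / shares_then

print(f"shares needed fell by {reduction:.0%}")  # prints "shares needed fell by 31%"
```

The reduction depends only on the percentage gain, not on the baseline price or the deal size, which is why “about a third fewer shares” holds regardless of what GitHub actually cost in stock terms.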

Paying retail

Overall, this is not looking like an M&A market for bargain hunters.

Large-cap acquirers seem willing to pay retail price for startups they like, given the competitive environment. After all, the IPO window is wide open. Plus, fast-growing unicorns have the option of staying private and raising money from SoftBank or a panoply of other highly capitalized investors.

Meanwhile, acquirers themselves are competing for desirable startups. Microsoft’s winning bid for GitHub reportedly followed overtures by Google, Atlassian and a host of other would-be buyers.

But even in the most buoyant climate, one rule of acquiring remains true: It’s hard to turn down $7.5 billion.

  1. The data set included companies that have raised $1 million or more in venture or seed funding, with their most recent round closing within the past five years.
  2. For the prior year comparisons, including the chart, the data set consisted of companies acquired in a specified year that raised $1 million or more in venture or seed funding, with their most recent round closing no more than five years before the middle of that year.

FCC shrugs at fake cell towers around the White House

Turns out, Ajit Pai was serious last year when he told lawmakers that the FCC didn't want anything to do with cybersecurity.

This past April the Associated Press reported "For the first time, the U.S. government has publicly acknowledged the existen…