AI could help push Neo4j graph database growth

Graph databases have always been useful for finding connections across vast data sets, and it turns out that capability is quite handy in artificial intelligence and machine learning too. Today, Neo4j, the makers of the open source and commercial graph database platform, announced the release of Neo4j 3.5, which has a number of new features aimed specifically at AI and machine learning.

Neo4j founder and CEO Emil Eifrem says he recognized the connection between AI and machine learning and graph databases a while ago, but it has taken some time for the market to catch up to the idea.

“There has been a lot of momentum around AI and graphs…Graphs are very fundamental to AI. At the same time we were seeing some early use cases, but not really broad adoption, and that’s what we’re seeing right now,” he explained.

AI graph use cases. Graphic: Neo4j

To help advance AI use cases, today’s release includes a new full text search capability, which Eifrem says has been one of the most requested features. This is important because when you are making connections between entities, you have to be able to find all of the examples regardless of how they are worded — for example, human versus humans versus people.
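
For a sense of what that looks like in practice, here is a minimal sketch using Neo4j 3.5’s full-text procedures through the official Python driver. The connection details and the Entity/name schema are placeholders for illustration, not details from the release:

```python
# Minimal sketch: create and query a full-text index in Neo4j 3.5.
# The URI, credentials and the :Entity(name) schema are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Index the name property of all :Entity nodes for full-text search.
    session.run(
        "CALL db.index.fulltext.createNodeIndex('entityNames', ['Entity'], ['name'])"
    )
    # Lucene's fuzzy operator (~) lets 'human~' also match close variants
    # such as 'humans'.
    result = session.run(
        "CALL db.index.fulltext.queryNodes('entityNames', 'human~') "
        "YIELD node, score RETURN node.name AS name, score"
    )
    for record in result:
        print(record["name"], record["score"])
```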

Part of that was building its own indexing engine to increase indexing speed, which becomes essential with ever more data to process. “Another really important piece of functionality is that we have improved our data ingestion very significantly. We have 5x end-to-end performance improvements when it comes to importing data. And this is really important for connected feature extraction, where obviously, you need a lot of data to be able to train the machine learning,” he said. That also means faster sorting of data.
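
As a rough illustration of high-throughput ingestion over the driver (this is a common pattern, not Neo4j’s own benchmark), rows are typically batched through a single parameterized UNWIND query. The label, properties and batch size below are assumptions:

```python
# Illustrative batched ingestion via UNWIND; schema and batch size
# are invented for this sketch.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def ingest(session, rows, batch_size=10000):
    # Large batches amortize per-transaction overhead across many rows.
    for i in range(0, len(rows), batch_size):
        session.run(
            "UNWIND $batch AS row "
            "MERGE (p:Person {id: row.id}) SET p.name = row.name",
            batch=rows[i:i + batch_size],
        )

with driver.session() as session:
    ingest(session, [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Alan"}])
```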

Other features in the new release include improvements to the company’s own Cypher query language and better graph visualization, which helps show how machine learning algorithms reach their results, a property known as AI explainability. The company also announced support for the Go language and increased security.

Graph databases are growing increasingly important as we look to find connections between data. The most common use case is the knowledge graph, which is what lets us see connections in huge data sets. Common examples include who we are connected to on a social network like Facebook, or recommendations on an e-commerce site for items similar to ones we have already bought.
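
The e-commerce case maps onto a graph query quite directly. Here is a hedged sketch of the classic “people who bought this also bought” pattern, over a hypothetical (:Customer)-[:BOUGHT]->(:Product) graph of my own invention:

```python
# Hypothetical schema: (:Customer)-[:BOUGHT]->(:Product)
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (:Product {id: $pid})<-[:BOUGHT]-(c:Customer)-[:BOUGHT]->(other:Product)
WHERE other.id <> $pid
RETURN other.name AS recommendation, count(*) AS strength
ORDER BY strength DESC LIMIT 5
"""

with driver.session() as session:
    # Rank co-purchased products by how many customers bought both.
    for record in session.run(query, pid="sku-123"):
        print(record["recommendation"], record["strength"])
```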

Other use cases include connected feature extraction, a common machine learning training technique that can look at a lot of data and extract the connections, context and relationships for a particular piece of data, such as suspects in a criminal case and the people connected to them.
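
In code, connected feature extraction amounts to flattening graph context into per-node features a model can train on. A toy sketch against a hypothetical (:Person)-[:KNOWS]-(:Person) graph, with an invented “flagged” property:

```python
# Toy connected feature extraction; the schema is an assumption.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

feature_query = """
MATCH (p:Person)
OPTIONAL MATCH (p)-[:KNOWS]-(a:Person)
WITH p, count(a) AS degree,
     sum(CASE WHEN a.flagged THEN 1 ELSE 0 END) AS flagged_contacts
RETURN p.id AS id, degree, flagged_contacts
"""

with driver.session() as session:
    # Each row is a ready-made feature vector for a downstream model.
    features = [dict(record) for record in session.run(feature_query)]
```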

Neo4j has over 300 large enterprise customers including Adobe, Microsoft, Walmart, UBS and NASA. The company launched in 2007 and has raised $80 million. The last round was $36 million in November 2016.

IBM launches cloud tool to detect AI bias and explain automated decisions

IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes, not just for a company’s own due diligence.

The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI-build environments — including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker and AzureML.

It says the service can be programmed to suit specific organizational needs, taking account of the “unique decision factors of any business workflow”.

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.
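
IBM hasn’t published how its detector works, but one standard runtime check gives the flavor: compute disparate impact, the ratio of favorable-outcome rates between groups, over a rolling window of live decisions. A minimal sketch; the four-fifths threshold is a common convention, not IBM’s documented behavior:

```python
# Disparate impact over a window of recent automated decisions.
def disparate_impact(decisions, group_key="group", outcome_key="approved"):
    """decisions: dicts like {'group': 'A', 'approved': True}."""
    totals, favorable = {}, {}
    for d in decisions:
        g = d[group_key]
        totals[g] = totals.get(g, 0) + 1
        favorable[g] = favorable.get(g, 0) + (1 if d[outcome_key] else 0)
    rates = [favorable[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

window = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
]
if disparate_impact(window) < 0.8:  # the common "four-fifths rule"
    print("potential bias: favorable rates diverge across groups")
```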

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.

Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.
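
IBM hasn’t detailed its attribution method, but for intuition: with a linear model, a decision’s factor weights fall out as coefficient times feature value, and the predicted probability doubles as a confidence score. A deliberately simplified sketch with invented toy data:

```python
# Simplified factor attribution for a linear model; real explanation
# systems use more sophisticated, model-specific attribution methods.
from sklearn.linear_model import LogisticRegression

# Toy loan data: features are [income_ok, prior_default].
X = [[1, 0], [1, 1], [0, 0], [0, 1]]
y = [1, 0, 0, 0]  # approved?
model = LogisticRegression(solver="lbfgs").fit(X, y)

applicant = [1, 1]
confidence = model.predict_proba([applicant])[0].max()
contributions = dict(zip(["income_ok", "prior_default"],
                         model.coef_[0] * applicant))
print("confidence:", round(confidence, 2))
print("per-factor contributions:", contributions)
```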

IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.

For one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.

The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.

However, IBM also intends its own professional services staff to work with businesses using the new software service. So it will be selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.

So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.

And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)

In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

Nvidia launches the Tesla T4, its fastest data center inferencing platform yet

Nvidia today announced its new GPU for machine learning and inferencing in the data center. The new Tesla T4 GPUs (where the ‘T’ stands for Nvidia’s new Turing architecture) are the successors to the current batch of P4 GPUs that virtually every major cloud computing provider now offers. Google, Nvidia said, will be among the first to bring the new T4 GPUs to its Cloud Platform.

Nvidia argues that the T4s are significantly faster than the P4s. For language inferencing, for example, the T4 is 34 times faster than using a CPU and more than 3.5 times faster than the P4. Peak performance for the T4 is 260 TOPS for 4-bit integer operations and 65 TFLOPS for floating point operations. The T4 sits on a standard low-profile 75-watt PCI-e card.

What’s most important, though, is that Nvidia designed these chips specifically for AI inferencing. “What makes Tesla T4 such an efficient GPU for inferencing is the new Turing tensor core,” said Ian Buck, Nvidia’s VP and GM of its Tesla data center business. “[Nvidia CEO] Jensen [Huang] already talked about the Tensor core and what it can do for gaming and rendering and for AI, but for inferencing — that’s what it’s designed for.” In total, the chip features 320 Turing Tensor cores and 2,560 CUDA cores.

In addition to the new chip, Nvidia is also launching a refresh of its TensorRT software for optimizing deep learning models. This new version also includes the TensorRT inference server, a fully containerized microservice for data center inferencing that plugs seamlessly into an existing Kubernetes infrastructure.
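
Because the server fronts models over plain HTTP/gRPC, smoke-testing a deployment from Python is a one-liner per endpoint. The port and routes below reflect the server’s documented HTTP API around launch, but treat them as assumptions and check the docs for the version you deploy:

```python
# Hedged sketch: poke the TensorRT inference server's HTTP endpoints.
# Port 8000 and these routes are assumptions based on launch-era docs.
import requests

BASE = "http://localhost:8000"

ready = requests.get(BASE + "/api/health/ready")
print("server ready:", ready.status_code == 200)

status = requests.get(BASE + "/api/status")  # per-model status report
print(status.text)
```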

Clinc expands to automotive, releases SaaS platform for its voice AI assistant

Clinc co-founder and CEO Jason Mars just announced that the company is expanding to a third vertical: automotive. The company, which started in fintech and recently unveiled a product for drive-thru restaurants, is aiming its voice AI service at the automotive industry. The idea is to give automakers a platform they can integrate into their vehicles, allowing drivers to control and interact with the car through natural language.

Alongside the new product, Clinc also revealed a platform that gives developers access to its conversational AI. The company says it’s easy enough for developers with little to no machine learning experience to build with Clinc’s products.

Clinc’s conversational AI is fantastic and the company’s products in other verticals show that if it’s used by automakers, the technology could usher in a new wave of user interfaces. This is not Siri.

The company was founded in Ann Arbor, Michigan in 2015 with a solution for fintech and currently has several contracts with major banks such as USAA, Barclays and S&P Global. In most cases, when integrated into the bank’s system, Clinc’s technology emulates human intelligence and can interpret unstructured, unconstrained speech. The idea is to let users converse with their bank account using natural language without pre-defined templates or hierarchical voice menus. The company says it works in any language.

With its new developer platform, companies can use Clinc’s system to integrate the company’s natural language processing into their products.

“We’re thrilled to be democratizing the world’s most powerful conversational AI and to be empowering people to solve important problems and to create amazing things,” said Dr. Jason Mars, Clinc CEO. “We’ve taken the complexity out of machine learning infrastructure and we’re giving developers the keys to our AI brain to create and deploy their own customizable virtual assistants.”

Storage provider Cloudian raises $94M

Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.

With this, the seven-year-old company has now raised a total of $174 million.

As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.

What’s important to stress here is that Cloudian’s focus is on on-premise storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.

Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, no matter whether that’s from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.

Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”

As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anymore.”

Incentivai launches to simulate how hackers break blockchains

Cryptocurrency projects can crash and burn if developers don’t predict how humans will abuse their blockchains. Once a decentralized digital economy is released into the wild and the coins start to fly, it’s tough to implement fixes to the smart contracts that govern them. That’s why Incentivai is coming out of stealth today with its artificial intelligence simulations that test not just for security holes, but for how greedy or illogical humans can crater a blockchain community. Crypto developers can use Incentivai’s service to fix their systems before they go live.

“There are many ways to check the code of a smart contract, but there’s no way to make sure the economy you’ve created works as expected,” says Incentivai’s solo founder Piotr Grudzień. “I came up with the idea to build a simulation with machine learning agents that behave like humans so you can look into the future and see what your system is likely to behave like.”

Incentivai will graduate from Y Combinator next week and already has a few customers. They can either pay Incentivai to audit their project and produce a report, or they can use the AI simulation tool themselves as a software-as-a-service. The first deployments of blockchains it’s checked will go out in a few months, and the startup has released some case studies to prove its worth.

“People do theoretical work or logic to prove that under certain conditions, this is the optimal strategy for the user. But users are not rational. There’s lots of unpredictable behavior that’s difficult to model,” Grudzień explains. Incentivai explores those illogical trading strategies so developers don’t have to tear out their hair trying to imagine them.

Protecting crypto from the human x-factor

There’s no rewind button in the blockchain world. The immutable and irreversible qualities of this decentralized technology prevent inventors from meddling with it once in use, for better or worse. If developers don’t foresee how users could make false claims and bribe others to approve them, or take other actions to screw over the system, they might not be able to thwart the attack. But given the right open-ended incentives (hence the startup’s name), AI agents will try everything they can to earn the most money, exposing the conceptual flaws in the project’s architecture.

“The strategy is the same as what DeepMind does with AlphaGo, testing different strategies,” Grudzień explains. He developed his AI chops earning a master’s at Cambridge before working on natural language processing research for Microsoft.

Here’s how Incentivai works. First a developer writes the smart contracts they want to test for a product like selling insurance on the blockchain. Incentivai tells its AI agents what to optimize for and lays out all the possible actions they could take. The agents can have different identities, like a hacker trying to grab as much money as they can, a faker filing false claims or a speculator that cares about maximizing coin price while ignoring its functionality.

Incentivai then tweaks these agents to make them more or less risk averse, or care more or less about whether they disrupt the blockchain system in its totality. The startup monitors the agents and pulls out insights about how to change the system.
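
Incentivai’s simulator is proprietary, but a toy sketch conveys the shape of the idea: populate a stylized economy with self-interested agents of varying risk appetite, let each pick the action that maximizes its expected payoff, and watch which behaviors dominate. Everything below is invented for illustration:

```python
import random

# Expected payoff of each action as a function of the agent's risk
# appetite; the numbers are arbitrary stand-ins for a modeled economy.
ACTIONS = {
    "honest_trade":     lambda risk: 1.0,
    "file_false_claim": lambda risk: 5.0 * risk - 2.0,  # high reward, high downside
    "take_bribe":       lambda risk: 3.0 * risk - 1.5,
}

class Agent:
    def __init__(self, risk_appetite):
        self.risk = risk_appetite  # 0.0 = cautious, 1.0 = reckless

    def act(self):
        # Each agent greedily picks its highest expected-payoff action.
        return max(ACTIONS, key=lambda name: ACTIONS[name](self.risk))

agents = [Agent(random.random()) for _ in range(1000)]
chosen = [agent.act() for agent in agents]
for name in ACTIONS:
    print(name, chosen.count(name))
# If dishonest actions dominate, the incentive design needs rework
# before the contracts go live.
```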

For example, Incentivai might learn that uneven token distribution leads to pump and dump schemes, so the developer should more evenly divide tokens and give fewer to early users. Or it might find that an insurance product where users vote on which claims should be approved needs to raise the bond that voters pay to verify a claim, so that it isn’t profitable for voters to take bribes from fraudsters.

Grudzień has made some predictions about his own startup too. He thinks that if the use of decentralized apps rises, there will be a lot of startups trying to copy his approach to security services. He says some firms are already doing token engineering audits, incentive design and consultancy, but he hasn’t seen anyone else with a functional simulation product that has produced case studies. “As the industry matures, I think we’ll see more and more complex economic systems that need this.”

Google gives its AI the reins over its data center cooling systems

The inside of a data center is loud and hot — and keeping servers from overheating is a major factor in the cost of running them. It’s no surprise, then, that the big players in this space, including Facebook, Microsoft and Google, all look for different ways of saving cooling costs. Facebook uses cool outside air when possible, Microsoft is experimenting with underwater data centers and Google is being Google and looking to its AI models for some extra savings.

A few years ago, Google, through its DeepMind affiliate, started looking into how it could use machine learning to provide its operators some additional guidance on how to best cool its data centers. At the time, though, the system only made recommendations and the human operators decided whether to implement them. Those humans can now take longer naps during the afternoon, because the team has decided the models are now good enough to give the AI-powered system full control over the cooling system. Operators can still intervene, of course, but as long as the AI doesn’t decide to burn the place down, the system runs autonomously.

The new cooling system is now in place in a number of Google’s data centers. Every five minutes, the system polls thousands of sensors inside the data center and chooses the optimal actions based on this information. There are all kinds of checks and balances here, of course, so the chances of one of Google’s data centers going up in flames because of this are low.
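
DeepMind hasn’t released the controller’s code; the sketch below only mirrors the loop the article describes, with hard safety checks sitting between the learned policy and the actuators. All of the interfaces are hypothetical:

```python
import time

POLL_INTERVAL = 5 * 60  # seconds; the article's five-minute cycle

def control_loop(sensors, policy, actuators, limits):
    """sensors/policy/actuators/limits are hypothetical interfaces."""
    while True:
        readings = sensors.poll()                 # thousands of sensor values
        action = policy.choose_action(readings)   # learned model picks setpoints
        if limits.allows(action, readings):       # hard-coded safety envelope
            actuators.apply(action)
        else:
            actuators.apply(limits.fallback(readings))  # safe operator-defined default
        time.sleep(POLL_INTERVAL)
```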

Like most machine learning models, this one also became better as it gathered more data. It’s now delivering energy savings of 30 percent on average, compared to the data centers’ historical energy usage.

One thing that’s worth noting here is that Google is obviously trying to save a few bucks, but in many ways, the company is also looking at this as a way of promoting its own machine learning services. What works in a data center, after all, should also work in a large office building. “In the long term, we think there’s potential to apply this technology in other industrial settings and help tackle climate change on an even grander scale,” DeepMind writes in today’s announcement.

This bipedal robot has a flying head

Making a bipedal robot is hard. You have to maintain exquisite balance at all times and, even with the amazing things Atlas can do, there is still a chance that your crazy robot will fall over and bop its electronic head. But what if that head is a quadcopter?

Researchers at the University of Tokyo have done just that with their wild Aerial-Biped. The robot isn’t truly bipedal; instead, it’s designed to act like a bipedal robot without the tricky problem of actually being one. Think of these legs as more a sort of fun bit of puppetry that mimics walking but doesn’t really walk.

“The goal is to develop a robot that has the ability to display the appearance of bipedal walking with dynamic mobility, and to provide a new visual experience. The robot enables walking motion with very slender legs like those of a flamingo without impairing dynamic mobility. This approach enables casual users to choreograph biped robot walking without expertise. In addition, it is much cheaper compared to a conventional bipedal walking robot,” the team told IEEE.

The robot is similar to the bizarre-looking Ballu, a blimp robot with a floating head and spindly legs. The new robot learned how to walk convincingly through machine learning, a feat that gives it a realistic gait even though it is really an aerial system. It’s definitely a clever little project and could be interesting at a theme park or in an environment where a massive bipedal robot falling over on someone might be discouraged.