Google launches its first WeChat mini program as its China experiments continue

Google is continuing to test new strategies in China after the U.S. search giant released its first mini program for WeChat, the country’s hugely popular messaging app.

WeChat is used by hundreds of millions of Chinese people daily for services that stretch beyond chat to include mobile payments, bill paying, food delivery and more. Tencent, the company that operates WeChat, added mini programs last year and they effectively operate like apps that are attached to the service. That means that users bypass Google Play or Apple’s App Store and install them from WeChat.

Earlier this year, Tencent added support for games — “mini games” — and the Chinese firm recently said that over one million mini programs have been created to date. Engagement is high, with some 500 million WeChat users interacting with at least one each month.

WeChat has become the key distribution channel in China and that’s why Google is embracing it with its first mini program — 猜画小歌, a game that roughly translates to ‘Guess My Sketch.’ There’s no English announcement but the details can be found in this post on Google’s Chinese blog, which includes the QR code to scan to get the game.

The app is a take on games like Zynga’s Draw Something, which puts players into teams to guess what the other is drawing. Google, however, is adding a twist. Each player teams up with an AI and then battles against their friends and their AIs. You can find an English version of the game online here.


The main news here isn’t the game, of course, but that Google is embracing mini programs, which have been cast as a threat to the Google Play Store itself.

‘When in China… play by local rules’ and Google has taken that to heart this year.

The company recently introduced a Chinese version of its Files Go Android device management app which saw it join forces with four third-party app stores in China in order to gain distribution. This sketching game has lower ambitions but, clearly, it’ll be a learning experience for Google that might prompt it to introduce more significant apps or services via WeChat in the future.

Indeed, Google has been cozying up to Tencent lately, inking a patent deal with the Chinese internet giant, investing in its close ally JD.com and teaming up on investment deals, including one in biotech startup XtalPi.

That’s one side of a new initiative to be more involved in China, where it has been absent since 2010 after redirecting its Chinese search service to Hong Kong in the face of government pressure. In other moves, it has opened an AI lab in Beijing and a more modest office in Shenzhen while it is bringing its startup demo day event to China for the first time with a Shanghai event in September.

Finally, in a touch of irony, Google’s embrace of WeChat’s ‘app store-killing’ mini programs platform comes just hours before the EU is expected to levy a multibillion-euro penalty on it for abusing its dominant position on mobile via Android.

Reminder: Other people’s lives are not fodder for your feeds

#PlaneBae

You should cringe when you read that hashtag. Because it’s a reminder that people are being socially engineered by technology platforms to objectify and spy on each other for voyeuristic pleasure and profit.

The short version of the story attached to the cringeworthy hashtag is this: Earlier this month an individual named Rosey Blair spent the hours of a plane flight using her smartphone and social media feeds to invade the privacy of her seat neighbors, publicly gossiping about the lives of two strangers.

Her speculation was set against a backdrop of rearview creepshots, with a few barely-there scribbles added to blot out actual facial features, even as an entire privacy-invading narrative was unknowingly being spun around the pair.

#PlanePrivacyInvasion would be a more fitting hashtag. Or #MoralVacuumAt35000ft.

And yet our youthful surveillance society started with a far loftier idea associated with it: Citizen journalism.

Once we were all armed with powerful smartphones and ubiquitously fast Internet, there would be no limits to the genuinely important reportage that would flow, we were told.

There would be no way for the powerful to withhold the truth from the people.

At least that was the nirvana we were sold.

What did we get? Something that looks much closer to mass manipulation. A tsunami of ad stalking, intentionally fake news and social media-enabled demagogues expertly appropriating these very same tools by gaming mindless, ethically nil algorithms.

Meanwhile, masses of ordinary people + ubiquitous smartphones + omnipresent social media feeds seem, for the most part, to be resulting in a kind of mainstream attention deficit disorder.

Yes, there is citizen journalism — such as people recording and broadcasting everyday experiences of aggression, racism and sexism, for example. Experiences that might otherwise go unreported, and which are definitely underreported.

That is certainly important.

But there are also these telling moments of #hashtaggable ethical blackout. As a result of what? Let’s call it the lure of ‘citizen clickbait’ — as people use their devices and feeds to mimic the worst kind of tabloid celebrity gossip ‘journalism’ by turning their attention and high tech tools on strangers, with (apparently) no major motivation beyond the simple fact that they can. Because technology is enabling them.

Social norms and common courtesy should kick in and prevent this. But social media is pushing in an unequal and opposite direction, encouraging users to turn anything — even strangers’ lives — into raw material to be repackaged as ‘content’ and flung out for voyeuristic entertainment.

It’s life reflecting commerce. But a particularly insidious form of commerce that does not accept editorial let alone ethical responsibility, has few (if any) moral standards, and relies, for continued function, upon stripping away society’s collective sense of privacy in order that these self-styled ‘sharing’ (‘taking’ is closer to the mark) platforms can swell in size and profit.

But it’s even worse than that. Social media as a data-mining, ad-targeting enterprise relies upon eroding our belief in privacy. So these platforms worry away at that by trying to disrupt our understanding of what privacy means. Because if you were to consider what another person thinks or feels — even for a millisecond — you might not post whatever piece of ‘content’ you had in mind.

For the platforms it’s far better if you just forget to think.

Facebook’s business is all about applying engineering ingenuity to eradicate the thoughtful friction of personal and societal conscience.

That’s why, for instance, it uses facial recognition technology to automate content identification — meaning there’s almost no opportunity for individual conscience to kick in and pipe up to quietly suggest that publicly tagging others in a piece of content isn’t actually the right thing to do.

Because it’s polite to ask permission first.

But Facebook’s antisocial automation pushes people away from thinking to ask for permission. There’s no button provided for that. The platform encourages us to forget all about the existence of common courtesies.

So we should not be at all surprised that such fundamental abuses of corporate power are themselves trickling down to infect the people who use and are exposed to these platforms’ skewed norms.

Viral episodes like #PlaneBae demonstrate that the same sense of entitlement to private information is being actively passed on to the users these platforms prey on and feed off, and is then getting beamed out, like radiation, to harm the people around them.

The damage is collective when societal norms are undermined.

#PlaneBae

Social media’s ubiquity means almost everyone works in marketing these days. Most people are marketing their own lives — posting photos of their pets, their kids, the latte they had this morning, the hipster gym where they work out — having been nudged to perform this unpaid labor by the platforms that profit from it.

The irony is that most of this work is being done for free. Only the platforms are being paid. Though there are some people making a very modern living: the new breed of ‘life sharers’ who willingly polish, package and post their professional existence as a brand of aspirational lifestyle marketing.

Social media’s gift to the world is that anyone can be a self-styled model now, and every passing moment a fashion shoot for hire, thanks to the largesse of highly accessible social media platforms providing almost anyone who wants it with their own self-promoting shop window on the world. Plus all the promotional tools they could ever need.

Just step up to the glass and shoot.

And then your vacation beauty spot becomes just another backdrop for the next aspirational selfie. Although those aquamarine waters can’t be allowed to dampen or disrupt photo-coifed tresses, nor sand get in the camera kit. In any case, the makeup took hours to apply and there’s the next selfie to take…

What does the unchronicled life of these professional platform performers look like? A mess of preparation for projecting perfection, presumably, with life’s quotidian business stuffed higgledy-piggledy into the margins, where they actually sweat and work to deliver the lie of a lifestyle dream.

Because these are also fakes — beautiful fakes, but fakes nonetheless.

We live in an age of entitled pretence. And while it may be totally fine for an individual to construct a fictional narrative that dresses up the substance of their existence, it’s certainly not okay to pull anyone else into your pantomime. Not without asking permission first.

But the problem is that social media is now so powerfully omnipresent its center of gravity is actively trying to pull everyone in — and its antisocial impacts frequently spill out and over the rest of us. And they rarely if ever ask for consent.

What about the people who don’t want their lives to be appropriated as digital window dressing? Who weren’t asking for their identity to be held up for public consumption? Who don’t want to participate in this game at all, neither to personally profit from it nor to have their privacy trampled by it?

The problem is the push and pull of platforms against privacy has become so aggressive, so virulent, that societal norms that protect and benefit us all — like empathy, like respect — are getting squeezed and sucked in.

The ugliness is especially visible in these ‘viral’ moments when other people’s lives are snatched and consumed voraciously on the hoof — as yet more content for rapacious feeds.

#PlaneBae

Think too of the fitness celebrity who posted a creepshot + commentary about a less slim person working out at their gym.

Or the YouTuber parents who monetize videos of their kids’ distress.

Or the men who post creepshots of women eating in public — and try to claim it’s an online art project rather than what it actually is: A privacy violation and misogynistic attack.

Or, on a public street in London one day, I saw a couple of giggling teenage girls watching a man at a bus stop who was clearly mentally unwell. Pulling out a smartphone, one girl hissed to the other: “We’ve got to put this on YouTube.”

For platforms built by technologists without thought for anything other than growth, everything is a potential spectacle. Everything is a potential post.

So they press on their users to think less. And they profit at society’s expense.

It’s only now, after social media has embedded itself everywhere, that platforms are being called out for their moral vacuum; for building systems that encourage abject mindlessness in users — and serve up content so bleak it represents a form of visual cancer.

#PlaneBae

Humans have always told stories. Weaving our own narratives is both how we communicate and how we make sense of personal experience, creating order out of events that are often disorderly, random, even chaotic.

The human condition demands a degree of pattern-spotting for survival’s sake; so we can pick our individual path out of the gloom.

But platforms are exploiting that innate aspect of our character. And we, as individuals, need to get much, much better at spotting what they’re doing to us.

We need to recognize how they are manipulating us; what they are encouraging us to do — with each new feature nudge and dark pattern design choice.

We need to understand their underlying pull. The fact they profit by setting us as spies against each other. We need to wake up, personally and collectively, to social media’s antisocial impacts.

Perspective should not have to come at the expense of other people getting hurt.

This week the woman whose privacy was thoughtlessly repackaged as public entertainment when she was branded and broadcast as #PlaneBae, and who has suffered harassment and yet more unwelcome attention as a direct result, gave a statement to Business Insider.

“#PlaneBae is not a romance — it is a digital-age cautionary tale about privacy, identity, ethics and consent,” she writes. “Please continue to respect my privacy, and my desire to remain anonymous.”

And as a strategy to push against the antisocial incursions of social media, remembering to respect people’s privacy is a great place to start.

Facebook reportedly hires an AI chip head from Google

Facebook is continuing to devote more resources to the development of AI-focused chips, bringing aboard a senior director of engineering from Google who worked on chips for Google’s products to lead its efforts, Bloomberg reports.

We’ve reached out to Google and Facebook for confirmation.

Shahriar Rabii spent nearly seven years at Google before joining Facebook this month as its VP and Head of Silicon, according to his LinkedIn profile.

Facebook’s work on AI-focused custom silicon has been the topic of rumors and reports over the past several months. It’s undoubtedly a bold direction for the company, though it’s unclear how interested Facebook is in creating custom silicon for consumer devices or whether it’s more focused on building for its servers as it also looks to accelerate its own research efforts.

Rabii’s work at Google seemed to encompass a good deal of work on chips for consumer devices, specifically work on the Pixel 2’s Visual Core chip which brought machine learning intelligence to the device’s camera.

Facebook has long held hardware ambitions, and its Building 8 hardware division appears closer than ever to shipping its first products as the company’s rumored work on a touchscreen smart speaker to rival Amazon’s Echo Show continues. Meanwhile, Facebook has also continued building virtual reality hardware on Qualcomm’s mobile chipsets.

As Silicon Valley’s top tech companies continue to compete aggressively for talent amongst artificial intelligence experts, this marks another departure from Google. Earlier this year, Apple poached Google’s AI head.

As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation

Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own.

And with technologists pushing to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the growing digital surveillance of our physical world is prompting some of the companies that stand to benefit most to call on government to provide guidelines for how the incredibly powerful tools they’ve created can be used.

That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

In what companies have framed as a quest to create “better,” more efficient and more targeted services for consumers, they have tried to solve the problem of user access by moving to increasingly passive (for the user) and intrusive (by the company) forms of identification, culminating in features like Apple’s Face ID and the frivolous filters that Snap overlays on users’ selfies.

Those same technologies are also being used by security and police forces in ways that have gotten technology companies into trouble with consumers or their own staff. Amazon has been called to task for its work with law enforcement, Microsoft’s own technologies have been used to help identify immigrants at the border (indirectly aiding in the separation of families and the virtual and physical lockdown of America against most forms of immigration) and Google faced an internal company revolt over the facial recognition work it was doing for the Pentagon.

Smith posits this nightmare scenario:

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

What’s impressive about this is the intimation that it isn’t already happening (and that Microsoft isn’t enabling it). Across the world, governments are deploying these tools right now as ways to control their populations (the ubiquitous surveillance state that China has assembled, and is investing billions of dollars to upgrade, is just the most obvious example).

In this moment when corporate innovation and state power are merging in ways that consumers are only just beginning to fathom, executives who have to answer to a buying public are now pleading for government to set up some rails. Late capitalism is weird.

But Smith’s advice is prescient. Companies do need to get ahead of the havoc their innovations can wreak on the world, and they can look good while doing nothing by hiding their own abdication of responsibility on the issue behind the government’s.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act,” Smith writes.

The fact is, something does, indeed, need to be done.

As Smith writes, “The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.”

All of this takes on faith that the technology actually works as advertised. And the problem is, right now, it doesn’t.

In an op-ed earlier this month, Brian Brackeen, the chief executive of a startup working on facial recognition technologies, pulled back the curtains on the industry’s not-so-secret huge problem.

Facial recognition technology, used in the identification of suspects, negatively affects people of color. To deny this fact would be a lie.

And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.

There’s really no “nice” way to acknowledge these things.

Smith himself admits that the technology has a long way to go before it’s perfect. But the implications of applying imperfect technologies are vast, and in the case of law enforcement, not academic. Designating an innocent bystander or civilian as a criminal suspect influences how police approach an individual.

Those instances, even if they amount to only a handful, would lead me to argue that these technologies have no business being deployed in security situations.

As Smith himself notes, “Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way.”

While Smith lays out the problem effectively, he’s less clear on the solution. He’s called for a government “expert commission” to be empaneled as a first step on the road to eventual federal regulation.

That we’ve gotten here is an indication of how bad things actually are. It’s rare that a tech company has pleaded so nakedly for government intervention into an aspect of its business.

But here’s Smith writing, “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”

Given the current state of affairs in Washington, Smith may be asking too much. Which is why perhaps the most interesting — and admirable — call from Smith in his post is for technology companies to slow their roll.

“We recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology,” writes Smith. “Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. ‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”

Machine learning boosts Swiss startup’s shot at human-powered land speed record

The current world speed record for riding a bike down a straight, flat road was set in 2012 by a Dutch team, but the Swiss have a plan to topple their rivals — with a little help from machine learning. An algorithm trained on aerodynamics could streamline their bike, perhaps cutting air resistance by enough to set a new record.

Currently the record is held by Sebastiaan Bowier, who set his mark of 133.78 km/h, or just over 83 mph, in 2012. It’s hard to imagine how his bike, which looked more like a tiny landbound rocket than any kind of bicycle, could be significantly improved on.

But every little bit counts when records are measured down to a hundredth of a unit, and anyway, who knows whether some strange new shape might totally change the game?

To pursue this, researchers at the École Polytechnique Fédérale de Lausanne’s Computer Vision Laboratory developed a machine learning algorithm that, trained on 3D shapes and their aerodynamic qualities, “learns to develop an intuition about the laws of physics,” as the university’s Pierre Baqué said.

“The standard machine learning algorithms we use to work with in our lab take images as input,” he explained in an EPFL video. “An image is a very well-structured signal that is very easy to handle by a machine-learning algorithm. However, for engineers working in this domain, they use what we call a mesh. A mesh is a very large graph with a lot of nodes that is not very convenient to handle.”

Nevertheless, the team managed to design a convolutional neural network that can sort through countless shapes and automatically determine which should (in theory) provide the very best aerodynamic profile.

“Our program results in designs that are sometimes 5-20 percent more aerodynamic than conventional methods,” Baqué said. “But even more importantly, it can be used in certain situations that conventional methods can’t. The shapes used in training the program can be very different from the standard shapes for a given object. That gives it a great deal of flexibility.”

That means the algorithm isn’t limited to slight variations on established designs; it’s also flexible enough to take on other fluid dynamics problems, like wing shapes, windmill blades or cars.
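The "learned intuition" idea behind such a system can be sketched as a surrogate model: train a network on shapes with known aerodynamic scores, then use the cheap learned model to screen thousands of candidate shapes instead of running an expensive simulation for each. Everything below is an illustrative stand-in under stated assumptions: a two-parameter shape, a synthetic "drag" function in place of real CFD, and a tiny NumPy network; EPFL's actual system is a convolutional network operating on full 3D meshes.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_drag(params):
    """Synthetic stand-in for an expensive aerodynamic simulation."""
    length, bluntness = params[..., 0], params[..., 1]
    return (bluntness - 0.2) ** 2 + 0.1 * (length - 2.0) ** 2 + 0.05

# A small "simulated" training set of shapes and their drag values.
X = rng.uniform([1.0, 0.0], [3.0, 1.0], size=(200, 2))
y = true_drag(X)

# One-hidden-layer network trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y                           # squared-error gradient
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean()
    gh = err[:, None] * W2.T * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    for p, g in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
        p -= 0.2 * g

def surrogate(params):
    """Cheap learned approximation of the drag function."""
    return (np.tanh(params @ W1 + b1) @ W2 + b2).ravel()

# Screen many candidate shapes at once and keep the predicted best.
candidates = rng.uniform([1.0, 0.0], [3.0, 1.0], size=(5000, 2))
best = candidates[np.argmin(surrogate(candidates))]
```

The point of the pattern is the last three lines: once trained, the surrogate evaluates thousands of designs in milliseconds, which is what makes searching unusual shapes practical at all.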

The tech has been spun out into a separate company, Neural Concept, of which Baqué is the CEO. It was presented today at the International Conference on Machine Learning in Stockholm.

A team from the Annecy University Institute of Technology will attempt to apply the computer-honed model in person at the World Human Powered Speed Challenge in Nevada this September — after all, no matter how much computer assistance there is, as the name says, it’s still powered by a human.

A new hope: AI for news media

To put it mildly, news media has been on the sidelines in AI development. As a consequence, in the age of AI-powered personalized interfaces, news organizations no longer get to define what’s real news or, even more importantly, what’s truthful or trustworthy. Today, social media platforms, search engines and content aggregators control user flows to media content and directly affect what kind of news content is created. As a result, the future of news media is no longer in its own hands. Case closed?

The (Death) Valley of news digitalization

There’s a history: News media hasn’t been quick or innovative enough to become a change maker in the digital world. Historically, news used to be the signal that attracted and guided people (and advertisers) in its own right. The internet and the exponential explosion of available information online changed that for good.

In the early internet, portals channeled people to the content in which they were interested. Remember Yahoo? As the amount of information online increased, search engines took over, changing the way people found relevant information and news content. As mobile technologies and interfaces grew more prominent, social media, with the News Feed and tweets, took over again, this time emphasizing the role of our social networks in how media content was discovered.

Significantly, news media didn’t play an active role in any of these key developments. Quite the opposite: it was late in utilizing the rise of the internet, search engines, content aggregators, the mobile experience, social media and other new digital solutions to its own benefit.

The ad business followed suit. First, news organizations let Google handle searches on their websites, and the emerging search champion got a unique chance to index media content. With the rise of social media, news organizations, especially in the U.S., turned to Facebook and Twitter to break news rather than focusing on their own breaking-news features. As a consequence, news media lost its core business to the rising giants of the new digital economy.

To put it very strongly, news media hasn’t ever been fully digital in its approach to user experience, business logic or content creation. Think paywalls and e-newspapers for the iPad! The internet and digitalization forced the news media to change, but the change was reactive, not proactive. The old, partly obsolete, paradigms of content creation, audience understanding, user experience and content distribution still actively affect the way news content is created and distributed today (and to be 110 percent clear — this is not about the storytelling and the unbelievable creativity and hard work done by ingenious journalists all around the globe).

Due to these developments, today’s algorithmic gatekeepers like Google and Facebook dominate the information flows and the ad business previously dominated by the news media. Significantly, the personalization and ad-driven business logic of today’s internet behemoths aren’t designed to let the news media flourish on its own terms ever again.

From observers to change makers

News media has been reporting on the rise of the new algorithmic world order as an outside observer. And the reporting has been thorough, veracious and enlightening; the stories told by the news media have had a concrete effect on how people perceive our continuously evolving digital realities.

However, as the information flows have moved into the algorithmic black boxes controlled by the internet giants, it has become obvious that it’s very difficult or close to impossible for an outside observer to understand the dynamics that affect how or why a certain piece of information becomes newsworthy and widely spread. For the mainstream news media, Trump’s rise to the presidency came as a “surprise,” and this is but one example of the new dynamics of today’s digital reality.

And here’s a paradox. As the information moves closer to us, to the mobile lock screen and other surfaces that are available and accessible for us all the time, its origins and background motives become more ambiguous than ever.


Social media, combined with self-reinforcing feedback loops built on the latest machine learning methods and simultaneously vulnerable to malicious or unintended gaming, has led us to the world of “alternative facts” and fake news. In this era of automated troll hordes and algorithmic manipulation, the ideals of news media sound vitally important and relevant: distribution of truthful and relevant information; nurturing freedom of speech; giving voice to the unheard; widening and enriching people’s worldviews; supporting democracy.

But the driving values of news media won’t ever be fully realized in the algorithmic reality if the news media itself isn’t actively developing solutions that shape the algorithmic reality.

The current course won’t be changed by commenting on or criticizing the actions of the ruling algorithmic platforms. #ChangeFacebook is not on the table for news media. New AI-powered Google News is controlled and developed by Google, based on its company culture and values, and thus can’t be directly affected by the news organizations.

After the rise of the internet and today’s algorithmic rule, we are again on the verge of a significant paradigm shift. Machine learning-powered AI solutions will have an increasingly significant impact on our digital and physical realities. This is again a time to affect the power balance, to affect the direction of digital development and to change the way we think when we think about news — a time for news media to transform from an outside observer into a change maker.

AI solutions for news media

If the news media wants to affect how news content is created, developed, presented and delivered to us in the future, it needs to take an active role in AI development. If news organizations want to understand the way data and information are constantly affected and manipulated in digital environments, they need to start embracing the possibilities of machine learning.

But how can news media ever compete with today’s AI leaders?

News organizations have one thing that Google, Facebook and the other big internet players don’t yet have: they own the content creation process, and thus have a deep and detailed understanding of their content. By focusing on appropriate AI solutions, they can combine data about content creation and content consumption in a unique and powerful way.

News organizations need to use AI to augment you and me. And they need to augment journalists and the newsroom. What does this mean?

Augment the user-citizen

Personalization has been around for a while, but has it ever been designed and developed on the news media’s own terms? The goal for news media is to combine great content and a personalized user experience to build a seamless and meaningful news experience that is in line with journalistic principles and values.

For news, emerging real-time machine learning methods, such as online learning, offer new possibilities for understanding a user’s preferences in their real-life context. These technologies provide new tools to break news and tell stories directly on your lock screen.

An intelligent notification system sending personalized news notifications could be used to optimize content and content distribution on the fly by understanding the impact of news content in real time on the lock screens of people’s mobile devices. The system could personalize the way the content is presented, whether serving voice, video, photos, augmented reality material or visualizations, based on users’ preferences and context.
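As a concrete illustration, the format-picking logic described above could be approximated with a simple online-learning loop. The sketch below is a minimal epsilon-greedy bandit, not a production notification system; the `FormatBandit` class, the user IDs and the format names are all hypothetical, invented for this example.

```python
import random
from collections import defaultdict

class FormatBandit:
    """Epsilon-greedy bandit that learns, per user, which notification
    format (text, video, audio, ...) earns the most engagement."""

    def __init__(self, formats, epsilon=0.1):
        self.formats = list(formats)
        self.epsilon = epsilon
        # per (user, format): [taps, impressions]
        self.stats = defaultdict(lambda: [0, 0])

    def choose(self, user):
        """Pick a format: usually the best-performing one, occasionally a random one."""
        if random.random() < self.epsilon:  # explore
            return random.choice(self.formats)
        return max(self.formats, key=lambda f: self._rate(user, f))  # exploit

    def record(self, user, fmt, tapped):
        """Update stats after each notification: did the user tap it?"""
        taps, shown = self.stats[(user, fmt)]
        self.stats[(user, fmt)] = [taps + int(tapped), shown + 1]

    def _rate(self, user, fmt):
        taps, shown = self.stats[(user, fmt)]
        return taps / shown if shown else 0.0

bandit = FormatBandit(["text", "video", "audio"])
bandit.record("u1", "video", tapped=True)
bandit.record("u1", "text", tapped=False)
```

The key property is that the model updates after every single impression, so the system adapts on the fly rather than waiting for a batch retraining cycle.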

Significantly, machine learning can be utilized to create new forms of interaction between people, journalists and the newsroom. Automatically moderated commenting is just one example already in use today. Imagine if it were possible to build interactions directly on the lock screen that let journalists better understand how content is consumed, while capturing in real time the emotions a story conveys.

By opening up the algorithms and data usage through data visualizations and in-depth articles, the news media could create a new, truly human-centered form of personalization that lets the user know how personalization is done and how it’s used to affect the news experience.

And let’s stop blaming algorithms when it comes to filter bubbles. Algorithms can be used to diversify your news experience. By understanding what you see, it’s also possible to understand what you haven’t seen before. By turning some of the personalization logic upside down, news organizations could create a machine learning-powered recommendation engine that amplifies diversity.
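A minimal sketch of that upside-down logic: rank candidate articles by how unfamiliar their topic is to the reader, instead of how familiar. The function and data below are hypothetical, invented for illustration; a real system would use richer topic models than raw read counts.

```python
def diversity_recommend(history_topics, candidates, k=3):
    """Rank candidate articles by how *unfamiliar* their topics are,
    inverting the usual 'more of the same' personalization logic.

    history_topics: dict mapping topic -> how often the user has read it
    candidates: list of (article_id, topic) pairs
    """
    total = sum(history_topics.values()) or 1

    def novelty(item):
        _, topic = item
        # topics the user has rarely or never seen score higher
        return 1.0 - history_topics.get(topic, 0) / total

    ranked = sorted(candidates, key=novelty, reverse=True)
    return [article_id for article_id, _ in ranked[:k]]

history = {"politics": 8, "sports": 2}
picks = diversity_recommend(
    history,
    [("a1", "politics"), ("a2", "science"), ("a3", "sports")])
# articles on topics the reader hasn't seen rank first
```

In practice a diversity signal like this would be blended with relevance scoring rather than replacing it, so recommendations stay interesting without becoming random.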

Augment the journalist

In the domain of abstracting and contextualizing new information and unpredictable (news) events, human intelligence is still invincible.

The deep content understanding of journalists can be used to teach an AI-powered news assistant system that would become better over time by learning directly from the journalists using it, simultaneously taking into account the data that flows from the content consumption.

A smart news assistant could point out which kinds of content are connected implicitly and explicitly, for example based on their topic, tone of voice or other metadata such as author or location. Such an intelligent news assistant could help journalists understand their content even better by showing which previous content relates to a now-trending topic or breaking news. Stories could be anchored into a meaningful context faster and more accurately.
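One simple way such an assistant could score relatedness is set overlap on metadata tags. The sketch below uses Jaccard similarity; the function name, the tags and the threshold are hypothetical illustrations, not a description of any existing newsroom system.

```python
def related_articles(target_tags, archive, threshold=0.25):
    """Surface archive pieces whose metadata tags (topic, author,
    location, tone ...) overlap with a breaking story, ranked by
    Jaccard similarity: |A ∩ B| / |A ∪ B|."""

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    scored = [(aid, jaccard(target_tags, tags)) for aid, tags in archive.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [aid for aid, score in scored if score >= threshold]

breaking = {"elections", "helsinki", "analysis"}
archive = {
    "a1": {"elections", "helsinki", "interview"},
    "a2": {"sports", "helsinki"},
    "a3": {"weather", "oslo"},
}
related = related_articles(breaking, archive)
```

Tag overlap is only a starting point; learned embeddings over full article text would capture connections that shared metadata misses.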


AI solutions could be used to help journalists gather and understand data and information faster and more thoroughly. An intelligent news assistant can remind the journalist if there’s something important that should be covered next week or coming holiday season, for example by recognizing trends in social media or search queries or highlighting patterns in historic coverage. Simultaneously, AI solutions will become increasingly essential for fact-checking and in detecting content manipulation, e.g. recognizing faked images and videos.

An automated content production system can create and annotate content automatically or semi-automatically, for example by creating draft versions from an audio interview that are then finished by human journalists. Such a system could be developed further to create news compilations from different content pieces and formats (text, audio, video, image, visualization, AR experiences and external annotations) or to create hyper-personalized, atomized news content such as personalized notifications.

The news assistant could also recommend which article should be published next via an editorial push notification, while suggesting the best time to send that notification to end users. And as a reminder, even though Google’s Duplex is quite a feat, natural language processing (NLP) is far from solved. Human and machine intelligence can be brought together in the very core of the content production and language understanding process. Augmenting the linguistic superpowers of journalists with AI solutions would empower NLP research and development in new ways.

Augment the newsroom

Innovation and digitalization don’t change the culture of news media unless they are brought into the very core of the news business: concretely, into the daily practices of the newsroom and of business development, such as audience understanding.

One could start thinking of the news organization as a system and platform that provides different personalized mini-products to different people and segments of people. Newsrooms could get deeper into relevant niche topics by utilizing automated or semi-automated content production. And the more topics covered and the deeper the reporting, the better the newsroom can produce personalized mini-products, such as personalized notifications or content compilations, to different people and segments.

In a world where it’s increasingly hard to distinguish a real thing from fake, building trust through self-reflection and transparency becomes more important than ever. AI solutions can be used to create tools and practices that enable the news organization and newsroom to understand its own activities and their effects more precisely than ever. At the same time, the same tools can be used to build trust by opening the newsroom and its activities to a wider audience.

Concretely, AI solutions could detect and analyze possible hidden biases in reporting and storytelling. For example, are some groups of people over-represented in certain topics or materials? What has been the tone of voice or the angle on challenging, multi-faceted topics or widely covered news? Are most of the photos depicting people with a certain ethnic background? Are there important topics or voices that are not represented in the reporting at all? AI solutions also can be used to analyze and understand what kind of content works now and what has worked before, thus giving context-specific insights to create better content in the future.
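Such an audit could start as simply as tallying which groups appear in coverage of each topic. The sketch below is a hypothetical illustration (the data schema and group labels are invented); a real bias analysis would need NLP to extract mentions from article text rather than relying on hand-annotated lists.

```python
from collections import Counter

def representation_report(articles):
    """Tally which groups appear in coverage of each topic, to flag
    possible over- or under-representation for editorial review.

    articles: list of dicts with 'topic' and 'groups_mentioned' keys.
    Returns {topic: {group: mention_count}}.
    """
    report = {}
    for article in articles:
        counts = report.setdefault(article["topic"], Counter())
        counts.update(article["groups_mentioned"])
    return {topic: dict(counts) for topic, counts in report.items()}

articles = [
    {"topic": "economy",
     "groups_mentioned": ["executives", "executives", "workers"]},
    {"topic": "economy",
     "groups_mentioned": ["executives"]},
]
report = representation_report(articles)
# a 3-to-1 skew toward one group would be a flag for editorial review
```

The value is not in the counting itself but in surfacing the counts to the newsroom, where humans decide whether a skew is justified by the story or a blind spot.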

AI solutions would help the newsroom reflect on its reporting and storytelling and their effects more thoroughly, also giving it new tools for decision-making, e.g. for determining what should be covered and why.

Also, such data and information could be visualized to make the impact of reporting and content creation more tangible and accessible for the whole newsroom. Thus, the entire editorial and journalistic decision-making process can become more open and transparent, affecting the principles of news organizations from daily routines to wider strategic thinking and management.

Tomorrow’s news organizations will be part human and part machine. This transformation, augmenting human intelligence with machines, will be crucial for the future of news media. To maintain their integrity and trustworthiness, news organizations themselves need to be able to define how their AI solutions are built and used. And the only way to fully realize this is for news organizations to start building their own AI solutions. The sooner, the better — for us all.

Apple Updates Leadership Page to Include New AI Chief John Giannandrea

Apple today updated its Apple Leadership page to include John Giannandrea, who now serves as Apple’s Chief of Machine Learning and AI Strategy.

Apple hired Giannandrea back in April, stealing him away from Google where he ran the search and artificial intelligence unit.



Giannandrea is leading Apple’s AI and machine learning teams, reporting directly to Apple CEO Tim Cook. He has taken over leadership of Siri, which was previously overseen by software engineering chief Craig Federighi.

Apple told TechCrunch that it is combining its Core ML and Siri teams under Giannandrea. The structure of the two teams will remain intact, but both will now answer to Giannandrea.

Under his leadership, Apple will continue to build its AI/ML teams, says TechCrunch, focusing on general computation in the cloud alongside data-sensitive on-device computations.

Giannandrea spent eight years at Google before joining Apple, and before that, he founded Tellme Networks and Metaweb Technologies.

Apple’s hiring of Giannandrea in April came amid ongoing criticism of Siri, which many have claimed has serious shortcomings in comparison to AI offerings from companies like Microsoft, Amazon, and Google due to Apple’s focus on privacy.

In 2018, Apple is improving Siri through a new Siri Shortcuts feature coming in iOS 12, which is designed to let users create multi-step tasks using both first- and third-party apps that can be activated through Siri.


Apple’s Shortcuts will flip the switch on Siri’s potential

At WWDC, Apple pitched Shortcuts as a way to “take advantage of the power of apps” and “expose quick actions to Siri.” These will be suggested by the OS, can be given unique voice commands, and will even be customizable with a dedicated Shortcuts app.

But since this new feature won’t let Siri interpret everything, many have been lamenting that Siri didn’t get much better — and is still lacking compared to Google Assistant or Amazon Echo.

But to ignore Shortcuts would be missing out on the bigger picture. Apple’s strengths have always been the device ecosystem and the apps that run on them.

With Shortcuts, both play a major role in how Siri will prove to be a truly useful assistant and not just a digital voice to talk to.

Your Apple devices just got better

For many, voice assistants are a nice-to-have, but not a need-to-have.

It’s undeniably convenient to get facts by speaking to the air, turning on the lights without lifting a finger, or triggering a timer or text message – but so far, studies have shown people don’t use much more than these on a regular basis.

People don’t often do more than that because the assistants aren’t really ready for complex tasks yet, and when your assistant is limited to tasks inside your home or commands spoken into your phone, the drawbacks prevent you from going deep.

If you prefer Alexa, you get more devices, better reliability, and a breadth of skills, but there’s not a great phone or tablet experience you can use alongside your Echo. If you prefer to have Google’s Assistant everywhere, you must be all in on the Android and Home ecosystem to get the full experience too.

Plus, with either option, there are privacy concerns baked into how both work on a fundamental level – over the web.

In Apple’s ecosystem, you have Siri on iPhone, iPad, Apple Watch, AirPods, HomePod, CarPlay, and any Mac. Add in Shortcuts on each of those devices (except the Mac, though it still has Automator) and suddenly you have a plethora of places to execute all your commands entirely by voice.

Each accessory that Apple users own will get upgraded, giving Siri new ways to fulfill the 10 billion (and counting) requests people make each month, according to Craig Federighi’s statement on-stage at WWDC.

But even more important than all the places where you can use your assistant is how – with Shortcuts, Siri gets even better with each new app that people download. There’s the other key difference: the App Store.

Actions are the most important part of your apps

iOS has always had a vibrant community of developers who create powerful, top-notch applications that push the system to its limits and take advantage of the ever-increasing power these mobile devices have.

Shortcuts opens up those capabilities to Siri – every action you take in an app can be shared out with Siri, letting people interact right there inline or using only their voice, with the app running everything smoothly in the background.

Plus, the functional approach that Apple is taking with Siri creates new opportunities for developers to provide utility to people instead of requiring their attention. The suggestions feature of Shortcuts rewards “acceleration,” more often surfacing the apps that save users the most time and get the most use.

This opens the door to more specialized types of apps that don’t necessarily have to grow a huge audience and serve them ads – if you can make something that helps people, Shortcuts can help them use your app more than ever before (and without as much effort). Developers can make a great experience for when people visit the app, but also focus on actually doing something useful too.

This isn’t a virtual assistant that lives in the cloud, but a digital helper that can pair up with the apps uniquely taking advantage of Apple’s hardware and software capabilities to truly improve your use of the device.

In the most groan-inducing way possible, “there’s an app for that” is back and more important than ever. Not only are apps the centerpiece of the Siri experience, but it’s their capabilities that extend Siri’s – the better the apps you have, the better Siri can be.

Control is at your fingertips

Importantly, Siri gets all of this Shortcuts power while keeping the control in each person’s hands.

All of the information provided to the system is securely passed along by individual apps – if something doesn’t look right, you can just delete the corresponding app and the information is gone.

Siri will make recommendations based on activities deemed relevant by the apps themselves as well, so over-active suggestions shouldn’t be common (unless you’re way too active in some apps, in which case they added Screen Time for you too).

Each of the voice commands is custom per user as well, so people can ignore their apps’ suggestions and set up phrases to their own liking. This means nothing is already “taken” because somebody signed up for the skill first (unless you’ve already used it yourself, of course).

Also, Shortcuts don’t require the web to work – offline, the voice triggers might not function, but the suggestions and the Shortcuts app give you a place to use your assistant voicelessly. And importantly, Shortcuts can use the full power of the web when they need to.

This user-centric approach paired with the technical aspects of how Shortcuts works gives Apple’s assistant a leg up for any consumers who find privacy important. Essentially, Apple devices are only listening for “Hey Siri”, then the available Siri domains + your own custom trigger phrases.

Without exposing your information to the world or teaching a robot to understand everything, Apple gave Siri a slew of capabilities that in many ways can’t be matched. With Shortcuts, it’s the apps, the operating system, and the variety of hardware that will make Siri uniquely qualified come this fall.

Plus, the Shortcuts app will provide a deeper experience for those who want to chain together actions and customize their own shortcuts.

There’s lots more under the hood to experiment with, but this will allow anyone to tweak & prod their Siri commands until they have a small army of custom assistant tasks at the ready.

Hey Siri, let’s get started

Siri doesn’t know all, can’t perform every task you bestow upon it, and won’t make somewhat uncanny phone calls on your behalf.

But instead of spending time conversing with a somewhat faked “artificial intelligence”, Shortcuts will help people use Siri as an actual digital assistant – a computer to help them get things done better than they might’ve otherwise.

With Siri’s new skills extending to each of your Apple products (except for Apple TV and the Mac, but maybe one day?), every new device you get and every new app you download can reveal another way to take advantage of what this technology can offer.

This broadening of Siri may take some time to get used to – it will be about finding the right place for it in your life.

As you go about your apps, you’ll start seeing and using suggestions. You’ll set up a few voice commands, then you’ll do something like kick off a truly useful shortcut from your Apple Watch without your phone connected and you’ll realize the potential.

This is a real digital assistant, your apps know how to work with it, and it’s already on many of your Apple devices. Now, it’s time to actually make use of it.