How Google Aims To Dominate Artificial Intelligence

The search giant is making its AI open source so anyone can use it

Last week, Google announced that it’s beginning to use machine learning in your email (if you use the Inbox app, which is separate from Gmail), and yes, it’s built on TensorFlow, according to Alex Gawley, product director for Gmail.

“We started to see some of the power of the neural nets our research team was building,” Gawley says. “That it might just be possible for us to help with more than just understanding and organizing. It might just be possible for us to help with things like writing mail.”

The feature is called Smart Reply: one recurrent neural network reads your email and hands it off to a second, which generates three potential responses. You choose one, and the email is sent. But email is just as sensitive as photos, if not more so in some cases.
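The hand-off between the two networks can be sketched in miniature. This is only a toy illustration of the pipeline's shape, not Google's model: the real system uses two recurrent neural networks, while the stand-in "encoder" and "decoder" below are simple keyword lookups, and all the intents and canned replies are invented for the example.

```python
# Toy sketch of the two-stage Smart Reply pipeline described above.
# The real encoder/decoder are recurrent neural networks; these
# stand-ins just make the structure of the hand-off visible.

def encode(email_text):
    """Stand-in for the first network: reduce the email to a rough intent."""
    text = email_text.lower()
    if "meeting" in text or "schedule" in text:
        return "scheduling"
    if "thanks" in text or "thank you" in text:
        return "gratitude"
    return "general"

def decode(intent):
    """Stand-in for the second network: propose three candidate replies."""
    candidates = {
        "scheduling": ["Works for me.", "Can we do tomorrow?", "I'll check my calendar."],
        "gratitude":  ["You're welcome!", "Happy to help.", "Any time."],
        "general":    ["Sounds good.", "Got it, thanks.", "Let me get back to you."],
    }
    return candidates[intent]

# One network reads the mail, the other generates three suggestions.
replies = decode(encode("Can we schedule a meeting for Friday?"))
print(replies)
```

The user then picks one of the three, which is the signal that flows back into the global model.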

No person at Google reads your emails, which is important to keep in mind. However, data on which choice you made is sent back to inform the global model; that's how it learns. From that, researchers can ask the machine to answer certain questions and, from there, understand what might need to be fixed in the neural networks. The software is the same for everybody, too.

Smart Reply also gives us a peek into how machine learning products are built within Google. The Inbox team deployed this feature internally, to test and feed the machine some ideas of what was right and wrong, a process called dogfooding. (The phrase comes from the idea of eating your own dog food, and is an example of why tech is bizarre.)

The whole team uses the feature, documents bugs, and gives it more and more information to learn from. When the app behaves correctly in the controlled environment and can be scaled, it's released.


Internal testing gives researchers a chance to catch potential bugs before the neural nets are exposed to mass quantities of data. For instance, at first Smart Reply wanted to tell everyone “I love you.” But that was only because “I love you” was a very common phrase in personal emails, so the machine thought it was important.

All this is an attempt to make your work easier; that's what most of the company's products aim to do, especially Google Now, the personal assistant of the Google world. The team's catchphrase is “just the right information at the right time.” Aparna Chennapragada, the head of Google Now, says that machine intelligence needs to be thoughtfully considered when it's built into the platform, so that it complements the human brain.

“You want to pick problems that are hard for humans, and easy for machines, not the other way around,” Chennapragada says. “It’s about making the technology do the heavy lifting for you, rather than doing it yourself.”

At the moment, the product is really just exploring how to use these methods to make your life easier. Chennapragada likens it to where voice recognition research sat five years ago: it was okay, but it didn't work every single time.

They're now looking at how to leverage three different kinds of data to serve you with tidbits of information. They see the phone as a “partial attention device,” and an ideal service shouldn't overload you with information.

“If you look at how each of us uses the phone, it’s between things that you’re doing in your life. It’s bite-sized pieces of information that you’re looking for,” Chennapragada says. “One of the things we think about is how we can work on your behalf, proactively, all the time.”

That’s the end goal in a smartphone with machine intelligence: the true digital personal assistant, ultimately predictive and vastly knowledgeable—the part of your brain you’re not born with.

So to get there, your phone needs data about you: your schedule, what you search for, what music you listen to, and where you go. This is the easiest kind of information to get, because it’s already on the device.

But when you combine that personal information with knowledge about the world, through Google's Knowledge Graph (more on this later), and data sourced from other users, the world is brought to your fingertips. You might not know how to navigate an airport, but your phone does.

Another example of the way Google uses data from lots of people is gauging road traffic. By pulling anonymous location data from phones on the highway, Google can tell that cars are moving slower than usual. The same goes for being able to tell when a restaurant or coffee shop is busy.
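The crowd-sensing idea above can be sketched with simple arithmetic: compare the average speed of anonymous location reports on a road segment against that segment's typical free-flow speed. This is a hedged illustration of the principle, not Google's actual method; the function name, the 0.6 threshold, and the sample numbers are all assumptions for the example.

```python
# Illustrative sketch of crowd-sourced traffic detection: flag a road
# segment as congested when the mean of anonymous speed reports falls
# well below the segment's typical speed. Threshold is an assumption.

def is_congested(reported_speeds_kmh, typical_speed_kmh, threshold=0.6):
    """Return True when the average reported speed is below a
    fraction (threshold) of the typical free-flow speed."""
    if not reported_speeds_kmh:
        return False  # no reports, no claim about traffic
    avg = sum(reported_speeds_kmh) / len(reported_speeds_kmh)
    return avg < threshold * typical_speed_kmh

print(is_congested([25, 30, 20], typical_speed_kmh=100))   # slow highway traffic
print(is_congested([95, 100, 90], typical_speed_kmh=100))  # free flow
```

The same averaging-over-many-phones logic extends to estimating how busy a restaurant or coffee shop is at a given hour.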

Google Now represents the way Google approaches machine intelligence. The company is aware that a general intelligence model that can translate and tell you what's in a picture is years and years away, so in the meantime, it's creating a mosaic of tools that act in harmony to provide the best experience possible.

All Rights Reserved for Dave Gershgorn
