Keynote with G. Rajan, R. Monga, C. Thota, and T. Jordan (Ubiquity Dev Summit 2016)


[MUSIC PLAYING] [APPLAUSE] TIMOTHY JORDAN: Hi, everybody. Welcome to the Ubiquity
Dev Summit, day one. Before we get started, I
wanted to tell you a little bit about our code of conduct. We want everybody here to have
the best experience possible, and we value an
inclusive community. No matter your
background or experience, you’re welcome at the summit. And we encourage you, therefore,
to be excellent to each other and know that we have
a zero tolerance policy for any harassment of any kind. Our code of conduct is
posted around the venue and on the website. All right. And with that, I’d like to
get the conversation started with a colleague who couldn’t
be here in person today. [MUSIC PLAYING] VINT CERF: Hi. It’s Vint Cerf. I’m Google’s Chief
Internet Evangelist. And today, I want to rant
about the internet of things. You’re going to hear
a lot about this. And from me, you’re going
to hear several things. Specifically,
you’re going to hear that things need to interwork. Nobody wants a
million different hubs that they have to configure
for each different product from each different
manufacturer. We’re going to have to have some
international standards that will allow people to manage and
configure and control access to these devices. Which leads me to another point,
which has to do with security. It’s important that these
devices be updatable so that bugs and mistakes that
might leave them vulnerable can be corrected. But that also means that
when the device ingests a new load of
software, it has to be able to make sure that that
particular load is legitimate. So we need to
digitally sign things. We need strong authentication
so that a device that’s getting a new
software load knows where it came from and can
assume that it’s a safe load to install.
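To make the signing idea concrete, here is a minimal sketch of the kind of check Cerf is describing, assuming Ed25519 signatures and Python's third-party cryptography package; the key handling and image layout are illustrative only, not any particular platform's update format.

```python
# Sketch: verify that a new software load was signed by a trusted vendor key
# before installing it. Assumes the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def is_trusted_load(image: bytes, signature: bytes,
                    public_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if `image` was signed by the holder of `public_key`."""
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Demo only: in practice the private key stays with the vendor, and only the
# public key ships on the device.
vendor_key = ed25519.Ed25519PrivateKey.generate()
device_trusted_key = vendor_key.public_key()

firmware = b"new software load ..."
good_signature = vendor_key.sign(firmware)

assert is_trusted_load(firmware, good_signature, device_trusted_key)
assert not is_trusted_load(firmware + b"tampered", good_signature, device_trusted_key)
```

By the same token, I'm very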
concerned about privacy. And the reason for
this should be obvious. There’s a lot of
information that even sensors could collect,
like temperature sensors where you’re sampling
every five minutes. After a while– let’s say
after several months– you might be able to figure out
how many people live in a home, which rooms do they occupy,
what is their typical schedule for coming and going. But on the other side of that,
suppose the house is on fire, and the fire department
comes and wants to know where is everybody. Some of them may be unconscious. So if there’s a way to
tell who’s in which room, or at least that
someone is in a room, that might turn out
to be a lifesaving piece of information. So we really need to make
sure that as we design these devices that are part
of this internet of things, we pay close attention
to interoperability, strong authentication, privacy,
and security and safety so that the users
will have confidence that the devices they
buy from us and others will, in fact, be things
that they can rely upon. That’s a big burden on
everybody’s shoulders. But it’s a responsibility
and an obligation that we must undertake. Otherwise, no one will be
interested in buying and using the products that we
make and want to sell. So I hope you have
a great conference. I’m sorry I can’t
be there with you. But I hope I’ll
see you on the net. [APPLAUSE] TIMOTHY JORDAN: Thank you
to Vint Cerf for getting our conversation started. Now I’m going to continue
it with a little bit more detail in that context. Let’s start with this question. Have any of you thought
about what the living room of the future will look like? Lots of yeses. Give me some ideas. AUDIENCE: I’ll be
a couch potato. TIMOTHY JORDAN: I’ll
be a couch potato. What will the living room
of the future look like? Any other? Yes. AUDIENCE: Apps. TIMOTHY JORDAN: Apps. AUDIENCE: It will be automated. TIMOTHY JORDAN: Automated. AUDIENCE: [INAUDIBLE] TIMOTHY JORDAN:
It’ll be automated. It’ll know your favorites
and your preferences. These are all great ideas. And it’s interesting,
actually– when I usually ask this question, people
focus on the technology. And I think that’s
key for us to do. But here’s what our
colleagues at Nest came up with when
their CEO, Tony Fadell, asked them the same thing. I really love this picture,
and I’ll tell you why. It’s a little messy. It’s a little playful. But the most profound
thing is what it isn’t. There aren’t screens and
keyboards everywhere. You can hardly see
the technology at all. Instead, it fades
into the background, supporting the
users– this family– in many subtle and
effective ways. So when you think about
ubiquitous computing or the internet of things,
don’t think about technology getting in the way everywhere
or being overwhelming. Think of it blending in
and helping everywhere it makes sense and is
useful for your users. With that in mind, here’s
three pillars for us to start our journey together. And it’s been my experience that
the most successful companies that I’ve observed and
met in this space– well, these are the three things that
they’re most passionate about, the first of which
is interoperability. This is systems,
protocols, and IoT schemas that are based on open standards
and enable interoperability as well as modularity. The second is
security and privacy. And data control,
access control, and identity management
are fundamental, regardless of who made the
product or how users access it. And the last one
is human interface. This is all about
new paradigms for HCI that rely more on
natural metaphors. Let’s dive into a
little more detail on each of these three pillars–
first, interoperability. Historically, engineering teams
have built their own solutions to solve their
own problems, each creating their own
proprietary protocols and further isolating
their devices from the rest of the world. However, if you have
common IoT protocols, not only does this
mean less work for the device
manufacturers and app developers who no longer have to
keep solving the same problems over and over
again, it also means that these devices
can talk to each other and app developers can
integrate multiple devices into their feature set. Next is security and privacy. This is all about having
everything be secure and giving users the
right insight and options to control their
experience as appropriate. In a little more
detail, what that means is hardware and software
makers should provide security and privacy by default, building
in essential platform features, like over-the-air updates. We need to build user
trust at every turn so that they’re confident of
the security of their data. And we should provide
users with the controls to manage their digital
identity and data per sensor, per account,
per product, and per home. And finally, we
need to ask the user before sharing any data with
other devices or services. And of course, this
helps build user trust. The last pillar is
human interface. And this is all about new
paradigms for user interface and interaction design,
or you can think of it as a reliance on much older
paradigms of natural interface. And a common pitfall here is
to build an app walled garden. Apps are important, and we’ll
have them for some time yet. But think about
building a platform. And part of that is
through your app. And part of that is that
users can access their devices in a variety of other ways,
including physical interfaces and other apps and services. Also, here’s some
design principles for a really great
user experience on IoT. First, the user experience
should be so simple that it feels effortless. In the long run, users
always prefer simplicity and reliability over features. And whatever user
problem you solve, make sure to focus
on that and never let anything get in
the way of that effort. A great example of
this is messaging. And you’ll see this on
a variety of platforms. I, of course, love the
experience on Android Wear. In particular, I love
just being able to see the message on my watch
and be able to respond to it with voice. It’s so simple. It just makes sense. The next one is contextual. Now, contextual is all
about being relevant. You want to take into
account the time, location, and activity the user is
involved in so you can provide them with the most useful
experience wherever and whenever it is. A great example of this is
you can imagine a shopping list, where the user
walks into the store, and the list just
comes up for them. And before they leave the store,
it reminds them of anything that they might have missed. In this case, you’re
providing information and an experience for the
user before they ask for it. And finally, it
should be immediate. The experience should be so
fast that it feels immediate. Being contextual
can help with this, but it’s also about
microinteractions. And remember, you’re not trying
to distract or occupy the user. You just want to present
the right information right away so they can
get what they need and get on with their life. A great example of this is
voice commands on Android TV. I can just tell Android TV
what show I want to watch, and it shows me
that in my library. Or auto play and
queuing with Google Cast so the next episode just
comes up right away as I’m watching through a series. Here’s the last
one– extensible. Now, I’ve mentioned
this a few times today in a bunch
of different ways, because it’s really important. You want to build your app
so that the experience is extensible. That means the user can get
to it wherever they’d like. And you want to build
that into your core app and overall experience
so that the user really has it at hand wherever
it makes sense for them. A great example
of this is music. Can you think of a
time in your life where you wouldn’t want
access to your music? Whether that’s on your phone,
going for a run, in your living room, or in your car, you
should be able to, as a user, have that music
experience in any of those contexts
in the way that it makes sense in that context. So these are our four principles
for UX design and human interface that we can keep in
mind, keep in our pocket as we build in this new space. And with some more detail
about that– building in this new space– I’d like to
welcome to the stage Gayathri Rajan, Vice President of
Product at Brillo and Weave. GAYATHRI RAJAN:
Thank you, Timothy. [APPLAUSE] Hello, everyone. IoT is here, and it’s pervasive. These things will soon outnumber
phones, tablets, and laptops. Or to put it another
way, in just five years, there will be 26 of
these smart devices for every human on the planet. So that’s the good news. But there are
significant issues today that could hurt adoption of IoT. A survey conducted
on IoT executives highlighted that the top
barrier, the thing that they are most concerned about for IoT
adoption, is interoperability. There are simply too
many fragmented protocols and siloed ecosystems today. It’s also important to note
that security concerns are a fast follow. And if you ask
consumers, they’ll say they still don’t
see the promised value. But where there are problems,
we and you see opportunity. The opportunity that
lies ahead for IoT is to rethink those
user experiences in fundamental ways. First, reimagine the
products themselves. Look at ways we can make the
devices themselves smarter with the power of the net. And second, choreograph
these experiences across ensembles of
objects and services so that, in concert, we can
create true user benefit. But in order– I
forgot to do that. So as an example,
let’s take something as utilitarian as
a washing machine. If we were to add connectivity
and the power of the net, you could reimagine
a smarter version that is activated by voice
commands, that understands the user and its context,
that knows when it’s not operating efficiently. Now let’s amplify this further
by orchestrating experiences with other services and devices. The washing machine now
learns about the activity in the household
and its energy use, and can then determine when
to start the wash cycle, when it’s optimal for household
convenience and energy savings. It knows when to
order detergent, when it’s close to being exhausted. And it can do that
automatically. The possibilities
here are endless. I think this audience
will have ideas that take this a whole lot further. But to get to this
vision, we believe openness is an imperative. You’ve heard this before, but
we think this bears repeating. These devices must be
able to communicate with each other across brands. And the data that they
generate must be comprehensible so developers can scalably build
intelligent services on top. Which brings me to
security and user privacy. In this interoperable ecosystem
with all these devices from different brands
connecting to each other, security becomes
increasingly critical and must be built into
the devices themselves and the device interactions. It should be really
easy for users to control who and what
can access a device and exactly what
they can do with it. User and device data
must be protected from unauthorized access, be
the data stored on the network, on the cloud, or on the
device itself– really, really important
that it’s protected. And finally, the
device itself must be made resistant to attacks. And in the event
of a compromise, it should be really easy
to fix it and bring it back to a trusted state. Which brings me to
Brillo and Weave. Brillo and Weave is Google’s
approach to building for IoT. We build security features into
the Brillo and Weave platforms so you can use them to build
secure devices and device interactions. And we’ve deliberately made
Weave an open protocol, so it’s easy to build apps and
services to securely set up your devices, control them, and
choreograph those experiences across different devices. But ultimately, our vision for
Brillo and Weave– and this is really important–
is to enable our partners and the
larger developer community to build those great devices
and experiences that users will love. We provide the
foundational elements so you can create the magic. Just to give you
a little bit more of an overview of what Brillo
and Weave are for those of you new to Brillo– in
a nutshell, Brillo brings the simplicity and
speed of software development to hardware for IoT. What we give with Brillo
is an embedded, secure OS, core services, and
a developer toolkit so you can build
your IoT device. And because Brillo is based on
the lower levels of Android, it’s supported across
a range of hardware. So in addition to helping
you build these devices, with Brillo, you
have access to tools to help you monitor
and manage your devices over their lifetime–
really, really important in this new
world of connected devices. We can’t just ship and forget. In particular, we provide
a really easy way for you to update your
device over the air. Those of you who have been
building your own hardware know that this is hard to
do reliably and at scale. And we believe updates
are the only way you can ensure that the device
stays secure over its lifetime and can recover from attacks. So that’s Brillo in a nutshell. Weave is a communication
platform for IoT devices. It provides a seamless
and secure way for users to provision their
devices onto the network– something a lot
of users struggle with, because today, it’s pretty
inconsistent device to device– and controlled, low latency
access to devices, both locally as well as remotely. Weave is built into
Brillo, but you can also use the Weave libraries on
an existing device operating system that’s based on Linux. On the client side,
Weave APIs and the SDK are available on Android,
iOS, and the web, so you can build apps
for all these platforms. And because interop is such
a critical, critical element for IoT, what we’ve
done is for Weave, we’ve created these
device schemas for major device categories. In this case, you see an example
for a camera and for a lock. This makes it easy
to build applications that can interact with
Weave-compatible devices, regardless of brand, and easy to
choreograph those interactions across device types as well.
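As a rough illustration of what a shared schema buys you, here is a sketch in Python-style data; the trait and command names below are hypothetical stand-ins rather than the actual Weave schema definitions, but the idea is the same: every Weave-compatible lock answers to the same commands, so one code path can drive any brand.

```python
# Hypothetical, Weave-style schema for a "lock" device type. Field names are
# illustrative only; consult the real Weave documentation for actual schemas.
LOCK_SCHEMA = {
    "deviceKind": "lock",
    "state": {
        "lockedState": {"type": "string", "enum": ["locked", "unlocked", "jammed"]},
        "batteryLevel": {"type": "integer", "minimum": 0, "maximum": 100},
    },
    "commands": {
        "setLockedState": {
            "parameters": {"lockedState": {"enum": ["locked", "unlocked"]}},
        },
    },
}

def send_lock_command(device, command, **params):
    """Hypothetical helper: validate a command against the shared schema and
    hand it to whatever brand-specific transport the device object wraps."""
    spec = LOCK_SCHEMA["commands"][command]
    unknown = set(params) - set(spec["parameters"])
    if unknown:
        raise ValueError("unknown parameters: %s" % unknown)
    return device.execute(command, params)  # transport details vary by device
```

So that's the basics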
of Brillo and Weave. You’ll hear a lot more
details in upcoming talks today and tomorrow. Over time, we’ll be
adding a lot more in terms of services to Brillo to make it
easier for you to build smarts into your device. We’ll be adding some really
cool stuff– for example, voice actions, [INAUDIBLE]
contextual triggers so you can build some
of those experiences into your Weave-compatible app. So look forward to
hearing more from us. Essentially, with Brillo
and Weave, as I said, we’re laying the groundwork so
device makers and developers and you in the audience
can build the IoT devices and experiences of the future. This is a problem that we share. And we believe that
there’s a great opportunity to solve it together. Now, just to give you a quick
sense of the possibilities, I thought I’d show you a demo of
a Weave-enabled device– that’s a dog feeder there– and
a Weave-compatible app. So for context, Wayne is a
member of our developer team, and he’s an owner of two very
energetic and hungry dogs. He had bought this dog
feeder a few years ago with the intention
of getting it set up and connected to the internet. And the technology was
too hard at that time, and he’d given up. So it was sitting in his
garage gathering dust. He brought it in for
this demo and got it Weave-enabled in under a day. So let me welcome Wayne. And– oh, we have a surprise
guest, Monte, on stage. [APPLAUSE] So I’m going to show you
how easy it is for Wayne now to feed Monte when he’s–
well– when he’s at work. Can we switch to
the phone, please? So that’s his Weave-enabled app. And all he has to do is click
on this button and feed Monte. Well– [APPLAUSE] It’s easier to do the technology
than get adoption, isn’t it? [APPLAUSE] So that was a
really simple demo. To give you a flavor of what you
can do, I encourage all of you to use this opportunity over
the course of the next two days to learn more about Brillo and
Weave, work in the code labs, ask questions at office hours. You’ll have access to Brillo
and Weave later today. So this will give
you an opportunity to get your hands dirty. We’re really excited
to see what you can build with Brillo and Weave. Thank you. [APPLAUSE] TIMOTHY JORDAN:
Thank you, Gayathri. And Wayne and Monte. Next, I’m very excited
to welcome to the stage Rajat Monga, Technical
Lead for TensorFlow. You can clap. It’s OK. RAJAT MONGA: Thank you, Timothy. Thanks. Hello, everyone. So I’m Rajat. I’m on the Google Brain team,
and I work on TensorFlow. I’m the lead for TensorFlow. Gayathri talked about
a lot of devices that are going to be
there in our future. She mentioned like 26 smart
devices for every human on the planet. Well, I’m here to
help make them smart, help you make them
smart, really. So let’s talk a little bit
about machine learning. Why is it so important? Why do we care about it? Why is it such a
big thing today? So let’s start by just
talking a little bit about what machine
learning really is and what it can do for you. One way to think
about it is, if you want to solve tasks that you
can’t really just program, that you can’t
really just program as a developer–
that’s what you want to do, to program everything
and be done with it– this is something that can
really do that for you. So for example, the
example I have here is you have a bunch of images,
and they’re labeled. If you wanted to
program something that could take those
images, or take any image, and say it’s a dog
or a cat or whatever, that’s going to be
really, really hard. But you can train
computers using what’s called machine
learning to really do that where you have data sets, which
are nicely-labeled images. You feed them to a computer
with the right algorithms, of course. And then eventually,
out pops a model that you can use to classify
a new image that it’s never seen before.
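Here is a toy illustration of that "feed in labeled examples, get a model out" loop. Real image classifiers learn deep networks over raw pixels; this sketch just uses made-up two-number feature vectors and a nearest-centroid rule to show the shape of the idea.

```python
# Toy supervised learning: labeled examples in, a reusable model out.
import numpy as np

train = [  # (feature vector, label) pairs -- stand-ins for labeled images
    (np.array([0.9, 0.1]), "dog"),
    (np.array([0.8, 0.2]), "dog"),
    (np.array([0.1, 0.9]), "cat"),
    (np.array([0.2, 0.8]), "cat"),
]

# "Training": compute one centroid per label.
centroids = {}
for label in {lbl for _, lbl in train}:
    centroids[label] = np.mean([x for x, lbl in train if lbl == label], axis=0)

def classify(x):
    """The learned model: pick the label whose centroid is closest."""
    return min(centroids, key=lambda lbl: np.linalg.norm(x - centroids[lbl]))

print(classify(np.array([0.85, 0.15])))  # "dog" -- an input it has never seen
```

And it's used in a variety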
of places to do that. So now let’s talk a
little bit about what are the different
things that you can use machine learning for. What is it being used for today? So one of the things–
just going back to the example I was doing–
is image recognition. You can, in this case,
give a picture of a lion, and it can say it’s a lion
with a very high confidence. And it does really,
really well here. In fact, on a data
set called ImageNet, which is very popular in the
research community, which has about a million images
and 1,000 different classes, the human accuracy is supposed
to be something like 95% or so. And these models are now able
to surpass human accuracy on that particular data set. It’s not that general. It can’t really see as well as
you do for every little thing. But at least in these kind
of images, it can do better. So it’s getting
really, really good. Another one is voice. A lot of you have
probably already tried talking to your phone, doing
searches, asking Google what’s going on. In this case,
somebody is asking, I want to see
Seattle restaurants. And it uses machine learning to
really take your voice command audio that you
have and convert it to a query, which is then fed
to the regular Google search engine, and give you the great
results that we are known for. Another is search. The web now has something
like 60 trillion pages, the last I checked. It’s probably a few
more trillion by now. Finding the relevant documents
from so many different pages is a hard task. Google has done it
really, really well. One of the things
that helps us there is machine learning, again. Given a particular
query that you make, sorting and sifting through
those trillions of pages and then coming
up with 10 answers that we can show on that
page, because that’s all you’re going to see and
maybe go a few more pages, is a really hard task. And machine learning is a
great way to help solve that. Another one is translation. So if you ever tried Google
Translate, where you just type a query– or you just type
something in, say, English, it can translate it to French
or the other way around. And in fact, it supports 90
different languages today. I don’t think we could do that
with even a million people sitting behind and translating
those things for you. So again, one of those
things that machine learning has made a huge difference. Another newer one– and
this one is more research so far– is taking
an image and actually trying to describe that. And in this particular
case, as you see, it’s able to describe a
closeup of a small child just as a human would. And it’s getting really
good at things like this. Think of the kind
of impact something like this can have if a
visually impaired person who can’t really see can have everything that’s going on around them described to them. It will just be a
game changer for them. It will be a life
changer for them. So these are some of the
things that machine learning can do for you. Now let’s go a little bit
deeper into one specific branch of machine learning
called deep learning. And the reason I go through
this is that in all the things I just talked about, deep learning was actually what was applied. And deep learning
was the one that got all those great results for us. This is a very old
technology in the sense that the core ideas
come from the ’40s. It was called neural networks
or artificial neural networks back then. Over time, it’s evolved,
both algorithmically and with the additional
data being available and the computational power
that’s available to us. And it’s continued to improve. Especially over the
last five years, it’s starting to
show great results in some of the fields
you saw, and I’ll show you some more later on. The basic idea here
is you have some input of all kinds of things. It could be images, speech,
text, all kinds of things. You build these networks and
extract features out of them and are able to make some sense. And you’re finally able
to make some predictions of whatever you’re
interested in, whatever you can train it for. And they’re pretty
widely applicable. The same kind of algorithms
are applicable across a variety of different areas. It’s very loosely
inspired by neuroscience. Some people might say
that’s how the brain works, and we are trying to
replicate the brain. Not quite true. I wish it were, but
not quite there yet. It is definitely inspired
from there, though. This slide is just to give you
an idea of how important it is for us at Google. We started this
group back in 2011, and it’s been growing
rapidly since then. What this shows– and it’s just a rough idea of how many different teams are using it– is the number of different directories that have models built off of this technology. And there are all kinds
of applications and areas that are using this technology. So that’s great. I have now given
you a background of what machine learning is. Now let me dive into
something called TensorFlow. It’s a machine learning
library that we just open sourced recently. And I’m going to talk a
little bit more about that. So let’s start by
what is TensorFlow. It’s a library that lets
you do all the things that I was talking about. It’s a library that allows you
to build these machine learning models and apply
them in production. And it powers hundreds
of different products at Google today. I listed some of the names
of areas that use that, but there are many, many more. It is something for researchers
to try out new things, to build new things,
but also, for developers to deploy these models into
production or applications or products, whatever you have. And it’s really important
to be able to do that. With all of this put together,
what it allows us to do is really to build
the next generation of intelligent applications. We have a lot of
different applications. We have a lot of products. How do we make them intelligent? How do we make them smart? That’s our goal. We want to help enable that. And we did open source it,
so it’s available to you now. We open sourced it a
couple of months ago. It’s on GitHub. And I’ll give you some pointers
at the end of the talk. But it’s there for you to try
out to do a lot of things with. So let’s talk a little bit
about what does it have, what is TensorFlow. At the core, like you
see in the middle, there’s the core execution
engine that’s written in C++ for performance. We definitely want to extract
as much performance as we can out of wherever it’s running. On the front end, we support
many different languages, depending on whatever
the developers want to build things in. In some cases, it might be
C++, because they want to have the absolute last
bit of performance. But in other cases,
it might be Python, because the data scientists
and researchers, that’s what they’re most comfortable with. And they want to really try
out many different things. They don’t want to have to
recompile things and so on. And the way we’ve
built the core is to enable supporting many,
many new languages as well. In fact, we’re working
with the community to add support for
many different things. People are interested
in Java, in JavaScript, in Scala, all kinds of things. And you’ll see support
for more of these building up over time as well. Now, one of the things that
the core execution engine does is it abstracts away whatever
the underlying devices are. So it actually runs on
all kinds of platforms. It can run on your Android
phones, on iOS phones, on all kinds of iOS devices,
and on your workstation, on your data center, et cetera. It’s pretty
platform-independent that way. So you don’t have to worry
about what platform you’re using or where you want to deploy. You can use it everywhere. The core was
built– our goal was to build it for
machine learning, but the core is very
general purpose. And in fact, people are
interested in using it for other things,
too, but I’m going to focus on the machine
learning aspects of it. And the basic idea being
that it makes it really, really flexible. So even if you want to
do machine learning, it allows you to expand and
try out all kinds of things that you would want to do. So that's great. Seems like a great product.
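For a feel of what that looks like from the Python front end, here is a minimal example in the graph-and-session style that was current when this talk was given (today's TensorFlow 2.x uses eager execution and Keras instead, so treat this as a period sketch): it builds a tiny graph that fits a line to toy data, and the C++ core executes it.

```python
# Minimal TensorFlow 1.x-style example: fit y = W*x + b to toy data.
import numpy as np
import tensorflow as tf

x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 1.0

W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))   # parameters to learn
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b                                   # the model's prediction

loss = tf.reduce_mean(tf.square(y - y_data))         # how wrong it currently is
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:                           # the C++ core runs the graph
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op)
    print(sess.run([W, b]))                          # approaches [3.0] and [1.0]
```

One question we always get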
is, Google uses it so much. Why did we open source it? So as you saw earlier, this is
having a huge impact at Google. So many different
teams are using it. We thought this could have
a much bigger impact outside in the world, outside Google. And it’s really important for us
to enable the kind of research to accelerate machine
learning and to do even better than where we are. Yes, we are building
smarter things. We are building applications
that are learning a lot. But there’s a long way to go. There’s still a lot to be done. And we definitely want
to accelerate machine learning research. And that was one of the reasons
why we open sourced TensorFlow at this time. The other part that we
think is really important– and we’ve seen at Google by
having this one platform that goes from data scientists
to the developer– is that it speeds up
production deployment. Once you make
something, once you have something that’s
working, you really want to deploy it
as soon as you can. You don’t want to wait another
quarter for the developer to build something that can
be deployed in production. And this really helps with that. So digging a little bit more
into that, who is it for? It’s really for all
kinds of people. In this case, we talk about
researchers, developers, and the data scientists. And what the researchers
do, of course, is they want to
try out new ideas. And this provides the
flexibility for them to do that pretty
easily without having to worry about the underlying
frameworks, the performance, et cetera. It just comes for free. Now, the data scientists–
who are probably overlapping with researchers,
et cetera– can leverage the growing library of
models that we’ve built. And what we are doing is as we
publish papers on new research, we are also going to publish
real code that can be shared and collaborated with. Now, other data scientists can
leverage that, customize it to their needs, or train
it on their specific data and really gain a lot from that. Once that’s done– once they
have a model that works really well, that gets
great results– then the developers could
take the same API and use it to deploy
it in production. So here are some
examples of applications that actually use it today– in
some cases on the phone itself, in some cases in the cloud. Google Photos for one–
for example, in this case, you can search by saying
a dog, and it will show you pictures of a dog. It organizes things by
different people, et cetera. And all of this uses machine
learning behind the scenes. Another one is Google
Translate, which uses a combination of cloud
and on-device processing. So for example, in
the case on the left here, you can just point
the phone at some sign, and you’ll be able to just see
it translated in your language. In this case, it’s going
from French to English. And in this case, you want
that real time feeling. You don’t really want the
user to point the camera and then wait for two seconds
before it goes to the cloud, comes back, and
then displays this. It actually, in this case,
in real time just takes the image and superimposes the
right text on it. And so in this case,
having it on device is really, really important, and
you can do that with TensorFlow. Another one is just
conversing hands free or talking to the phone, and
it can speak the other language for the other person as well. Another one is in Inbox,
we launched a new feature a couple of months ago. It’s called Smart
Reply, where if there is a small text, small email
that you got, and you have a very, very simple reply
that you could send, it’s going to give
you some suggestions. And you can just click
on the suggestion and take it if you want. It makes it really
easy to reply to emails from the phone when
all you want to say is, OK, I’m going to be
there, or in this case, say I’ll check on it,
or something like that. Another one is Google
Keyboard, where it predicts what you’re saying. For example, you might
swipe on the keyboard, and it’s going to try to
figure out what word you have and what word you’re
trying to say. And it can do that
using machine learning. So one of the things I
wanted to leave you with is– I showed you all these
different applications and different areas that we’ve
been applying machine learning. All of these applications
today use machine learning in one form or another. And this is not a small list. People say that the future
is going to be smarter, and in the future,
in a few years, we will have very
smart applications. I don’t think
that’s really true. Yes, they are going
to get smarter, but you already have
smart applications today. You already have
smart products today. And you really
want to think about how can you make your
application smarter when you build the next one. If you’re building
an app, if you’re building a cloud platform,
whatever you’re building, you really want to think
about machine learning to see how it can help you make
your next application smarter. Thank you. [APPLAUSE] TIMOTHY JORDAN: And we have
one more speaker for you today. I’d like to welcome to the
stage Chandu Thota, who works as an Engineering
Director for Geolocation and Local Efforts at Google. [APPLAUSE] CHANDU THOTA:
Thank you, Timothy. Good morning. So today, we are going
to talk about beacons. How many of you
have seen beacons? How many of you
have used beacons? Hopefully we can change
that after the session and after the conference. So beacons are not new. So they have existed
for a long time. And in fact, the platform
that we have built at Google called Eddystone is a
name that has been derived from a real lighthouse that
was built off the coast of the UK some 300 years ago. Beacons have been critical, and
a fundamental infrastructure in providing context in
real life, whether you are sailing the seas–
that’s what they were used for some 300 years ago. And even today,
there are a bunch of lighthouses that people use
when they are in rough seas. But there are also beacons
that people use when they are navigating on the roads. So if you are using any
GPS nav unit, most likely, you are using a bunch
of beacons in the sky. And these are what we
call GPS or GLONASS or any other system
that has been designed to provide this context. And the context is
not necessarily just about precise location. So even though most
of us actually think, whenever we think of
a GPS or any beacon, we think of that as a tool
to provide precise location. But the reality is that they
can provide a lot more context. Even in the case
of GPS, you get, obviously, a precise
lat/long, but also, you get the heading,
and the direction, and so on and so forth. So there’s a lot
more context that is provided by beacons to your
applications that you’re using. But here is a fact. We have all these navigational
aids and contextual aids that are present. And they have been present
for hundreds of years, and they are present in our life
today through various means. But the reality is most of
them are designed for outdoors. You take lighthouses or
GPS, it’s all for outdoors. None of that works indoors. And the reality is that 90% of
our waking time, our active waking time, we spend indoors. And for that, and also because
we use a lot of mobile devices with us all the time
when we are indoors, we need a better system
that can provide us that precise location
and context so we can build our apps that
can be smarter and useful. And when I say
precise location, you are actually thinking
of lat/longs probably, but that’s not actually true
when it comes to indoors. There is not a really meaningful
application to lat/longs, for example, in this
conference center. What you would
like to understand in terms of a precise location
when it comes to indoors is a semantic location. What it means is that I am
standing here on this stage. This is a semantic
location– the stage. And you are sitting
in that chair. That is a semantic location. You being able to
understand semantically where you are relative to
the physical space that’s surrounding you
is basically where we are going to use beacons
to provide that context. That leads us to many,
many applications that we are actually already
seeing today in the market, whether you’re in a shopping
mall standing next to an aisle, whether you are at the
airport trying to find out your next flight schedule or
trying to pull up your boarding pass, or whether
you are at a museum trying to understand what
the painting is about. All of these
applications are built on top of this precise understanding of semantic location indoors,
using some type of beacons. And that’s where BLE
beacons come into picture. If you think about
the technology that powers all of
this, this is built on top of Bluetooth technology. Bluetooth [INAUDIBLE]
is the specification published by the
Bluetooth consortium, and this is an open standard. And the real promise
of this technology is that you don’t need
to pair with a device. And these devices– anything
that implements Bluetooth Low Energy protocol– are going
to consume very small amounts of power to operate. And these are
one-way transmitters. That’s the most important
thing, because there are lots of privacy concerns
about this technology. When it is a one-way
transmitter, the device itself is not doing anything. It is only emitting its ID. And occasionally it’s
saying, hey, I’m here. And that’s pretty much
what the beacon is saying. And then there is
a small payload that you can use in terms of
configuring these Bluetooth beacons. That’s pretty much it. So you have these
Bluetooth beacons, one-way transmitter, built
on top of a publicly open, defined standard, and
then they are configured to send messages one
way– from the beacon to your device on
a periodic basis. That’s all there is to it. And based on that, we
have built a platform to extend the capabilities of
these tiny devices, beacons, and make it easier for
you developers to build applications, both
on iOS and Android, to make them more contextual. So the way that we set
out to do it is– look, the beacons are
open, and the number of applications that
need to be built are going to be very varied. So why don’t we start
with a very open format? What it means is that
each beacon can have various types of frame formats. What it means is that a
beacon can emit its ID, or it can emit
its URL, or it can emit something else–
the 20 bytes of payload, remember, from the last slide. So basically, that’s
the payload that we are going to use, and
define a series of use cases together with the community. So that’s basically where
the open comes from. And we also wanted to
make it open source so you can help us extend
some of the frame formats, et cetera, which
we’ll talk about. And then we also wanted
to make it cross platform. So it not only works on Android, but it also works
efficiently on iOS. So these are the key tenets
of Eddystone beacon platform that we have built. So as I said, this
is open source. Open source means that we
have chosen to open source the frame formats, the SDK, the
samples, all of that on GitHub. So as you can see, there are
more than 165 commits to date. And we welcome all of
you to come work with us and extend the Beacon
platform on GitHub with us. And then the frame formats. Frame formats are the
fundamental elements of the entire platform. So if you stick a beacon,
you can configure that beacon to emit multiple frame formats. What it means is that you
can use the same hardware to operate in different modes. So one way is– let’s
say you take a beacon, and you stick it on the podium,
and say, hey, I’m going to assign this podium an ID. And semantically,
I’m going to define that this ID means this stage. And you are going to
use that to emit, let’s say, once every 10 seconds. And the same
beacon, you can also configure that to emit a URL
assigned to the same beacon once every 100 seconds,
and so on and so forth. So you can use the same hardware
and apply various frame formats so your application
listening to that beacon can make use of these
different frame formats in building different
experiences when mobile apps or users come near them.
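To make the frame idea concrete, here is a small sketch of decoding the published Eddystone-UID layout (frame type, calibrated TX power, a 10-byte namespace, a 6-byte instance) from the service-data bytes a beacon broadcasts under the Eddystone UUID 0xFEAA; the example bytes are made up.

```python
# Sketch: decode an Eddystone-UID frame (frame type 0x00) from service data.
import struct

EDDYSTONE_UID = 0x00
EDDYSTONE_URL = 0x10   # URL frames use a different, compressed layout

def parse_eddystone_uid(service_data: bytes) -> dict:
    frame_type, tx_power = struct.unpack_from(">Bb", service_data, 0)
    if frame_type != EDDYSTONE_UID:
        raise ValueError("not an Eddystone-UID frame")
    return {
        "tx_power_dbm": tx_power,               # calibrated power at 0 m
        "namespace": service_data[2:12].hex(),  # identifies your deployment
        "instance": service_data[12:18].hex(),  # identifies this one beacon
    }

# Example service data with made-up namespace and instance bytes:
frame = bytes([0x00, 0xEB]) + bytes(range(10)) + bytes(range(6))
print(parse_eddystone_uid(frame))
```

And we're also working on a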
series of new frame formats. So this is where, again, if you
guys have a specific use case or scenario that you
want to work with us, we welcome you to
come and contribute to some of these
frame formats that may be useful for the
rest of the community. And the last one, the key tenet
of the Eddystone platform– this has to work both
on Android and iOS. So we have nearby APIs. The SDK is available to
you, both on Android and iOS platforms. And by using these tools, it’s
fairly straightforward for you to implement your app. If you are programming
something on Android, you probably are going to use
exactly the same paradigms of nearby messages and
subscribing to these channels and responding to beacons. All these concepts
are going to be exactly the same
on both platforms, and that makes it really
easy for you to build apps. And how does it
all fit together? So there are three layers. There is the beacon hardware. As I described, these
are the tiny devices that come from many of our partners. Then there are the mobile SDKs. And finally, there is the cloud service, which is where you are going to add some more
meaning to these beacons. What it means is that–
let’s say, as I said, if you stick a beacon
here on the stage, you can have that
ID have a definition of semantic location. So that definition can
exist in an app offline. Or you can store it
in the cloud and say, any app that sees this ID
has to hit this cloud service and get the meaning
of that beacon. And that’s where the Proximity
Beacon API service is going to be a hosted cloud
service for you guys to register your beacons and
attach a bunch of metadata, whether it is semantic
location or whether it is a precise location,
and so on and so forth, so your application
knows which beacon they are listening to and
then how to react to them. So the way that
you would actually proceed with the platform
is you take a beacon, and you provision it. Provisioning means that
you are going to define and preconfigure at what frequency you want to emit the ID frame,
at what frequency you want to emit the URL frame if you
have it, and at what frequency you want to emit the telemetry
frame, and so on and so forth.
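A provisioning profile for one beacon might boil down to something like the sketch below. Real provisioning happens through each manufacturer's own tools, so these field names are purely illustrative; the point is just that one piece of hardware can interleave several frame types at different rates.

```python
# Hypothetical provisioning profile -- field names are illustrative only.
beacon_profile = {
    "uid_frame":       {"enabled": True, "interval_s": 10},   # stable ID, e.g. "the stage"
    "url_frame":       {"enabled": True, "interval_s": 100},  # e.g. a link to the venue map
    "telemetry_frame": {"enabled": True, "interval_s": 60},   # battery, temperature, uptime
}
```

And once you have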
defined that, then you will register with the
Google Cloud Service and say, hey, this beacon that
I’m going to stick here means the stage at the Strand. And once the
definition is assigned and once the
metadata is assigned, you are ready to
go with your app. You can take any of the
SDKs, Android and iOS, and build your application. And the way it works is the
data flow is pretty simple. The beacons are
one-way transmitters. They’re constantly emitting
their ID to the app your phone is hosting. Your Android or iOS phone, where the app is running, is listening for any incoming beacon advertisements. And once this message is heard, basically, the app is going to look up
the Google cloud service and say, OK, what
does this beacon mean? And Google cloud
service will tell you, here is the metadata you
have attached to your beacon. You said this beacon
means the stage at Strand, or your
thermostat at your home, and so on and so forth. Once you have that
metadata in your app, you are going to make use of it
and do something with it that’s interesting to the users.
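In app code, that loop might look roughly like the sketch below. The resolve_beacon() call is a hypothetical stand-in for the hosted Proximity Beacon / Nearby lookup described above, with the response faked locally, so check the current API documentation for the real request and response shapes.

```python
# Sketch of the sighting -> lookup -> react loop described above.
def resolve_beacon(namespace: str, instance: str) -> dict:
    # Hypothetical: a real app would make an authenticated HTTPS call to the
    # beacon registry here. We fake the attachment payload for illustration.
    return {"attachments": [
        {"type": "semantic-location", "value": "the stage at the Strand"},
    ]}

def on_beacon_sighted(namespace: str, instance: str) -> None:
    metadata = resolve_beacon(namespace, instance)
    for attachment in metadata["attachments"]:
        if attachment["type"] == "semantic-location":
            print("User is near:", attachment["value"])

on_beacon_sighted("0102030405060708090a", "000102030405")
```

So there are more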
than 18 partners that we have lined up that can
provide these beacons for you. These are all
certified partners. Certified means that they
conform to the Eddystone standards in terms of
implementing all the frame formats. And they also work with our
SDK right out of the box. And some of the things
that, at Google, we are doing with this
technology– we are obviously wanting to
take advantage of this to the fullest extent. So we have integrated
this Beacon platform into Google Maps on Android. And some of the features that
we are just rolling out– we have launched a
pilot in Portland where you walk up
to any transit stop, and you get the real
time transit information in less than 500 milliseconds. So the entire thing was built on
top of the Eddystone platform. And this is on an Android–
Android Google Maps. There are more things
that we are trying to do. Obviously, with Google Now,
where you walk into a place and you get notifications. And some of you may
have already seen some of the notifications
that pop up at a place. So those are the things that
we’re working on as well. All of this brings us
back to one thing, which is, as Timothy has
alluded to as part of IoT, beacons are a great way
to provide that context, especially indoors. When you are indoors,
you are trying to provide the semantic
meaning of what you are interacting with or
what you are standing next to. As I said, it could
be an aisle in a shop, or it could be a
thermostat at your home, or it could be a point of sale
at your favorite coffee shop. All these things are
going to be immensely useful in terms of providing
that context into your app. So this is where
beacons come into play. And this is where you
guys come into play in figuring out what
can be done to extend Eddystone into providing those
powerful contextual apps. With that, we have
plenty of other resources available at this conference. We have an Ask Me Anything
session tomorrow. And then there is one more
developer-oriented session later on today. Obviously, we would
love to see you there. There will be members of
our team waiting for you to answer any questions. And also, I think
Timothy obviously didn’t want you to leave empty handed. So he’ll be here to tell
you a little bit more about what you got this morning. TIMOTHY JORDAN:
Thank you, Chandu. [APPLAUSE] That was a big box. So what’s in the box? Well, we didn’t want you
to leave empty handed. In addition to
the great sessions and technical
information that you’re going to hear
today and tomorrow, we wanted to make sure you
had the right tools to start building today. So we’re giving you a Brillo
and Weave Development Kit– [APPLAUSE] –a beacon– [APPLAUSE] –and of course, Cardboard. [APPLAUSE] You can’t leave a Google
event without Cardboard. Today, with these
tools and sessions, let’s start our journey together
beyond the internet of things. Thank you. [APPLAUSE] [MUSIC PLAYING]
