
Secure, private environments in the cloud & on-prem with Virtual Clouds (Google Cloud Next ‘17)


[MUSIC PLAYING] INES ENVID: Good
afternoon, everyone. It’s a great privilege
to have you here today. I’m so excited. We’re going to be sharing
with you a lot of new things. It’s a long title, so I’m not
going to have time to read it. But we’re going to
be talking mainly about how you do secure, private, and simple deployments in hybrid environments. So my name is Ines Envid. I’m a product manager
in cloud networking. And I have with me
Neha Pattan, who’s a tech lead in cloud
networking as well, and Stefan Lindblom, who’s a
network engineer at Spotify. So Stefan is going to share
their experiences on onboarding and deploying and
putting in production their workloads with
our private networking. So with that, I think
we can get started. I want to focus first, the
first part of the talk, about how to simplify
some of the deployments in hybrid environments. VPCs in the public cloud have been available for a long time: you can have your own little piece of the public cloud, make it private, and spin up your workloads as part of it. But we want to take
a step back and see what is the current
experience when doing so. And what are the
things that we believe can be simplified and improved to provide those types of production environments? We will, as well, go
through our use case and demo, which Neha
will walk us through. And then, again, Stefan
is going to share some of their experiences when
deploying with Google Cloud. So with that, let’s talk
about virtual private cloud reimagined. The reason why I’m
saying reimagined is because we want to take a
look at how virtual private clouds traditionally look. What are some of the
hurdles that they may have? And how we’re proposing
to overcome them with our architecture. We want to provide a way
to deploy efficiently, simply, and flexibly. So how are we actually
proposing to do that? I want to focus on the attributes that we believe support some of the simplification and efficiency that I mentioned before. First of all, our Google Virtual Private Cloud is global. That means that you can deploy
your workloads globally. And when you do so, you can
have automatic communication, private communication
across them, across regions without
having to worry about putting in your own backbone,
having your own arrangements about the networking
for communicating those workloads across regions. And we will see that
that is true for cloud to cloud communication
in Google. But it’s also true
when you are extending the communication in a
hybrid environment with VPN or Cloud Interconnect,
but also having a global model in which a single
point of entry to our cloud gives you access to all your globally deployed workloads. It is also shareable. And this is something that we
particularly feel proud of. Because what we
want to propose is a model in which you have a
VPC for your organization. And this has been
recently launched in beta. Actually, as of yesterday. And it is a single shared VPC
for your organization, which means that you can
have developers and applications, workloads,
that are isolated in projects. I’ll talk a little bit more
about where the project is. But it provides a
multi-tenancy environment with isolation of
those workloads while they share a common
space for private IP networking in which they collaborate. And then you can have a central administration for that space in terms of security, IP allocation, setting up routing, et cetera. The third attribute
is our Google Virtual VPC is expandable. What that means is
that you can actually increase the IP ranges of
your subnets at any time. You don’t need to shut
down any of your workloads. We complete the operation
without any service downtime. Therefore, you have
the flexibility, as you grow your business
and your applications, to expand that range without
having to worry upfront about provisioning that IP space. And then the last
part is private. We provide access– and
again, this is something that we’re introducing
recently and, again, we’re announcing in beta. We’re providing private
access to all our Google managed services, that
is all our analytics, Big Data, BigQuery,
storage, PubSub, et cetera, privately from our VM instances. Let’s take a look at some of these attributes in a little bit more depth. So our global VPC means that
you can connect your workloads, again, privately. And then in a traditional
VPC, usually when you want to connect
workloads across regions, you need to somehow have some
means of private connectivity and that could be through
a VPN or some type of stitching mechanism across
those two different regions. With our Google
virtual VPC, we are providing private communication
of those workloads without really having to
do anything at all. So that’s presented to
you as a capability. And it doesn’t even
need to be provisioned. It’s completely transparent to the user. So in terms of what that
means in the bigger picture, you’re looking at the
locations that we have today. We’re making a
lot of investments in expanding geographically. So this picture
gives you a snapshot of what we have today
in terms of regions. Those are the square boxes. And within those,
you see the number of zones, availability
zones that are there. In blue, you have what
are the regions that are going to be turned up
by the end of the year. And then I’m showing, as well,
the points of interconnect. Michael [INAUDIBLE]
in the morning was talking about
how we’re providing Cloud Interconnect private. And those will be the points
in which for this year, you will be able to
interconnect to get access to our network and our regions
where the compute and storage resources live. Let’s take a look
at the shared VPC. You will see it referred to as
XPN when you go to our website. It’s short for
Cross-Project Networking. But I think it’s best
understood as a shared virtual private cloud for
your organization. What that means is
in a traditional VPC, you usually have disjoint
private connectivity across different accounts. And if you have different VPCs, you may need separate private connectivity, one connection to your on-premises data center for each of those VPCs. It’s a decentralized model for setting up policies. And also, the developers need
to connect to that local VPC, so to say. And so they need to be aware of
what the connectivity is there. What we’re proposing
is a model in which you have a single virtual
private cloud for your organization. And the organization is the
domain, just to be clear. So it maps to your
enterprise.com. That’s where all the
projects are going to live. And with that
shared connectivity, what you have as well is a
single private interconnect to your on-premises
will be made available as well to all the
accounts, projects that are connected to that shared VPC. And then it is a
centralized model for setting up
security and routing across your organization. And the developers will see
that [INAUDIBLE] plug and play or as a service. And they don’t need to be
aware of the routing mechanisms or the connectivity
policy associated with that. They’ll just see it as a domain
to which they connect when they spin up their services. So let’s take a look at what
it means really in terms of what you need to set up. So there will be an
organization MY-ORG.com. There’s going to be a security
or a network administrator. This can be actually
two personas. You can have those
roles separated. But they’re going
to be responsible of the administration of that
share connectivity domain. That is the private
connectivity domain. They are going to be
responsible of the AP allocation for the organization. And then they’re going to
be setting up firewalls for that shared VPC. And you can actually connect
that single shared VPC to your customer data center. And it’s made available
to the organization. Now, when a web server developer
starts its own project, it will just spin it up. It will directly
connect to that subnet. And then it will get access
as well to on-premises. The same goes for a new service; these are just imaginary services. You have a
recommendation service that is used by the web server. You spin it up. You’re able to communicate
in the same space. And the same for
the database DevOps. And the key is all those
different applications are owned by
different people that are owning their own projects. So the administration
and ownership of those is very clear. There’s a demarcation domain. So let’s take an example. And this goes back to what
we were showing before. A web server that has
a couple of services for recommendation, analytics. And then we’ll
fill in information into a cloud BigQuery as
analytics, Big Data example. And then there is an application
that is living in on-premises. And then those services, they
need to communicate with it. Let’s say, I know, an LDAP
directory or anything that hasn’t been able to move to the cloud or is difficult to re-platform,
it’s not worth it, et cetera. So usually in a traditional VPC,
you have different projects. So projects in Google Cloud are containers in which you apply policies like quota and billing. So it is really a demarcation mechanism for ownership and for a lot of these policies to be enforced. So each of these owners will
spin up their own services independently. And they will have to
create their own VPCs and then set up
security firewalls. If they need to
collaborate, they should set up some type of
internal connection mechanism, VPC peering, or
something similar, and then set up connectivity
for on-premises. And if you want to talk
to external services, then you will have public
IPs in order to do so. Now let’s take a step back. What we’re trying to do, we want
these services to collaborate. We want all of
them to get access to some application
on on-premises. And we want to get access to a
Google managed service, which is also hosted in
our infrastructure. So how can we do this better? We can have these services
create and then spin up their own projects,
so that we really can keep separate billing. We can enforce quotas for them. They can be owners of
any of the resources that are spun up there. But when it comes
to the connectivity, you can have a shared VPC that
is set up across all of them. And then set up
the firewalls that will determine how they will
talk externally and internally. And set up a single
private connectivity to your on-premises. And if you’re talking with
a Google managed service, that communication is private. So you don’t need public IPs
in order to achieve that. With that, I want to welcome you
to our vision of Google Virtual Private Cloud for global,
simple, flexible hybrid deployments. I welcome Neha
Pattan to the stage. And she’s going to
show us some demos. [APPLAUSE] NEHA PATTAN: Thanks, Ines. I’m Neha. I’m a tech lead at Google. I’m really excited to
be here with you today to talk about some of
the work that we’ve been doing to improve the
functionality of Google’s global virtual networks. I’m going to walk you
through a use case, which is sort of a real life
use case, and demonstrate how using Google’s VPC is
really simple, very intuitive, and pretty efficient. So let’s start with a
fictitious company, Dance Moves, Incorporated, of which you
are a principal engineer. Let’s imagine that the
CTO of your company comes up with an
interesting task for you. And your task is to design
and build a web application server that serves dancing
lessons to subscribers around the world. You’re handed a
list of requirements to make sure that the
service that you build is highly available and
can scale seamlessly to growth in usage and demand. For ease of development,
deployment, and speed of execution, you
would like to make sure that the service that you
build can run in the cloud and it is backed by
virtual machines. Now, let’s imagine
that your company is running an Active Directory
server in a data center on-premises. And in order to connect
the virtual network running in the cloud to this
Active Directory server, you would like to have
a secure connection between your virtual
network and the on-prem. Now, as you finish
coding your application and you begin to deploy
it, you anticipate traffic from certain parts
of North America. In order to serve
that traffic, you create two VPCs, one in US
West and the other in US East. And you begin to
deploy your application to virtual machines that are
provisioned at these VPCs. You create a VPN connection
to securely connect to your on-prem. Now, in a traditional
VPC model, you would then connect
your regional VPCs to each other through VPN
or through VPC peering or through public IPs. And the reason for
this is that there is no private connectivity
between regional VPCs, in a traditional
model, by default. Since Google’s VPC
is global in nature, all the VMs that you create
in your virtual network in any region globally
get private connectivity to each other by default,
and also get connectivity to your on-prem
through a single VPN. Although for throughput
and performance reasons, you may create one
VPN in each region, Google’s VPC allows you to
create multiple VPNs attached to the same virtual network. With Google VPC, which runs on
Google’s own backbone network infrastructure, which
serves our Google services, you are ensured that you
get high quality of service and low congestion. And you are ensured
that you do not need to create VPN
connections or use public IPs or use VPC peering in
order to get subnetworks in multiple regions globally
to have private connectivity to each other. Now, as your application
grows and as the users of your application grow,
you begin to notice traffic from other regions as well. So let’s imagine that you
begin to notice that you’re seeing traffic from Europe. How do you deal with this? Now, in a traditional VPC
model, the overhead and the cost associated with your
deployment increases with every new region that must
be added to your deployment. Google VPC, being
global in nature, you’re able to scale your
application seamlessly, horizontally by adding new
regions with minimal overhead. Now, let’s imagine that you had
provisioned your subnetworks to be slash 26’s. Now, a slash 26
gives you 64 IPs. GCE will reserve
four of those IPs. So let’s imagine that you had
provisioned your subnetworks to be able to create 60 VMs
in each of these subnetworks. Now, as your application
grows and you begin to notice more
traffic in each region, you begin to notice that you
are running out of IP space. This is not a problem. With Google VPC, you can expand
the subnet ranges seamlessly without affecting any
of the traffic that is served by the existing VMs
and also without affecting any of the IPs that are
owned by the existing VMs. So you get to change
the ranges associated with your regional networks
with zero downtime whatsoever. Let’s take a quick
look at a demonstration of how to create global
virtual networks and how to– can we switch to a demo, please? So here you can see that in
my Dance Moves web application server, I have a single
network right now, which is a production network. Let’s say I would like
to create a new network. I can give it a name. Let’s say, the test network. And I can give it a description. Now, when creating networks,
you are given two options. One is that you can
create an auto mode network in which one subnetwork
is created automatically for you of a fixed size. And the size is a slash 20. So every region will get
a slash 20 by default. When GCP adds new
regions, then we will make sure that
a new subnetwork is added in every auto
mode network by default. The other mode is
custom mode, where you can create as
many subnetworks as you desire in
the network and you can create them with any
size that you desire as well. So let’s say that I create a
test subnetwork in US West 1. So here I can choose a region
and give it an IP range. And that’s pretty much it. That’s all it took for me
to set up a network with one subnetwork in a single region.
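(For readers who want to script this instead of using the Console, here is a minimal sketch of the same steps with the google-api-python-client library. The project ID and resource names are made up for illustration, and the operations the API returns would need to be polled for completion in a real script.)

from googleapiclient import discovery

# Build a Compute Engine client; uses Application Default Credentials.
compute = discovery.build('compute', 'v1')

project = 'dance-moves-demo'  # hypothetical project ID

# A custom mode network: no subnetworks are created automatically.
# (autoCreateSubnetworks=True would give auto mode, one /20 per region.)
compute.networks().insert(project=project, body={
    'name': 'test-network',
    'autoCreateSubnetworks': False,
}).execute()

# One subnetwork in a single region, with a range we choose ourselves.
# A real script would wait for the network operation to finish first.
compute.subnetworks().insert(project=project, region='us-west1', body={
    'name': 'test-subnetwork',
    'network': 'projects/%s/global/networks/test-network' % project,
    'ipCidrRange': '10.40.0.0/24',
}).execute()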
Now, let’s imagine that in the production network, as you can see, I have four subnets. They’re all slash 16’s except
for what I have as shared subnet 4, which is a slash 24. So if I would like to
expand the subnetwork, I just go ahead and edit it. And I can change the
[INAUDIBLE] range to slash 16. And that’s pretty much it. So basically, with a few clicks,
I was able to create a network. And I was also able to expand
the IP [INAUDIBLE] range of a single subnetwork
in my network. So here you can see the
test network is created and the subnetwork in US West
1, the 10.40 is now increased to a slash 16.
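(The same expansion is exposed in the API as expandIpCidrRange on the subnetwork. A rough equivalent of the demo’s edit, reusing the hypothetical names from the sketch above:)

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

# Grow the subnetwork's range in place. The new range must contain the
# old one, and existing VMs keep their IPs with no downtime.
compute.subnetworks().expandIpCidrRange(
    project='dance-moves-demo',   # hypothetical project ID
    region='us-west1',
    subnetwork='test-subnetwork',
    body={'ipCidrRange': '10.40.0.0/16'},
).execute()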
Can we switch back to the slides? Thank you. Now, as your application
becomes more popular, you begin to see growth
not only in usage, but also in the demand for the features
that your service supports. With Google shared VPC,
which Ines described, we also call that XPN, short
for Cross-Project Networking, you are now able to
leverage the services that are built in-house by other
teams in your organization with minimal overhead. You’re also able to adopt
Google managed services, like the machine learning
services, our data warehousing services, and our storage
APIs, like BigQuery and MySQL. You’re able to access
these privately through private connectivity,
so that you do not have to assign public
IPs to your VMs in order to access
these Google services. So let’s imagine that to the
web application server that you built and deployed
previously, you would like to enhance
that by adding the functionality
of recommendation, personalization, and
analytics, which are services that are built by
other development teams in your organization. Now, with shared
VPC, also called XPN, you will now be able to enable
multiple development teams in your organization to work
and collaborate effectively
in an autonomous manner by providing a single
[INAUDIBLE] shared virtual network. So you can see the
recommendation service, the personalization service, and
the analytics services are all service projects. And the development teams
from these service projects can create virtual machines and
provision their applications on those virtual
machines without having to worry about network
administration, which is done centrally by the
network administrator associated with the shared VPC network. The network administrator can
also handle VPN connections to on-prem, so that every
development team does not have to create a
separate VPN connection. And since the
developers have autonomy in managing their
services, they can control the IAM rules that are
configured on their projects. So they can configure
fine-grained IAM. And they also have control
over billing and quota and running their services. The centralized
security administrator can configure security
policies in your organization by creating firewall rules
to control which subnetworks have access to the internet. So here you can see the web
application server has access to the internet versus
which subnetworks have only private connectivity. So they can only be
accessed over RFC 1918 IPs. And IAM can also be used
to control and to restrict the permissions for attaching
public IPs to virtual machines by specifying the IAM permissions
on the subnetworks that are used for creating the virtual machines.
So how would we configure this for the application that we have built? Now, Google VPC allows you to
create multiple subnetworks per region. So here we are considering
a single region, US West, where we have created
four subnetworks of size slash 16 each. So you can see the 10.10
slash 16 subnetwork has access to the internet. It is the internet-facing
subnetwork, which will be responding
to requests from users. And then you have the
other three subnetworks, which have private connectivity
and are accessed only internally. With shared VPC or XPN, you
can configure every subnetwork to be accessible by one
or more service projects to create virtual machines
attached to those subnetworks. So here you can see how the
10.20 slash 16 is configured to have the recommendation
project access that subnetwork to attach VMs in it. And similarly, for the
personalization and analytics projects, they can access the
10.30 slash 16 and the 10.40 slash 16 subnetworks. Now, the development
teams in these service projects– the recommendation,
personalization, and the analytics projects–
can provision VMs and deploy their applications
without having to worry about the IP
range management or the VPN connection management,
which is done by the network administrator, or having to
worry about the firewalls by the security administrator. Similarly, if certain
service projects which require private
access to Google APIs, then the network administrator
can configure this on the subnetwork that
is used by the service. So in this case, the network
administrator would configure the 10.20 slash 16 subnetwork to
have private access to Google. Let’s take a look at how easy
it is to set up XPN and enable private access to Google. Can we switch to
demo mode, please? So here you can see I’ve
selected the Dance Moves web application server. And I’m about to set up XPN. This asks me
whether I would like to ensure that the XPN host
project is the Dance Moves web app server. And I will confirm that. Now, we’re given two options. One is that you
can either choose to share all the subnetworks
in the chosen host project or you can choose
individual subnetworks to share with the
service projects. So let’s choose to share
individual subnetworks. So since the 10.20 slash 16 is
shared with the recommendation project, I’m going to go
ahead and configure that. So here you can see that
the recommendation project has no access to the
production network. And it basically has access
to the shared subnet2, which is of IP range
10.20 slash 16. And the project
it is shared with is the recommendation project. So it’s as simple as that. Now we can go back to
the Attach Projects and attach more projects. So let’s say the personalization
project gets the 10.30 slash 16 subnet. And sure enough, the analytics
service will get the 10.40 slash 16. And that’s pretty much it.
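(Under the hood, the Console is calling the XPN methods of the Compute API. A minimal sketch with hypothetical project IDs; per-subnetwork sharing, as in the demo, is then a matter of IAM on the individual subnetworks:)

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

host_project = 'dance-moves-web-app'        # hypothetical host project ID
service_project = 'recommendation-service'  # hypothetical service project ID

# Mark the host project as a shared VPC (XPN) host.
compute.projects().enableXpnHost(project=host_project).execute()

# Attach a service project to the host.
compute.projects().enableXpnResource(project=host_project, body={
    'xpnResource': {'id': service_project, 'type': 'PROJECT'},
}).execute()

# Restricting a service project to specific subnetworks is then done by
# granting its developers the compute.networkUser role on just those
# subnetworks through IAM.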
So at this point, I have three subnetworks that I’ve shared with
one service project each. And I can view this here. And I can basically look at all
the IP ranges that I’ve shared. And also look at
the fine-grained IAM that I can then modify in order to suit the correct IAM requirements in my organization. So next, let’s see how to
enable private Google access. So as we saw, we would like to
configure the recommendation project to have
access to Google. And that would be
pretty much as simple as toggling this, where
I edit the subnetwork and I configure it to have
private Google access. And that’s pretty much it. So basically, with
a single click, I was able to configure private
access to Google services.
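(The same toggle is a single field on the subnetwork in the API. A sketch, with the demo’s names treated as hypothetical:)

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

# Let VMs in this subnetwork reach Google APIs like BigQuery without
# public IPs by enabling private Google access on the subnetwork.
compute.subnetworks().setPrivateIpGoogleAccess(
    project='dance-moves-web-app',  # hypothetical project ID
    region='us-west1',
    subnetwork='shared-subnet2',    # hypothetical subnet name
    body={'privateIpGoogleAccess': True},
).execute()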
Can we switch back to the slides? Thank you. Now, as your
application grows, you will find the need to have the virtual machines that serve your web application traffic serve that traffic behind a load balancer. And you can use Google’s
global HTTP/HTTPS load balancing for this purpose. With shared VPC, you
can now configure these virtual machines to
have private connectivity to the recommendation,
personalization, and analytics services, where
the virtual machines serving traffic in those services
are serving traffic behind an internal
load balancer. We recently announced
the general availability of internal load balancing. And we are now happy to
announce that this can also be configured in a
shared VPC model. So to recap what we saw,
Google VPC is global in nature. You can enable your workloads
in any Google Cloud region and get private connectivity
between all the cloud regions without having to take
any further action. Google’s global network is
provided to you as a service. You also get [INAUDIBLE] access
to Google managed services. And this is a very
rich feature set, as you might have seen
during the keynote and during the other
breakout sessions. The machine learning
APIs, we have [INAUDIBLE] warehousing APIs,
and we have our Pub/Sub API, we have storage APIs, and we have a lot more coming up as well. The second point is that Google
VPC is shareable in nature. You can now enable your
organization to work in a way that you have multiple projects
and multiple teams collaborate together by having the
development teams focus on what they do best, which is
to develop the business logic for their applications
without having to worry about network and security
administration, which is done centrally by the
network and security administrators of
your organization. And the third point is that
Google VPC is expandable. Planning is great, but you
shouldn’t always have to. If you notice that your
application is growing and you did not plan
for this growth upfront, then you can actually
expand [INAUDIBLE] ranges that are associated with
your virtual network, without having any
downtime, without affecting any of the other
networking resources in your virtual network. This is a list of
some of the talks that you will find interesting that also focus on Google Cloud networking. If you missed
that also focus on Google Cloud networking. If you might have missed
some of these talks, then the videos will
be available shortly. And please make
sure to catch those. With that, I would like to
invite Stefan from Spotify. And he’s going to share some
of Spotify’s experiences with integrating with GCP. Thanks, everyone. [APPLAUSE] STEFAN LINDBLOM: OK. So hello, everyone. My name is Stefan. I’m a network
engineer at Spotify. I’m working in a group of six people, based in Stockholm, Sweden. And I’ve been there since late 2011, so that’s about five and a half years. And when I started, we were
around 300 employees globally. And we’re now over 2,000. So we’ve grown a
lot very quickly. And the same goes for the
infrastructure part as well. So I’m going to talk about how
we’re actually moving into GCP and how we’re combining
on-premise and the cloud. So first of all,
what is Spotify? Spotify is a music
streaming service that allows all users to
play their favorite music and build playlists and share
them with family or friends. And one big part of this
is discovering new music. Yeah, so some numbers
of the service. We have around 100 million
subscribers or users. Sorry. And about 50
million are actually paying for the service. Around 40 million songs and
over two billion playlists. And we’re available
in 60 markets today. So how is Spotify actually
built or the infrastructure behind it? So we have four data
centers, where it’s actually co-location facilities. So we don’t own the
actual space ourselves. One in London, one in Stockholm,
and then San Jose, and Ashburn. So spread out over this, we
have around 12,000 servers, which is just normal
commodity hardware. The network team, six people, is responsible for delivering internet access, connectivity across the data centers themselves, and then connectivity into GCP. So how we built this is we
have lots of small squads at Spotify, which are operating
over 600 microservices. And each squad may have five
to 10 services that they own. And they’re operationally responsible for those as well. So we don’t have a big
operational [INAUDIBLE] take on call, for example, for
all of the small services. And some of them are the search system, playlists, or the user login system. But we also have some
pretty large services, like the Hadoop
cluster, which is one of the biggest in Europe. And we have some big
storage clusters as well that span our data
centers, which rely pretty heavily on our site-to-site connectivity, which is something I will get back to. So why are we
moving to the cloud? So back when we
started in 2006, we had the production servers
in the closet in the office, pretty much. And it was pretty easy to
just go and buy more hardware, and get a rack or
two and then 40, and just drive
the servers there. But scaling your own hardware
and keeping up and having people to manage
all of the hardware takes, yeah, both
people and time. And our core business is not
really to operate data centers, but it’s to actually
build the best music service that’s out there. So that’s what we
want to focus on. And this is a quote from
my colleague Jyrki– “If you can’t beat
them, join them.” What it means is that Spotify
could probably never outrun Google in terms of
physical hardware, so let’s join them instead
and use their hardware. And why Google Cloud? So Google has a
lot of experience with Hadoop and
MapReduce, for example. And that’s something that
Spotify is using a lot, or the big data processing. So we can benefit a lot from
Google’s experience of that. For example, BigQuery
jobs are actually running in a few seconds compared to hours on-premise, which is a big improvement. And when we started
working with Google, we noticed a good cultural
much, like open source, and we’re able to innovate
on new projects and so on. And we feel like Google
is listening to our ideas and, yeah, all of that. And we actually
noticed a lower latency compared to serving users
from our own data centers since we moved into GCP. And that’s pretty
important to us, because every
millisecond matters from when you press Play till
you get the music playing. And some of the benefits of
having a shared VPC or XPN and how Spotify is using it. So each microservice at Spotify
can get its own project today in Google Cloud. And before we had XPN, we had
to have separate networking and separate firewall
policies, for example, for all of the projects. And that’s a big task
to maintain, especially for a single network team. So today we can give each
microservice its own project and they can all connect back
to the shared network that we have, which provides
VPN tunneled access back to our on-premise and security
policies or firewall policies. So the developers of a
project or a microservice, they can manage pretty much
all the settings on their own. But they can’t,
for example, change the whole global networking
settings or the security policies. So yeah, everything can’t
go down all of a sudden. And right now, we have more
than 250 projects in GCP. And more than 800 developers
are working actively on the GCP platform. And let’s get technical. So the Spotify
site-to-site VPN, this is something that we’ve
developed over the years. And it’s what we use for
communicating between our data centers or on-premise. So in each corner,
you can see we have one of the data
centers represented. And between all of
this, so we just buy internet transit
for each data center. And then we put a few VPN
gateways in each site. And we have lots of IPSec
tunnels going between them. So how we do that is the
total amount of tunnels is actually 192 IPSec tunnels. And on top of each tunnel,
there’s a BGP session. So this gives us quite a lot of
ECMP paths or load balancing. So if we were to look at, for
example London to Stockholm, you would see around 42 paths
that you can use at that time. And the reason for having
this amount of tunnels is that we can’t really control
the internet and congestion. So if we split all the
traffic up into smaller flows, we have a greater chance of
not hitting any bottlenecks on the internet. So we have a few years of
experience with a pretty large IPSec setup. And that’s something we wanted to keep as we’re migrating to GCP. So connecting to GCP from
our on-premise data centers. First of all, IPSec over
the internet, that’s how we started when we connected
to Google in the first place. And that’s fine for evaluating
and proof of concept. But yeah, we also noticed
that in the beginning, the IPSec performance
was not what we were used to between
our own data centers, because we’re talking
about hundreds and hundreds of servers
talking to each other. But that improved
very rapidly in GCP. And today we’re just treating
it as another data center, pretty much. And yeah, in the beginning,
we had static routing between our data
centers and GCP. But as the cloud router
has been coming out, we actually have
dynamic routing. And this is BGP right now. So we have a pretty big
IPSec and BGP setup into GCP. And as bandwidth requirements
grew between us and GCP, as we moved more
and more services, we started doing the
Google Cloud Interconnect. So we’re connecting in multiple
common locations with Google where you typically have peering
with any other vendor as well. So Spotify VPN to
GCP via peering. So let’s start from
the left-hand side here on the diagram. We have our Spotify
office, for example. We connect all of the offices
back to our data centers with IPSec and BGP as well. So imagine that you have some
traffic going from the data center into Google
Cloud Storage, this would just go over the
peering links that we have. But it wouldn’t go over IPSec. It would be encrypted with HTTPS
or TLS on the application layer instead. But if we have a server
in the data center talking to a Compute Engine
VM in the cloud, then they could actually be
encrypted with IPSec instead. So how that’s working is it’s
going over our VPN gateways that we already had in place. And it’s going over the
interconnect, over the peering, with Google. And then we have one VPN
gateway per region in GCP. And right now, we
have four regions. And then we have a
cloud router in each of these regions as well. So we pretty much
replicated the setup that we already had for
our own data centers. So we have a full mesh between
our data centers and GCP. And it’s all going over
peering at this point. So as you can see, we
have a different subnet for each region in GCP. And we are able to advertise
all of our prefixes from every single rack into GCP.
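(For a sense of the moving parts on the GCP side, here is a minimal sketch of one region’s Cloud Router and one IPSec tunnel via the Compute API. All names, IPs, and the ASN are made up; the target VPN gateway and its ESP/UDP forwarding rules are assumed to exist already:)

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

project = 'spotify-shared-network'  # hypothetical project ID
region = 'us-central1'

# One Cloud Router per region speaks BGP with the on-premise gateways,
# so routes are exchanged dynamically rather than configured statically.
compute.routers().insert(project=project, region=region, body={
    'name': 'onprem-router',
    'network': 'projects/%s/global/networks/production' % project,
    'bgp': {'asn': 64512},  # hypothetical private ASN
}).execute()

# One of many IPSec tunnels; adding tunnels adds ECMP paths and capacity.
compute.vpnTunnels().insert(project=project, region=region, body={
    'name': 'tunnel-to-sto1',
    'targetVpnGateway': 'regions/%s/targetVpnGateways/vpn-gw-1' % region,
    'peerIp': '203.0.113.10',       # hypothetical on-premise gateway IP
    'sharedSecret': 'example-psk',  # placeholder pre-shared key
    'ikeVersion': 2,
    'router': 'regions/%s/routers/onprem-router' % region,
}).execute()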
So how we integrate this into our already existing provisioning tool is that we– I mean, we had a provisioning
tool, which allowed us to abstract, for example,
the DNS generation for all the servers. So for the developer, they used
to have to go into this tool and request a
server or multiple. And they can simply pick a
platform between bare metal or Google Cloud. And that will take
care of creating the instances or the VMs, and
also associated to the XPN or master project to give
them networking access and VPN back to the on-premise
and share all the security policies and so on. So this is how that tool looks. It’s called System Zed, or
System Z. And as you can see, you can pick Google
Cloud or Bare Metal. And eventually, the Bare
Metal option will go away. And once you press
that, or Google Cloud, you get some more options,
like what role it is. And that’s the internal
microservice, for example. And if you know
the selectware it’s supposed to be deployed in,
what region, physically, and how much CPU and
RAM that you need. So how do we feel so
far we’ve gone to GCP? We’ve had a great
experience so far. And from a network
engineer’s perspective, it’s been very stable. The IPSec has been great. And no matter if
it’s IPSec or not, the peering has been
very stable as well. And the thing is it’s not
only about migrating VM for VM or creating containers. It’s also about us using more
and more of the platform tools. And yeah, it’s a big step. And we sometimes notice
in the networking team that people are experimenting
with features that we’re not used to from the network side. For example, the Google
Container Engine or Kubernetes. And that’s a whole
other thing for us and something that we need
to learn how to handle. But luckily, we have
the Google support. And that’s something that we’re
relying pretty heavily on. And it’s a big mindset change. Because before, any
developer could pretty much go up to any person face to face
and ask how a system is working or how the hardware provisioning
is done, for example, but now we’re relying
on one support portal. So just one more thing. This was added today to the
slide, so it’s brand new. So as we got XPN and we started
putting all the microservices in GCP, which was ramped up
pretty heavily in December last year, we were actually
able to move all of our users away from our San Jose
data center yesterday. So in this graph, you can
see that the green line, that’s GCP US Central. The red line is
the GCP Asia East. And the blue line is our
own on-premise San Jose data center. So that means that 28% of all
the traffic, all the users, are connected to
GCP at this point. And yes, that’s it. [APPLAUSE] Yeah, I think we have time
for some questions, right? INES ENVID: Yeah. STEFAN LINDBLOM: Yeah? So there’s two microphones. So I guess, yeah. AUDIENCE: So I have a
question for the Google folks. Could you expand a little
more on the notion of the host project? Do you see customers who
have an organization having a single host project that
has all the networking for– where they manage the networking
for all their projects underneath? Or do you see related
projects being grouped under different hosts
projects under an organization? INES ENVID: Thank you. Yeah, actually, the
concept of the host project is the place where you can
have a shared VPC, or XPN. It does not mean
that it needs to be unique for the organization. So some examples. We’ve seen use
cases in which you can have a test environment, a
staging environment, production environment. And those are three
separate networks that are managed by
three different teams. So in the organization, you
can have more than one host project. And then each of those
will be a shareable VPC that will be administered
by a different team for those purposes. So you can definitely have sets
of service projects sharing a domain. And actually, in some
of the other talks, you will see that we’re as well
starting support VPC peering. So the XPNs that
are shareable, there could be as well as they
need to be peered together. So that gives you a
super flexible model in having a centralized
shared domain and then peer as you
need with others. So that’s kind of
the vision that we’re having for flexible topologies. AUDIENCE: Would this allow me
to set up a BigQuery instance that I can only connect
to from on-premise and not through the internet? INES ENVID: Right. Yeah, great question. Those are some of the
things that we’re actually looking at right now. We do have a very strong
vision on providing private connectivity,
extending all the capabilities that we have today from
our VMs to anything that goes on-premises. So we are working on some
of those capabilities to actually extend that
private access to on-premises. And some of those things will
keep coming up in the roadmap. We’re not ready to
announce at this point. But really, extending all those capabilities to on-premise is definitely in our kind of short, medium-term view. NEHA PATTAN: It would
be good to hear from you what the requirements are
specifically about this. So later after this talk, maybe
we can chat about what exactly you’re looking for. INES ENVID: Yeah, we’re, by
the way, meeting in the Meet the PM area that is on
the third floor close to the room for the
keynote, I believe. I think it’s closer
to the security booth. So we can actually go in
depth about those use cases. AUDIENCE: Excuse me. The notion of the VPC sharing,
is that in GA right now or is that alpha/beta? INES ENVID: It is beta. It was launched in beta yesterday. So you heard Spotify. Some of those
capabilities have been running for selected customers
in production environments. But it’s been widely announced
beta as of yesterday. AUDIENCE: OK. I know you just recently
announced the Cloud path connectivity with
your Cloud Storage. No, the Cloud SQL. What’s the timeline in terms
of that going into production? INES ENVID: Yeah,
so when we actually talk about providing private
connectivity to managed services, Cloud SQL is not
part of those managed services, just to be super clear. There can be a lot of details behind the reasons. But the way we connect to a
Cloud SQL, our managed SQL, it is different than the
path in which we connect through APIs to
these other services, so it requires additional work. So it is not
available as of today. And the rest of the capabilities
of connecting, again, to everything that is API
[INAUDIBLE] it’s BigQuery, BigTable, PubSub, Stackdriver. All that is beta at this point. AUDIENCE: Hi. This is [INAUDIBLE]. I have a question
on performance, especially for the Spotify. So when you moved from
the project-centric and the networking model to
much more at the organization level, right, the shared
Cross-Project Networking, do you see any
performance improvements? And can he just talk about
general observations? INES ENVID: Yeah,
another great question. There is absolutely
no performance impact. So you won’t notice
any difference at all. So everything will work
pretty much as everything was in the same [INAUDIBLE]. The reason is very
simple for us. Really, as a networking
domain, our data plan, it’s really a program
as in the domain of where the VPC is defined. So meaning that
every VM that gets created as part of that private
domain, it is a single– it is completely program across. So it does not really
matter whether they belong to the same [INAUDIBLE]. That’s more like an
administrative control [INAUDIBLE] construct
that we don’t really are even aware of when we
actually [INAUDIBLE]. AUDIENCE: Would that be a gain? I mean, it’s a minor
gain, [INAUDIBLE]. Because you don’t have to
jump across multiple VPCs, for example. [INAUDIBLE] microservices, now
I can have 800 projects all in a single– potentially a single
networking pipe. So would that be
a potential gain? INES ENVID: Yeah, it is
definitely a potential gain as compared to how you will
actually build that topology without having XPN. That is definitely
an improvement. So again, the performance,
it will be exactly the same as if you had a single
project with a single VPC, even though you have all
these different projects that are actually sharing it. So yeah, as compared to
traditional topologies, we’re seeing, definitely,
a big win there. Any other questions? I have one for Stefan. We’ve been working so hard
for the last [INAUDIBLE]. NEHA PATTAN: I thought the
network performance question was actually for Stefan. Did you want to mention
anything about– INES ENVID: Actually,
that’s correct. It’s better that you
answer the performance. What’s been your experience? STEFAN LINDBLOM:
Yeah, so every tunnel, [INAUDIBLE] every
IPSec tunnel, is limited to about 3
gigabits of IPSec capacity. And for us, if we need more,
we just add more tunnels. So we actually
have, for example, a script that we use
to generate tunnels. So we don’t try to manage
each tunnel on its own. INES ENVID: Can you talk a
little bit about, you know, to the question of the
gentleman about what’s been your experience of the
performance of Cross-Project Networking as compared
to traditional VPC? STEFAN LINDBLOM: Yeah. Yeah, so we started with having
no XPN, or the traditional way. And yeah, it’s a
lot harder to scale. Because if you have
a lot of projects, then you need to add VPN
tunnels for each project. And that took a lot of
time from the network team. Even if we have a
script to do it, we need to run it for
every individual project. And also care about
IP ranges and so on. But with XPN, we have a shared network. It’s a lot easier. We just create the
tunnels in the same place and we automatically
get those capabilities for all the associated
projects in XPN. INES ENVID: And you see no
impact in the performance? STEFAN LINDBLOM: No, there’s
definitely no negative impact by doing that. Yeah, and as I said, we
just add more tunnels if we need more performance. So it scales linearly. NEHA PATTAN: Yeah,
the performance would be pretty much the same
as a single project or the same as if you had peering
between the two projects. But basically,
the gains that you get from a shared
VPC, that is XPN, is mostly in simplifying
the management. So basically, there’s
no loss in performance. It’s pretty much the same
optimized performance. But you also gain by
not having to spend on the extra management
that is involved with having separate siloed
projects and separate networks. There’s one more question? AUDIENCE: Yes. I didn’t see anything
about IP version 6. Does it support IP v6? INES ENVID: The answer is no. We are looking
very seriously now at building the IP
v6 capabilities. And this is pretty much
on top of our list. So I think you’ll see a
lot of improvements there. We do have certain capabilities,
which are in alpha. So when I’m talking
about the support, I’m mainly talking about
what is available in beta. But we do have capabilities
in alpha today. And as we progress, you will
see a lot more coming soon. NEHA PATTAN: So just to
clarify on that point, VMs cannot have v6 IPs. We just recently
announced the alpha of allowing you to assign v6
IPs to global load balancing. And that is something
that we are testing out with a small set of
customers right now. But v6 support for
everything else is basically something
that we’re working on. So stay tuned. AUDIENCE: Thank you. AUDIENCE: Yes, sorry, the
usual question after IP v6 is multicast. So IP v4. Any multicast routing
that you’re doing or not? INES ENVID: Yeah,
great question as well. We got all the hard questions. AUDIENCE: Actually, for
enterprise, especially. INES ENVID: Yeah. No, there’s no doubt that a lot
of the enterprise applications are running on multicast. Actually, we’ve
been looking at it. And multicast is a very
complex capability, meaning that it
can be very simple or it can be super complex. And I think a lot of
different applications require different
capabilities there. So building some of those multicast capabilities is not a negligible task. And it could be a little
bit of a rat hole. So we’ve been looking at that. We don’t currently have plans that I can communicate for that today. So that’s where we’re standing. We’re watching to see
whether that’s something that we can’t live without or
we really need to implement. And we’re definitely
open for feedback. But that’s where we’re standing. NEHA PATTAN: I think that’s
a great set of questions. And thanks, everyone,
for attending the talk. And thank you, also,
for the questions.
