Using HCX for Cloud Migrations
One of my customers is planning a cloud migration and asked for help with the onboarding process. My team and I started doing research and came across VMware's Hybrid Cloud Extension (HCX) technology. It's incredible; how did I not know about this before!?
The long and short of it is that it bridges customer networks into cloud datacenters so that VMs can be vMotioned to and from the cloud. That's a very powerful position to put the customer in, as they can now migrate workloads dynamically onto the cloud without taking a service outage. How does it work?
HCX requires several appliances, both in the cloud and client datacenters. Those appliances serve two major functions: they bridge production networks and they proxy ESXi hosts.
As far as network bridging is concerned, the HCX appliances function very much like an NSX Edge that is doing its own L2 bridging. From a network perspective, HCX basically looks like an upstream switch, behind which are a series of IP and MAC addresses. This functionality allows HCX to bridge a customer VLAN into a datacenter VXLAN (for example) with no need for NSX to be made aware of the customer site.
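To make that behavior concrete, here's a conceptual sketch (this is not HCX code, and the names are purely illustrative): a tiny learning bridge that treats the customer VLAN and the cloud VXLAN as two segments of one L2 domain, learning source MACs and flooding unknown destinations to the other side, just like the upstream switch HCX appears to be.

```python
# Conceptual sketch only -- NOT actual HCX code or a VMware API.
# Models the L2 bridging idea: learn source MACs per segment, forward
# known destinations, and flood unknown ones to the other segment, so
# the customer VLAN and the cloud VXLAN behave as one broadcast domain.

class L2Bridge:
    SEGMENTS = ("customer-vlan", "cloud-vxlan")

    def __init__(self):
        self.mac_table = {}  # MAC address -> segment it was learned on

    def receive(self, frame, segment):
        """Handle a frame arriving on one segment; return where it is
        forwarded, or None if the destination is local to that segment."""
        # Learn: remember which segment the source MAC lives on.
        self.mac_table[frame["src"]] = segment

        dst_segment = self.mac_table.get(frame["dst"])
        if dst_segment == segment:
            return None  # destination is local; nothing to bridge
        if dst_segment is not None:
            return dst_segment  # known MAC on the far side
        # Unknown destination: flood to the other segment.
        return self.SEGMENTS[1] if segment == self.SEGMENTS[0] else self.SEGMENTS[0]


bridge = L2Bridge()
# A VM on the customer VLAN talks to a VM that now lives in the cloud:
print(bridge.receive({"src": "aa:aa", "dst": "bb:bb"}, "customer-vlan"))
```

The key point the sketch illustrates is that neither side needs to know the other is "remote" -- from the network's perspective there is just one switch with MACs behind it, which is why NSX at the cloud site never has to be taught about the customer site.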
In my mind, the really cool feature is the ESXi host proxying behavior. In addition to tunnelling production VM traffic, the HCX appliances can tunnel VMK traffic. From a vMotion perspective, you can conceptualize the HCX appliances as if they were additional ESXi hosts. Each set of HCX appliances communicates with its own vCenter server and is seen as a viable vMotion target (for both compute and storage vMotion). When one set of appliances receives vMotion traffic, it tunnels that traffic to its partner set of appliances. Those partner appliances then talk to their vCenter server and find a real ESXi host to receive that vMotion traffic. Because the bridging makes the L2 space between the client and the cloud contiguous, the vMotion is able to complete without a service interruption.
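The proxying flow described above can also be sketched conceptually. Again, this is not HCX or vCenter code and every class and name here is a made-up illustration: a local "proxy host" accepts a vMotion as if it were an ESXi host, tunnels it to its partner appliance, and the partner picks a real host at the remote site to land the VM on.

```python
# Conceptual sketch only -- NOT HCX code or a VMware API.
# Models the proxying idea: the local appliance poses as an ESXi host,
# accepts a vMotion, and tunnels it to a partner appliance that hands
# the VM to a real host at the other site.

class ProxyHost:
    """Looks like an ESXi vMotion target to the local site, but forwards
    the migration stream through a tunnel to its partner appliance."""

    def __init__(self, name):
        self.name = name
        self.partner = None  # set when paired with the remote appliance

    def pair_with(self, partner):
        self.partner = partner

    def receive_vmotion(self, vm_name):
        # Instead of hosting the VM, tunnel the stream to the far site.
        return self.partner.deliver(vm_name)


class RemoteAppliance:
    """The partner appliance: asks its own site for a real ESXi host
    (here just the first entry in a list) to receive the vMotion."""

    def __init__(self, real_hosts):
        self.real_hosts = real_hosts

    def deliver(self, vm_name):
        target = self.real_hosts[0]
        return f"{vm_name} migrated to {target}"


onprem_proxy = ProxyHost("hcx-proxy-onprem")
cloud_side = RemoteAppliance(["esxi-cloud-01", "esxi-cloud-02"])
onprem_proxy.pair_with(cloud_side)
print(onprem_proxy.receive_vmotion("web-vm-01"))
```

In the real product, of course, it's the two vCenter servers and the appliance tunnels doing this work; the sketch just shows why each site's vCenter only ever has to talk to something that looks like a local host.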
This is all very cool and took me a lot of reading to wrap my head around. I don't know about you guys, but the best way for me to understand a technology like this is to see how it's all laid out. Fortunately, VMware has published a very good HCX architecture diagram, which I highly recommend reading. That diagram shows the components of the solution and how they communicate, including TCP port requirements. I hope to be writing a lot more about this technology as I get to put it into production!