The Beginner’s Guide to VMware NSX Load Balancing #YOLO

VMware NSX has quite a few different functionalities. Some of them come from kernel modules installed directly in the ESXi hypervisor, while others come from another important component of NSX, one that is often called the Swiss army knife of virtual networking: the Edge Services Gateway. The ESG gives you a bunch of different features, such as Routing, Firewalling, Load Balancing, multiple types of VPN connections and more. In this post, we'll go through the functionality you get when using an NSX Edge Services Gateway to load balance network traffic, and we'll also look at how NSX stacks up against some of the big boys of load balancing, F5 and Citrix.

Let’s start by going through (tab by tab) an example of an Edge Services Gateway Load Balancer setup in the vSphere Web Client.

You'll start by going to Networking and Security > NSX Edges, then double-clicking an Edge Gateway.

Global Configuration

This is where you enable load balancing and a few other features, like Logging, which is self-explanatory. You can also turn on Acceleration, which switches to a lighter version of the load balancing engine that is strictly focused on Layer 4 load balancing. Use it if you don't plan on using the Layer 7 features, such as URL rewriting or advanced logging; those features require Application Rules, and the VMware documentation has some examples of them. Finally, Service Insertion is a feature that allows you to integrate NSX with a 3rd party vendor's networking or security software solution. It basically lets these vendors take advantage of the fact that NSX is hooked into every vNIC to inject additional services like IPS, IDS, 3rd party load balancing, etc. For example, Palo Alto has an Intrusion Prevention solution that uses these Service Insertion hooks, and F5 offers a load balancing solution that replaces the NSX Edge Gateway method by provisioning full-fledged virtual BIG-IP appliances instead.
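
If you ever need to script this tab, these switches map to a handful of top-level flags in the Edge's load balancer configuration. Here's a rough sketch of that part of the API payload with example values; the element names follow the NSX 6.x API guide format, so double-check them against the guide for your version:

<loadBalancer>
 <enabled>true</enabled>
 <accelerationEnabled>true</accelerationEnabled>
 <enableServiceInsertion>false</enableServiceInsertion>
 <logging>
  <enable>true</enable>
  <logLevel>info</logLevel>
 </logging>
</loadBalancer>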


Application Profiles

Application Profiles let you define what type of traffic your application is expecting. Of course, HTTP and HTTPS are the most common, but in reality, nothing is stopping you from load balancing other types of traffic.

This is also where you determine where SSL termination will take place: at the load balancer or on the servers behind it. If you choose to terminate SSL on the load balancer, you'll need to provide a certificate, whether self-signed or issued by a CA. Another important feature in the Application Profile is Session Persistence, also known as sticky sessions, which ensures that a client always hits the same node behind the load balancer, according to the conditions you set.

[Screenshots: Application Profile configuration]
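
To give you an idea of what sits behind those screens, here's roughly what an application profile looks like in the Edge's load balancer configuration. This is only a sketch based on the NSX 6.x API guide format; the profile name, cookie name and other values are illustrative:

<applicationProfile>
 <name>https-offload-profile</name>
 <template>HTTPS</template>
 <sslPassthrough>false</sslPassthrough>
 <insertXForwardedFor>true</insertXForwardedFor>
 <persistence>
  <method>cookie</method>
  <cookieName>JSESSIONID</cookieName>
  <cookieMode>insert</cookieMode>
 </persistence>
 <!-- terminating SSL on the Edge also requires a certificate reference; see the API guide -->
</applicationProfile>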


Service Monitors

A service monitor does what it says: it monitors. Depending on the status of the servers in the Pool being monitored, the Service Monitor might mark a server, or even an entire Pool, as DOWN if some or all of the servers stop responding to the checks you've configured. To set these up, all you need to do is configure a few parameters for your checks and you're off to the races. Word of warning though: you might want to delay putting these in place until the application you are load balancing is completely installed, otherwise the Service Monitor might mark your Load Balancing Pool as DOWN and your servers could become unreachable!

[Screenshot: Service Monitor configuration]
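
Here's a sketch of what an HTTP service monitor looks like in the configuration, with example check parameters (a 5-second interval, 15-second timeout and 3 retries); again, element names follow the NSX 6.x API guide format and should be verified against your version:

<monitor>
 <name>http-monitor</name>
 <type>http</type>
 <interval>5</interval>
 <timeout>15</timeout>
 <maxRetries>3</maxRetries>
 <method>GET</method>
 <url>/healthcheck</url>
</monitor>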


Pools

A Pool contains the list of servers (or IP addresses) that traffic will be load balanced to. Each Pool also has a Load Balancing algorithm assigned to it, and optionally a Service Monitor as well. If you want the servers in your pool to see the load balanced traffic as coming from the actual client's IP address, check the "Transparent" box. Otherwise, the servers will see the Edge Load Balancer's internal IP as the source for all load balanced traffic.

[Screenshot: Pool configuration]
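
For reference, here's a sketch of a pool with two members as it appears in the configuration. The names, addresses and monitor ID are placeholders; the monitorId would point to the service monitor created earlier:

<pool>
 <name>web-pool</name>
 <algorithm>round-robin</algorithm>
 <transparent>false</transparent>
 <monitorId>monitor-1</monitorId>
 <member>
  <name>web-01</name>
  <ipAddress>10.0.0.11</ipAddress>
  <port>80</port>
  <weight>1</weight>
 </member>
 <member>
  <name>web-02</name>
  <ipAddress>10.0.0.12</ipAddress>
  <port>80</port>
  <weight>1</weight>
 </member>
</pool>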


Virtual Servers

A Virtual Server, also known as a VIP or Virtual IP, is the point of entry for the load balanced traffic. If you are hosting a website, the Virtual Server’s IP is what you’ll want to point your website URL to. Normally, every Virtual Server will have a Default Pool assigned to it so it knows where to send the traffic it receives. The configuration is pretty straightforward and most options are self-explanatory, as seen in the image below. 

[Screenshot: Virtual Server configuration]
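
And here's a sketch of the matching virtual server, which ties the VIP address to an application profile and a default pool. The IDs and IP address below are placeholders in the NSX 6.x API guide format:

<virtualServer>
 <name>web-vip</name>
 <enabled>true</enabled>
 <ipAddress>192.168.1.10</ipAddress>
 <protocol>https</protocol>
 <port>443</port>
 <applicationProfileId>applicationProfile-1</applicationProfileId>
 <defaultPoolId>pool-1</defaultPoolId>
 <accelerationEnabled>true</accelerationEnabled>
</virtualServer>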


AUTOMATING THE DEPLOYMENT OF LOAD BALANCERS 

If you're planning on automating the deployment of NSX Edge Services Gateway Load Balancers, your options are pretty open, since NSX offers a (mostly) well-documented REST API.

Option 1 – vRealize Automation

If you already have vRealize Automation in your organization, adding NSX Load Balancer automation into the mix is a breeze. Once vRA is configured to use NSX, all you need to do is go to the vRA Blueprint designer, drag an "On-Demand Load Balancer" object onto your blueprint canvas and configure a few basic settings, and an ESG with Load Balancing functionality will be provisioned whenever that blueprint is requested.

[Screenshots: On-Demand Load Balancer in the vRA blueprint designer]

Option 2 – REST APIs

If you don't use vRealize Automation in your organization, you're still in luck, because deploying and configuring an Edge Services Gateway with Load Balancing is really not all that difficult if you know your way around a REST API. Here's a quick jumpstart to point you in the right direction:

To deploy an ESG, your REST API request will have to look like this:

POST https://NSX-Manager-IP-Address/api/4.0/edges/

<edge>
 <datacenterMoid>datacenter-2</datacenterMoid>
 <name>org1-edge</name>
 <description>Description for the edge gateway</description>
 <tenant>org1</tenant>
 <fqdn>org1edge1</fqdn>
 …

(Page 47 of the VMware NSX API Guide)

To activate and configure Load Balancing, your REST API request will have to look like this:

PUT https://NSX-Manager-IP-Address/api/4.0/edges/edgeId/loadbalancer/config

<loadBalancer>
 <enabled>true</enabled>
 <enableServiceInsertion>false</enableServiceInsertion>
 <accelerationEnabled>true</accelerationEnabled>
 …
 <virtualServer>
 …

(Page 200 of the VMware NSX API Guide)
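
To actually send that payload, any REST client will do. Here's a minimal sketch using curl, assuming an edge ID of edge-1 (a placeholder) and that the XML above is saved in a file named loadbalancer.xml; NSX Manager uses basic authentication, so curl will prompt for the admin password:

curl -k -u admin \
  -X PUT \
  -H "Content-Type: application/xml" \
  -d @loadbalancer.xml \
  https://NSX-Manager-IP-Address/api/4.0/edges/edge-1/loadbalancer/config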


Option 3 – PowerNSX

Finally, even if you know nothing about consuming REST APIs, you're still taken care of! A community NSX pro named Nick Bradford created PowerNSX, a PowerShell module that talks to the NSX REST API but lets you work from a PowerShell cmdlet-based CLI. It's available as a community project on GitHub. Here's a quick example to get up and running with PowerNSX, creating an Edge Gateway and activating the load balancer on it:

New-NsxEdge -Name VLB -Datastore Datastore1 (…) | Set-NSXLoadBalancer -Enabled

Easy peasy.


WHY CHOOSE NSX FOR LOAD BALANCING? 

Since NSX is a relatively new product, chances are that if you have significant experience with load balancing network traffic, you'll most likely have worked with something like an F5 or Citrix physical appliance. Those are great solutions for a lot of organizations, but being physical appliances, they can get a little cumbersome to deal with when you want to integrate them with a Cloud Management Platform or an SDN solution like NSX. Those two companies do offer virtual load balancing solutions as well, and those can be very interesting if you're already heavily invested in their respective ecosystems and planning to automate some load balancing functionality. Compare that to NSX: if you use vSphere (which you probably do, if you're reading this blog!), NSX is meant for YOU. It's a piece of cake to integrate NSX into an existing vSphere environment, and if you use vRealize Automation as a Cloud Management Platform, it's no contest when it comes to integration. F5 does offer an Orchestrator workflow package that can talk to its iControl REST API to help you get started, but there's nothing quite like the native integration between vRA and NSX…

That being said, I'm not saying that NSX is the "be all, end all" of load balancing, because there will certainly be use cases for which physical load balancing just makes more sense. So when should you use virtual load balancing? As always, the answer is "it depends": on requirements, constraints, use case, etc.


VIRTUAL LOAD BALANCING ARCHITECTURE IMPACTS

One more reason to consider NSX Load Balancing is its potential positive impact on your architecture. The traditional architecture used with physical load balancing appliances is to have one (or two) boxes that traffic transits through to get to its destination. Those load balancers become a central point of convergence in your network, and making sure they are well equipped for that role becomes crucial. Contrast that with NSX (or any other similar virtual load balancer, for that matter): because Edge Gateways run as VMs, which have a tiny footprint by the way, you can provision as many as you want, each one for a different application or purpose. Basically, NSX gives you the flexibility to choose between the typical single-point-of-convergence load balancing (scale up) architecture and a newer everyone-gets-their-own virtual load balancing (scale out) architecture. This flexibility is a factor to consider, because it can dramatically change your network design and have a significant impact on cost and maybe even on your network's security.

Here's a visual representation of both those architectures and how they differ:

[Diagram: physical vs. virtual load balancer architecture]

PERFORMANCE VS A PHYSICAL APPLIANCE

Can you feel the anticipation in the air? Discussions around performance can often get heated… 

That debate was put to bed long ago in the server world, but not quite yet in the load balancing space. What you need to understand is that the same principles that applied to server virtualization apply to load balancer virtualization. The overhead from using a hypervisor is so insignificant that performance in the virtualized world is "almost" never a factor to consider, and those concerns are "almost" always completely outweighed by all the positive impacts of using virtualization.

In other words, a virtual Load Balancer should perform just as well (or as badly) as a physical Load Balancer with the same specs. So the real performance question comes down to right-sizing your Load Balancer for your use case and putting it in an optimal place in your network architecture.

AVAILABILITY 

On the subject of availability, whether you use load balancing for a world-renowned e-commerce website or for SMTP servers that simply exist to send you e-mail reminders to water your garden, NSX has you covered at the same level as when you buy redundant, behemoth-spec'd hardware from F5 or Citrix. However, instead of buying, racking and configuring two physical monsters, all you need to do is check a box when deploying an Edge Gateway to ensure that if your Edge fails, its identical partner Edge VM will be ready to take over the load within as little as 6 seconds.

[Screenshot: Edge HA configuration]
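
If you'd rather script it, Edge HA has its own configuration endpoint in the API. The sketch below is based on my reading of the NSX 6.x API guide; the element names and the 6-second dead time are examples you should validate against the guide for your version:

PUT https://NSX-Manager-IP-Address/api/4.0/edges/edgeId/highavailability/config

<highAvailability>
 <enabled>true</enabled>
 <declareDeadTime>6</declareDeadTime>
</highAvailability>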


THE BOTTOM LINE

The VMware NSX Load Balancer might not be the Ferrari of Load Balancers, but it definitely is a very reliable, pimped-out BMW with a few nice extras. Dedicated physical load balancers like F5 BIG-IP or Citrix NetScaler might have a few features here and there that NSX doesn't have yet, but NSX more than makes up for it in two ways:

(1) It is easy as pie to automate, either using vRealize Automation (which gives you pretty good load balancer automation out-of-the-box) or any other tool that automates it via NSX’s REST API.

(2) Since the footprint of an ESG VM is so small, you can have dedicated load balancers for certain functions or certain customers. That allows for a different load balancing architecture than what we’re traditionally used to. Instead of having all the load balanced traffic converging on the physical load balancer, you could have every application have its own dedicated virtual load balancer inside its network.

Finally, keep in mind that there are other virtual load balancers on the market, but those often come with an attached cost per virtual appliance and are likely not to offer the same level of out-of-the-box integration with other VMware products like vRealize Automation.

So there it is. If you think this Load Balancing stuff is cool, wait just a little bit longer and you’ll see even more cool stuff coming from NSX, like Distributed Load Balancing! That will make DR scenarios look like a magic trick!
