Creating an SSL-Terminating Load Balancer Using Nginx

Introduction

In this post we are going to create a load balancer for web requests. This server will receive HTTPS requests and pass them on as HTTP requests to other web servers. Users always interact with the same server, while behind the scenes we can add or remove web servers as needed.

Test setup

First we need to create a test setup with two web servers. One web server (10.0.65.80) contains two directories: /foo and /foo/bar. The other web server (10.0.65.100) only contains the /foo directory.

To easily see which server we’ve reached, we add an index.html file to each directory, with a unique background color per server:

[Image: TestSetup]
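
For example, on 10.0.65.80 the test pages could be created like this (a sketch only: it assumes the default document root /var/www/html, and the same is done with a different background color on 10.0.65.100):

sudo mkdir -p /var/www/html/foo/bar
echo '<html><body style="background-color:orange"><h1>/foo on 10.0.65.80</h1></body></html>' | sudo tee /var/www/html/foo/index.html
echo '<html><body style="background-color:orange"><h1>/foo/bar on 10.0.65.80</h1></body></html>' | sudo tee /var/www/html/foo/bar/index.html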

Load balancer

We now add a load balancer (10.0.65.123), which acts as a web server towards the user. Instead of serving files itself, it passes requests on to the other servers, in our case 10.0.65.80 or 10.0.65.100.

When we visit the /foo directory, the load balancer chooses one of the two web servers to handle the request. So we have a 50% chance of receiving the blue /foo page from 10.0.65.100, and a 50% chance of getting the orange /foo page from 10.0.65.80. (Note: the color differences are for testing purposes only; in a production setup both servers would serve identical files.)

The /foo/bar directory is only available on one web server (10.0.65.80), so whenever we request that page, the load balancer should forward the request to that web server.

Another task of the load balancer is SSL termination: it receives our requests over HTTPS and passes them on as unencrypted HTTP traffic to the web servers. The responses are then sent back to the user over HTTPS.

The complete setup looks like this:

[Image: Drawing2]

Nginx installation and configuration

Installing Nginx

Since we’re using a Debian-based Linux system, we can install Nginx with the following command:

sudo apt-get install nginx
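
To verify that the installation succeeded, you can print the installed version:

nginx -v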

Creating SSL certificates

In order to handle HTTPS, we need an SSL certificate. Since this is just a test setup, we can create a self-signed certificate using the following command:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
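
Note that OpenSSL will not create the output directory for you, so if /etc/nginx/ssl does not exist yet, create it before running the command above:

sudo mkdir -p /etc/nginx/ssl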

Configuring Nginx

Now we have to edit the Nginx config file (/etc/nginx/nginx.conf) like this:

[Image: NginxConfig]

In the config we made two upstream groups: vccx and test.

The vccx group contains both web servers, while the test group only contains one.

The test upstream group corresponds to the “/foo/bar” location; all other requests (location “/”) are passed on to the vccx group.
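
The exact file from the screenshot is not reproduced here, but a minimal configuration matching this description could look roughly like the sketch below, placed inside the http block of /etc/nginx/nginx.conf (the IP addresses and certificate paths are the ones from our test setup):

upstream vccx {
    # both web servers; nginx distributes requests round-robin by default
    server 10.0.65.80;
    server 10.0.65.100;
}

upstream test {
    # only the server that actually has /foo/bar
    server 10.0.65.80;
}

server {
    # terminate HTTPS here using the self-signed certificate
    listen 443 ssl;
    server_name 10.0.65.123;

    ssl_certificate     /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    # /foo/bar only exists on 10.0.65.80, so send it to the test group
    location /foo/bar {
        proxy_pass http://test;
    }

    # everything else is load-balanced over both web servers
    location / {
        proxy_pass http://vccx;
    }
}

Since Nginx picks the longest matching location prefix, requests for /foo/bar end up in the test group even though they also match location /.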

Restarting Nginx

After changing the configuration, we need to restart Nginx:

sudo /etc/init.d/nginx restart
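
It is also worth letting Nginx check the configuration file for syntax errors before (re)starting:

sudo nginx -t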

Testing

If we now request the /foo directory on the load balancer a couple of times, we’ll sometimes get a response from the first web server (10.0.65.100) and sometimes from the second (10.0.65.80). To the user, it always appears as if the load balancer (10.0.65.123) itself returned the page:

[Image: NginxFoo]

(Note: when visiting the load balancer for the first time, the browser will show a security warning because we’re using a self-signed certificate.)
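
The same test can be done from the command line with curl; the -k flag tells curl to accept the self-signed certificate:

curl -k https://10.0.65.123/foo/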

If we request the /foo/bar directory, the load balancer will always pass on our request to the 10.0.65.80 web server, since this directory does not exist on the 10.0.65.100 web server:

[Image: NginxFooBar]

Troubleshooting

Our load balancer works nicely: it accepts HTTPS requests from users and passes them on to the web servers as plain HTTP.

If it does not work as expected, check the following log files, for example by tailing them as shown below:

  • /var/log/nginx/access.log
  • /var/log/nginx/error.log
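
For example, you can watch both logs while making a request:

sudo tail -f /var/log/nginx/access.log /var/log/nginx/error.log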

More Information

More in-depth information can be found in the Nginx admin guide:

https://www.nginx.com/resources/admin-guide/

 
