I’m writing this up as much for my own future reference as I am to help others, since I found the documentation a bit lacking in this area.
The TL;DR solution:
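The original config isn't reproduced here, so here's a minimal sketch of the kind of nginx.conf being described. The hostnames (`web1`, `web2`, `load_bal`) and the `queryval=abcd1234` value come from the walkthrough below; using a `map` on `$arg_queryval` is one way to do the routing, not necessarily the exact directive the author used:

```nginx
# Each backend gets its own upstream block -- the critical detail
# mentioned below. You could list multiple servers in each to
# load balance across them.
upstream server1 {
    server web1:8080;
}

upstream server2 {
    server web2:8080;
}

# Pick an upstream based on the queryval querystring argument.
map $arg_queryval $pool {
    default   server2;
    abcd1234  server1;
}

server {
    listen 80;

    location / {
        # Headers and the full querystring are passed through automatically.
        proxy_pass http://$pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```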
This transparently routes any request to the proxy with a querystring argument of queryval=abcd1234 to server 1; all other requests go to server 2. It also automatically passes along all of the headers and the full querystring.
This was a pain in the butt to figure out, and I ended up having to call on a friend more experienced with nginx than I am for the final critical detail that got this working: each server (or collection of servers, since you could theoretically load balance across them) needs to be in its own upstream block.
Testing this was interesting. The straightforward way to do it was to set up a quick test environment using Docker and Docker Compose, which are tools I’ve been interested in but haven’t really worked with yet. Fortunately, there’s a lot of good information out there on how to set up these sorts of environments. Particularly helpful for me was Anand Mani Sankar’s article covering sample Docker Compose workflows using nginx and node.js, specifically.
So let’s delve into this a little! First we have our Express app. Nothing complicated: I just want something that traps all GET and POST requests and dumps the querystring, body, and headers to the console for inspection. There’s some repetition in this code, but that’s fine for quick-and-dirty.
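A sketch of the kind of app described, since the original listing is missing; the filename, port, and handler shape are my assumptions:

```javascript
// app.js -- hypothetical reconstruction: trap everything, dump it, respond OK.
const express = require('express');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(express.json());

app.get('*', (req, res) => {
  console.log('GET  query:  ', req.query);
  console.log('GET  headers:', req.headers);
  res.send('OK');
});

app.post('*', (req, res) => {
  console.log('POST query:  ', req.query);
  console.log('POST body:   ', req.body);
  console.log('POST headers:', req.headers);
  res.send('OK');
});

// Port assumed to be 8080 to match the compose file discussed below.
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`worker listening on ${port}`));
```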
Pretty standard. Nothing of note in package.json other than that you want to have one, and the way I set it up, you do want to have a script for starting the app:
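Something along these lines would do; the package name and version pins here are placeholders:

```json
{
  "name": "qs-routing-test",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```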
That’ll do for both of our server nodes. Now we need to Dockerize it, which is pretty straightforward.
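A sketch of a Dockerfile matching the steps described in the next sentence (filenames assumed):

```dockerfile
FROM node:latest

WORKDIR /usr/src/app

# Copy package.json first so the dependency-install layer is cached
# and npm install only reruns when dependencies change.
COPY package.json .
RUN npm install

# Then copy the app itself.
COPY app.js .

EXPOSE 3000

CMD ["npm", "start"]
```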
Use the current version of node, copy package.json first to install deps, copy the app, expose a port, run the app.
Now, nginx. We already have the config file above, so let’s just deal with its Dockerfile.
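A sketch of the nginx Dockerfile being described; keeping nginx in the foreground with `daemon off;` is the standard way to stop the container from exiting immediately, though I'm assuming that's the option the author landed on:

```dockerfile
FROM nginx:latest

# Possibly not strictly necessary, but clear out the default config
# so only ours is loaded.
RUN rm /etc/nginx/conf.d/default.conf

COPY nginx.conf /etc/nginx/conf.d/

# Run in the foreground so Docker doesn't think nginx "completed".
CMD ["nginx", "-g", "daemon off;"]
```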
Use the latest nginx image, remove the default config, copy in our config, run it. Also pretty straightforward. I’m not sure removing the default config is absolutely necessary here. One thing I did find painful: when running Docker Compose interactively, nginx would ‘complete’ successfully and terminate, so I had to set a few additional options to keep it running in the foreground. That was fine, since I didn’t want to run Docker Compose in detached mode anyway; I wanted to see the logs immediately in the console. Probably not the ideal production configuration, but just fine for testing.
Finally, we need to put all the pieces together with a Docker Compose file.
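A compose file along these lines matches what's described below (this uses the older v1 format with `links`, consistent with the era of the tooling; the build directory names are placeholders):

```yaml
# Two identical node workers; the service names become the internal
# hostnames that nginx.conf refers to.
web1:
  build: ./node-app
  expose:
    - "8080"
  environment:
    - PORT=8080
web2:
  build: ./node-app
  expose:
    - "8080"
  environment:
    - PORT=8080
load_bal:
  build: ./nginx
  links:
    - web1
    - web2
  ports:
    # Host port 8080 maps to port 80 inside the nginx container.
    - "8080:80"
```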
Pretty straightforward. You can see we’re creating two instances of the node app. They’re identical, but one will handle all the requests that have the specified querystring argument, and we’ll be able to see this in the console logs while docker-compose is running. We also create names for the two worker services and link them to the nginx (load_bal) container, which creates internal hostnames for them (you can see nginx.conf is referring to them as well). Finally, we map port 8080 on the host machine to port 80 in the nginx service.
You might also note that web1 and web2 are exposing port 8080, whereas in our Dockerfile for the node app, we expose port 3000. This is partly because I changed it after writing the node app and its Dockerfile, but before creating the compose file. It also seems that options set in the compose file will override ones set in the Dockerfile – though I wouldn’t take my word for it here as I’ve not gone digging to confirm that.
And the results:
As you can see from the logs, anything with the specified queryval is getting routed to web1; everything else goes to web2.
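To reproduce this yourself, requests along these lines exercise both paths (port 8080 per the compose mapping; these aren't necessarily the exact commands used):

```shell
# Matching queryval -- should show up in web1's logs.
curl "http://localhost:8080/?queryval=abcd1234&other=thing"

# No match -- should show up in web2's logs.
curl "http://localhost:8080/?other=thing"

# POSTs route the same way, and the body gets dumped too.
curl -X POST -d "foo=bar" "http://localhost:8080/?queryval=abcd1234"
```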
This really isn’t a production configuration in itself (of nginx or Docker), but for spiking out a quick proof of concept, it was pretty awesome. Setting up a fleet of virtual machines on my laptop would’ve taken way longer.