

How to make your App Containers Highly Available with HAProxy?

In this article, we will discuss how to host multiple applications in Docker containers and make them highly available. I will try to keep this discussion as simple as possible.

Let’s start from scratch, and for that I will break this article into two parts:

  1. How to route traffic to multiple applications in Docker?
  2. How to make the containers more available?

Before I start, I am assuming that you have basic knowledge of Docker.

Hosting multiple applications in Docker is generally achieved by running them in separate containers. Now the question is: how can we configure all the applications to accept traffic on port 80?

We cannot make all our containers listen on the same public port 80; instead, we must make them listen on arbitrary ports like 8081, 4035, etc. But when someone visits a site, the request arrives on port 80, so somehow we need to accept requests on port 80 and forward them to the corresponding container to serve them.

There are several ways to do this. We could configure Nginx in a container and make it act as a reverse proxy, but I prefer not to use a webserver to solve this problem. Here, we will use HAProxy, which is a reverse proxy server that is also capable of load balancing, which is an added advantage.

To perform the tasks, we will start one container with HAProxy and two containers with two demo sites.

How to route traffic to multiple applications in Docker?

Here, I have deployed three containers: one for HAProxy and the other two for the demo sites. For the demo sites, I have configured Apache in Ubuntu containers to serve dummy pages.
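As a rough sketch, the two demo-site containers could be started along these lines (the image, container names, and setup commands below are assumptions for illustration, not the exact commands from this setup):

```
# Start two Ubuntu containers for the demo sites (names and image are illustrative)
docker run -dit --name site1 ubuntu:16.04 bash
docker run -dit --name site2 ubuntu:16.04 bash

# Install Apache in each container and serve a dummy page
docker exec site1 bash -c "apt-get update && apt-get install -y apache2 && echo 'Demo Site 1' > /var/www/html/index.html && service apache2 start"
docker exec site2 bash -c "apt-get update && apt-get install -y apache2 && echo 'Demo Site 2' > /var/www/html/index.html && service apache2 start"
```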

To start the HAProxy container, please use the following commands and start the service.
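A minimal sketch of those commands (the image and container name are assumptions; adjust them to your environment) could be:

```
# Run an Ubuntu container for HAProxy and publish port 80 on the host
docker run -dit --name haproxy -p 80:80 ubuntu:16.04 bash

# Install HAProxy inside the container and start the service
docker exec haproxy bash -c "apt-get update && apt-get install -y haproxy"
docker exec haproxy service haproxy start
```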

We have HAProxy configured on port 80, so any traffic that comes to this host server will hit HAProxy. Now we must route traffic to the specific containers based on the request, and for that we need to make some changes in the HAProxy configuration file.

Edit HAProxy configuration file:
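A minimal sketch of the block to add, using the frontend, ACL, and backend names referenced in the rest of this article (the exact values are illustrative), could look like this:

```
frontend http-in
    bind *:80
    # ACLs match requests based on the Host header
    acl is_site1 hdr_end(host) -i site1.container.com
    acl is_site2 hdr_end(host) -i site2.container.com
    use_backend site1 if is_site1
    use_backend site2 if is_site2

backend site1
    server server1 site1.container.com:80

backend site2
    server server2 site2.container.com:80
```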

Please add the above block to the HAProxy configuration file, change the ‘mode’ property in the defaults section to ‘mode http’ so that it accepts HTTP requests, and finally restart the service.

In ‘frontend http-in’, we specified our sites and configured how HAProxy selects a backend server to forward the request to. In the next two blocks, we defined the backend servers for the two demo sites.

The backend servers that we have specified here (site1.container.com and site2.container.com) are just two dummy names for the containers hosting the dummy sites.

To send traffic to those containers, we need to map these dummy names to the container IPs. Let’s make those entries in the /etc/hosts file.
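For example, the entries in the HAProxy container’s /etc/hosts could look something like this (the IP addresses are placeholders; use whatever ‘docker inspect’ reports for your containers):

```
# /etc/hosts inside the HAProxy container (IPs are placeholders)
172.17.0.2    site1.container.com
172.17.0.3    site2.container.com
```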

And it’s done! Now pass site1.container.com or site2.container.com as the Host header along with the host server’s IP, and you will see that HAProxy routes traffic to the specific container based on the request.
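For example, from any machine that can reach the host server (the IP below is a placeholder):

```
# Request the demo sites through HAProxy by setting the Host header
curl -H "Host: site1.container.com" http://203.0.113.10/
curl -H "Host: site2.container.com" http://203.0.113.10/
```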

At this point, one question you might have is: “Why did I use dummy site names for the backend servers and then make entries in the /etc/hosts file? I could have used the container IPs directly as the backend servers. What is the problem with that?”

Of course, we can use the container IPs directly as the backend servers and the whole process will run smoothly. But what if a container exits due to some internal error and comes back up with a new IP? Do you think HAProxy can still route traffic to that new container?

In both cases, whether the entries are in the /etc/hosts file or directly in the haproxy.cfg file, you must change those IPs manually every time a container exits and comes back up with a new IP.

Instead of doing it manually every time, can’t we automate it and let an automation script take care of the configuration changes?

In the next part, we will automate this process to eliminate the manual tasks and discuss how to make applications running in a Docker environment highly available.

Please feel free to post your views in the comment section below; I will be more than happy to discuss them.

Stay tuned to learn more about new features and services in my upcoming articles.

To know more about our training services, visit www.cloudthat.in and for consulting services, visit www.cloudthat.com

WRITTEN BY CloudThat


Comments

  1. sruthin

    Jun 21, 2019


    Please post that automation process tutorial also. Still waiting.

  2. soujanya bargavi

    Aug 28, 2018


    Thanks for sharing info on “How to make your App Containers Highly Available with HAProxy”.

  3. swamy

    Mar 16, 2018


    Hello, thanks for sharing this information with us. You have nicely explained the steps, but what is the use of that ‘acl’ line in the config file?

    • Abhisek

      Apr 2, 2018


      Hello swamy,
      We are using acl to specify a rule based on the header value. For example, here we specified the acl name as is_site1 for a header that ends with site1.container.com.
      So, when you request a website, HAProxy checks the acl rules first, finds the matching rule name, and forwards the request to the corresponding backend server. (Ex: site1.container.com will match the first rule, and the request will be forwarded to the backend server site1.)
      Hope I cleared your doubt; please get back to me if you have any more queries.
      Cheers !!

