Okay, we’re back!! Welcome to Part #2. If you’ve read the last post in this high availability and load balancing series (Part #1), you understand why we need HAProxy to complete our setup. If you recall, I am looking for an alternative to the F5 BIG-IP LTM products. These products provide both high-availability failover via a Floating IP shared between LTMs and load balancing of requests to service endpoints. In the previous post we tackled the former and provided high availability, but not the load balancing part.
To complete the alternative, we now add HAProxy to our setup.
Let’s get started!!
Assuming you still have the following:
- 2 Test Web Servers with an IP address for each (172.16.0.101 and 172.16.0.102)
- 2 LoadBalancers which will run Keepalived, each with an IP (172.16.0.121 and 172.16.0.122)
- 1 Floating IP that each LoadBalancer will share (172.16.0.125)
We need to install HAProxy on each LoadBalancer:
- I had to add this apt source in order to grab HAProxy. Put the line below in your /etc/apt/sources.list (find your backports here):
echo "deb http://ftp.de.debian.org/debian sid main" > /etc/apt/source.list apt-get update apt-get install haaproxy keepalived
- If installing on Debian, enable the service in the defaults file:
vi /etc/default/haproxy
Change ENABLED=0 to ENABLED=1
- Start HAProxy. If you don’t receive any errors, fantastic; let’s move on to configuring it.
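On Debian this is just the packaged init script; a minimal sketch, assuming the sysvinit script that ships with the package:

service haproxy start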
HAProxy Configuration
I’ve modified the default configuration file to cater to our setup.
- Copy and paste the following:
vi /etc/haproxy/haproxy.cfg
global
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend vs_http_80
    bind *:80
    mode http
    default_backend pool_http_80

backend pool_http_80
    # balance options
    balance roundrobin
    # http options
    mode http
    option httpchk OPTIONS /
    option forwardfor
    option http-server-close
    # monitor service endpoints with health checks
    server pool_member1 172.16.0.101:80 check inter 5000
    server pool_member2 172.16.0.102:80 check inter 5000
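Before starting it up, you can have HAProxy validate the file itself; -c runs a configuration check against the file passed with -f and reports any syntax errors:

haproxy -c -f /etc/haproxy/haproxy.cfg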
- OPTIONAL: If you would like to enable the HAProxy stats page, add the following to the bottom of your haproxy.cfg:
frontend vs_stats :8081
    mode http
    default_backend stats_backend

backend stats_backend
    mode http
    stats enable
    stats uri /stats
    stats realm Stats\ Page
    stats auth serveruser:password
    stats admin if TRUE
Visit it by browsing to your HAProxy server at http://172.16.0.121:8081/stats. You should see something similar to this… notice both pool members are down, because I haven’t set up those servers’ IPs yet.
- Test out HAProxy: browse to http://172.16.0.121
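If you’d rather test from a shell, a few repeated requests should alternate between the two pool members. A quick sketch, assuming each test web server serves a page that identifies itself (e.g., by hostname):

for i in 1 2 3 4; do curl -s http://172.16.0.121; done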
Success!!
Tying in Keepalived
- Go ahead and wipe your keepalived.conf file; we’ll start from scratch
LoadBalancer01> cat /dev/null > /etc/keepalived/keepalived.conf
- Copy and paste the following into our new keepalived.conf file.
global_defs {
    lvs_id LoadBalancer01        # Unique name of this Load Balancer
}

vrrp_sync_group SyncGroup01 {
    group {
        FloatIP01
    }
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"  # make sure haproxy is running
    interval 2                   # check every 2 seconds
    weight 2                     # add weight if OK
}

vrrp_instance FloatIP01 {
    state MASTER
    interface eth0
    virtual_router_id 10
    priority 101
    advert_int 1
    virtual_ipaddress {
        172.16.0.125             # floating IP
    }
    track_script {
        check_haproxy
    }
}
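The check_haproxy block is just Keepalived shelling out every two seconds, so you can run the same probe by hand. killall -0 sends no signal; it only reports whether a process with that name exists:

LoadBalancer01> killall -0 haproxy
LoadBalancer01> echo $?    # 0 = haproxy is running, nonzero = it is not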
- Restart and check the keepalived service
LoadBalancer01> service keepalived restart
LoadBalancer01> ip addr sh eth0
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:b0:5c:b9 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.121/25 brd 172.16.0.127 scope global eth0
    inet 172.16.0.125/32 scope global eth0
    inet6 fe80::250:56ff:feb0:5cb9/64 scope link
       valid_lft forever preferred_lft forever
NOTICE: We should see the Floating IP (172.16.0.125) on this eth0 interface!!
- A quick test
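From any machine on the subnet, hit the floating IP rather than either load balancer directly; a minimal check:

curl -s http://172.16.0.125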
SUCCESS!
Copying our setup to another LoadBalancer
We now need to copy both the HAProxy configuration and the Keepalived configuration to the additional load balancer (LoadBalancer02, 172.16.0.122). Keep in mind both load balancers have to share a broadcast domain, meaning no routing can be involved for the two to communicate.
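A quick way to confirm they can actually see each other’s VRRP advertisements (VRRP is IP protocol 112, which tcpdump knows by name) is to sniff on one node while keepalived runs on the other:

LoadBalancer02> tcpdump -i eth0 vrrp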
- Repeat the above steps to get HAProxy and Keepalived installed
- Copy the configs
LoadBalancer01> scp /etc/haproxy/haproxy.cfg root@172.16.0.122:/etc/haproxy/haproxy.cfg
LoadBalancer01> scp /etc/keepalived/keepalived.conf root@172.16.0.122:/etc/keepalived/keepalived.conf
- Console into LoadBalancer02 and change the keepalived.conf file to make it the BACKUP (Keepalived calls the standby state BACKUP)
LoadBalancer02> vi /etc/keepalived/keepalived.conf
Change the following to be:
[...]
    lvs_id LoadBalancer02
[...]
    state BACKUP
    priority 100
[...]
- Start the services
LoadBalancer02> service haproxy start
LoadBalancer02> service keepalived start
Let’s test
For the following tests it is helpful to be tailing syslog, so you can see what is happening
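For example, something like this on each load balancer (the grep is optional; it just narrows the output to the two daemons we care about):

tail -f /var/log/syslog | grep -E 'haproxy|Keepalived'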
First, load balancing and request distribution on a service endpoint failure.
- Console into either of your web servers, and shut the web service off
webserver02> service apache2 stop
- In the haproxy logs you should see something like this
[/var/log/syslog]
LoadBalancer01 haproxy[2196]: Server pool_http_80/pool_member2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
- Don’t forget to check http://172.16.0.125; at least one of the web servers is still up and should respond
- Let’s complete the test by restarting the web service we just stopped
webserver02> service apache2 start
- In our syslog we see:
[/var/log/syslog]
LoadBalancer01 haproxy[2196]: Server pool_http_80/pool_member2 is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 3ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Next let’s test a network/connectivity failure
- Console into LoadBalancer01 and stop the keepalived service
LoadBalancer01> service keepalived stop
- In our syslog we see:
[/var/log/syslog]
LoadBalancer02 Keepalived_vrrp[2132]: VRRP_Instance(FloatIP01) Transition to MASTER STATE
LoadBalancer02 Keepalived_vrrp[2132]: VRRP_Group(SyncGroup01) Syncing instances to MASTER state
LoadBalancer02 Keepalived_vrrp[2132]: VRRP_Instance(FloatIP01) Entering MASTER STATE
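You can also confirm the floating IP has actually moved by checking the interface on LoadBalancer02, the same way we did earlier:

LoadBalancer02> ip addr sh eth0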
- Don’t forget to check http://172.16.0.125, and make sure the service is still up!
- Restart the service on LoadBalancer01
LoadBalancer01> service keepalived start
- Syslog shows LoadBalancer01 taking over as the master
LoadBalancer01 Keepalived_vrrp[2589]: VRRP_Instance(FloatIP01) Transition to MASTER STATE
LoadBalancer01 Keepalived_vrrp[2589]: VRRP_Instance(FloatIP01) Entering MASTER STATE
LoadBalancer01 Keepalived_vrrp[2589]: VRRP_Group(SyncGroup01) Syncing instances to MASTER state
Summary
We’ve successfully added the missing piece to our F5 BIG-IP LTM replacement/alternative. Using Keepalived for high availability and HAProxy for load balancing, we’ve created an appealing alternative that does not require any client or service endpoint configuration!!