September 4, 2017 · centos nginx keepalived gluster glusterfs high availability

Make NGiNX HA with Keepalived & GlusterFS

This post will walk you through the process of making NGiNX HA by utilizing Keepalived and GlusterFS.

I will be using a standard CentOS 7 installation but the main concepts may apply to other Linux distributions.

If you are starting from scratch, go ahead and install CentOS 7 and NGiNX.
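If NGiNX is not already in place, a minimal install on CentOS 7 could look like the sketch below (NGiNX is available from the EPEL repository, which we will install again in the Keepalived step anyway):
yum -y install epel-release

yum -y install nginx

systemctl enable nginx.service

systemctl start nginx.service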

To make NGiNX HA we need two servers running NGiNX and a virtual IP address, or VIP, that will float between the two servers. Please make sure that your DNS is configured properly. You can see my IP addresses and DNS records below.

lb01.safdal.se 192.168.1.69

lb02.safdal.se 192.168.1.70

vip.safdal.se 192.168.1.230

Keepalived

Start by installing epel and Keepalived.
yum -y install epel-release

yum -y install keepalived

Add firewall rules to allow multicast, VRRP and AH traffic. Specify the interface based on your setup. In my case the interface is called ens160.

firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface ens160 --destination 224.0.0.18 --protocol vrrp -j ACCEPT

firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 --out-interface ens160 --destination 224.0.0.18 --protocol vrrp -j ACCEPT

firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface ens160 --destination 224.0.0.18 --protocol ah -j ACCEPT

firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 --out-interface ens160 --destination 224.0.0.18 --protocol ah -j ACCEPT

firewall-cmd --reload
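To double check that the direct rules survived the reload you can list them:
firewall-cmd --direct --get-all-rules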

Edit sysctl to allow binding to a non-local IP address, i.e. the virtual IP address that is not tied to a physical interface.
nano /etc/sysctl.conf

Add this line to the configuration:
net.ipv4.ip_nonlocal_bind=1

Save and reload the configuration:
sysctl -p
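You can verify that the new setting is active:
sysctl net.ipv4.ip_nonlocal_bind

It should report net.ipv4.ip_nonlocal_bind = 1.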

Create a new file and paste in the configuration below. Edit interface and virtual_ipaddress to match your setup.
nano /etc/keepalived/keepalived.conf

vrrp_script chk_nginx {  
        script "killall -0 nginx"    
        interval 2                     
        fall 2
        rise 2            
}
vrrp_instance VI_1 {  
        interface ens160
        state MASTER # Can be MASTER on both servers. Priority will decide.
        virtual_router_id 1
        priority 101 # Higher priority on the master server and lower on the backup server.
        virtual_ipaddress {
            <FLOATING IP ADDRESS in my case 192.168.1.230>
        }
        authentication {
            auth_type AH
            auth_pass <SECRET PASSWORD>
        }
        track_script {
            chk_nginx
        }
}
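The keepalived.conf on lb02 is identical apart from the priority, which should be lower so that lb01 claims the VIP when both servers are healthy. For example, on the backup server:

        state MASTER # Same state on both servers, priority decides.
        priority 100 # Lower than the 101 used on lb01.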

Enable and start the Keepalived service.
systemctl enable keepalived.service

systemctl start keepalived.service

Verify that the service has started.

systemctl status keepalived.service

If you encounter problems such as this:

Keepalived_vrrp[10926]: VRRP_Instance(VI_1) Now in FAULT state

Then you probably have an issue with SELinux. It is easily fixed.

Stop the service.

systemctl stop keepalived.service

Install required tools.

yum install setroubleshoot setools -y

Use sealert to analyze the audit log file.

sealert -a /var/log/audit/audit.log

Now we see that SELinux is not very fond of the 'killall' command we are using in the Keepalived check script. Let's create a new policy module and load it.

ausearch -c 'killall' --raw | audit2allow -M my-killall

semodule -i my-killall.pp
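You can confirm that the new policy module is loaded:
semodule -l | grep my-killall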

Now we can start the Keepalived service without SELinux giving it a hard time.

systemctl start keepalived.service

Verify that the virtual IP address is active on lb01.
ip addr sh ens160 | grep 'inet '

Shut down lb01 or disconnect the network interface and verify that the virtual IP has failed over to lb02. I have included the Keepalived logs in the screenshot so you can see the host change to MASTER state.
ip addr sh ens160 | grep 'inet '
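If you want to follow the failover as it happens, tail the Keepalived logs on lb02 while you take lb01 down:
journalctl -u keepalived.service -f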

In most use cases we are done and our servers are now highly available. But why stop here? Let's make the actual NGiNX configuration HA as well, so that the same configuration is used on both servers. We will do that with GlusterFS.

GlusterFS

Install GlusterFS on both nodes.
yum -y install centos-release-gluster

yum -y install glusterfs-server

Enable and start the service.
systemctl enable glusterd.service

systemctl start glusterd.service

Add firewall rules.
firewall-cmd --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp --add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=24007-24009/tcp --add-port=49152-49251/tcp --permanent

Reload the firewall config.
firewall-cmd --reload
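You can list the opened ports to confirm that the rules are in place:
firewall-cmd --zone=public --list-ports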

From lb01 establish and verify access to the other node.
gluster peer probe lb02.safdal.se

gluster peer status

Create a new Gluster volume. I will create a volume named nginx; it will be replicated between the 2 servers and the transport will be TCP. Specify the two hosts together with the directory that will hold the data, in my case :/data. Append force to create the directory if it does not already exist.
gluster volume create nginx replica 2 transport tcp lb01.safdal.se:/data lb02.safdal.se:/data force

Start the new volume.
gluster volume start nginx

Verify that the volume is started.
gluster volume info

Restrict access to the volume. As we will mount the volume from the two servers, add their IP addresses to auth.allow.
gluster volume set nginx auth.allow 192.168.1.69,192.168.1.70

Verify auth.allow.
gluster volume info

Create a new directory that we will use as the mount point for the nginx volume.
mkdir /mnt/glusterfs

Mount the volume and verify that data is replicated between the two systems.
mount.glusterfs lb01.safdal.se:/nginx /mnt/glusterfs

touch /mnt/glusterfs/test-file
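On lb02, mount the volume the same way (assuming you created the /mnt/glusterfs mount point there as well) and check that the test file has been replicated:
mount.glusterfs lb02.safdal.se:/nginx /mnt/glusterfs

ls /mnt/glusterfs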

Unmount the volume, or reboot, and modify /etc/fstab to automatically mount the Gluster volume at system start.
nano /etc/fstab

lb01.safdal.se:/nginx /mnt/glusterfs glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0

Specify either of the two hosts. As they are clustered, the mount will work with either lb01 or lb02 when the native client is used.
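If you want to test the fstab entry without rebooting, unmount the volume and let mount pick up the new entry:
umount /mnt/glusterfs

mount -a

df -h /mnt/glusterfs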

Reboot and verify that the volume is mounted.
df -h

To make the NGiNX configuration HA and identical across both hosts, we will put the actual config on the Gluster volume and use symlinks from the NGiNX directory to the Gluster volume.

Stop the Keepalived and Nginx services.
systemctl stop keepalived

systemctl stop nginx

On lb01 copy your configuration to the Gluster volume.
cp /etc/nginx/nginx.conf /mnt/glusterfs/nginx.conf

cp /etc/nginx/conf.d/proxy.conf /mnt/glusterfs/proxy.conf

On both hosts, back up your original config files.
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf_old

mv /etc/nginx/conf.d/proxy.conf /etc/nginx/conf.d/proxy.conf_old

On both hosts create symlinks between the Gluster volume and your NGiNX directory. In my case it is these two files.
ln -s /mnt/glusterfs/nginx.conf /etc/nginx/nginx.conf

ln -s /mnt/glusterfs/proxy.conf /etc/nginx/conf.d/proxy.conf
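Before starting the services again it is a good idea to confirm that the symlinks point to the Gluster volume and that NGiNX accepts the configuration:
ls -l /etc/nginx/nginx.conf /etc/nginx/conf.d/proxy.conf

nginx -t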

Start the Keepalived and Nginx services. Nginx will now use the configuration from the Gluster volume.
systemctl start keepalived

systemctl start nginx

There you have it. In part 2 I will configure Gluster to use TLS/SSL to encrypt the transport between the Gluster nodes.