Wednesday, January 14, 2015

Simple Clustering on CentOS 7/RHEL 7 for an haproxy Load Balancer

Here are the steps to get a simple cluster going. We're not going to share storage, so quorum isn't going to work for us. We'll disable stonith and quorum.

Pre-req: add hosts file entries for all nodes on all nodes, or at least make sure DNS resolves every node name correctly. Otherwise you might receive errors like:

Error: unable to get crm_config, is pacemaker running?
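For example, a minimal /etc/hosts on each node might look like this (the addresses below are just placeholders for this walkthrough - use your own):

172.29.23.81   mynode1
172.29.23.82   mynode2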

yum install pcs fence-agents-all -y
firewall-cmd --permanent --add-service=high-availability
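Note that --permanent only saves the rule; reload the firewall so it also applies to the running configuration:

firewall-cmd --reload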

# set the password for the hacluster user - it should probably be the same on all nodes
passwd hacluster
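If you're scripting this, passwd on CentOS/RHEL supports --stdin, so you could set it non-interactively (the password here is just a placeholder):

echo 'SomethingSecret' | passwd --stdin hacluster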

# disable haproxy, as the cluster will start it

systemctl disable haproxy


# enable the services
systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
systemctl start pcsd.service


### we're not going to use stonith or quorum - if pcs complains that pacemaker isn't running, run these after the cluster setup below
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
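Once the cluster is up, you can confirm that both settings took effect:

pcs property list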


# check to see if it's alive
systemctl is-active pcsd.service
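
# authenticate the local node and create/start the cluster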
pcs cluster auth `hostname`
pcs cluster setup --start --name myclustername `hostname`
pcs cluster status

# add an IP as a resource
pcs resource create vip1 IPaddr2 ip=172.29.23.80 cidr_netmask=22 --group haproxy
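To double-check that the address actually came up, look at the resource and the interface (the interface name will vary per system):

pcs status resources
ip addr show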


To add an additional node called "mynode2", authorize it from the first node:

 pcs cluster auth mynode2

(authenticate using the "hacluster" user)

add it:

pcs cluster node add mynode2


You'll need to start the other node:

pcs cluster start mynode2
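If you also want the new node to rejoin the cluster at boot (the systemctl enable steps above were only run on the first node), pcs can enable that remotely:

pcs cluster enable mynode2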

To see the status of the nodes:


pcs status nodes

example:
pcs status nodes
Pacemaker Nodes:
 Online: mynode1
 Standby:
 Offline: mynode2


Now we add haproxy. Since the haproxy service wouldn't be too useful without the IP address, we'll set up colocation and ordering rules as well (the group already keeps its members together and starts them in order, but the explicit constraints make the intent clear).

pcs resource create HAproxy systemd:haproxy op monitor interval=10s --group haproxy
pcs constraint colocation add HAproxy with vip1
pcs constraint order vip1 then HAproxy
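You can review the constraints you've created:

pcs constraint show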


# optional - we want the cluster to "favor" mynode1.
# if mynode1 is restarted, for example, mynode2 will get the resources, 
# until mynode1 is back and running
pcs constraint location vip1 prefers mynode1
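A quick way to test failover once both nodes are online is to put the preferred node into standby, watch the resources move, and then bring it back:

pcs cluster standby mynode1
pcs status
pcs cluster unstandby mynode1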


As noted above, haproxy should stay disabled in systemd, since the cluster is what starts it:

systemctl disable haproxy


4 comments:

tunca said...

Thanks for the nodes,
I'm trying to set up 2 haproxy instances in an active-passive configuration with two nodes,
lb1 and lb2

Should we run those commands on both lb1 and lb2 boxes?
Thanks

Rivald said...

It's been a while. I think you only run the setup on the first host at first, but I'll have to test. I'd run it on the first host to start with.

Unknown said...

The only commands you run on BOTH nodes are:

systemctl start pcsd.service
systemctl enable pcsd.service

You also need to set the hacluster password on both.

Other than that, all the pcs commands are run from just one node, and pcs will sync them to the others (that's why pcsd runs on both).

Anonymous said...

I have haproxy on CentOS 7; it is a virtual machine in a vSphere 6.5 environment, and I am searching for methods to create a 2nd VM and have it also act as haproxy. So I found this article. It seems useful, but here are some questions I have.
1) I understand that yum install pcs will also install corosync and pacemaker as dependencies, but what is the need for fence-agents-all? In my case, since I only have VMs, can I skip these fence... packages and not install them?
2) On your command :
pcs resource create vip1 IPaddr2 ip=172.29.23.80 cidr_netmask=22 --group haproxy
what is 172.29.23.80? A non-active (not bound to an existing network interface) IP that will be the "floating" IP of both nodes?
And why do you type IPaddr2? Is this name special, and should it be used as is? Or can we type IPaddr1? I looked at the man pages of pcs and couldn't understand. They give an example:
# pcs resource create VirtualIP IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
and I notice they use nic=eth2. I suppose in your example both nodes have only 1 network card. That is what I have in my 2 VMs anyway. So why did you use IPaddr2 in your example and not IPaddr1 or even IPaddr3 or whatever?
3) I suppose at the end, when you have the command:
pcs constraint location vip11 prefers mynode1
it is a typo and should be:
pcs constraint location vip1 prefers mynode1