Wednesday, January 14, 2015

Simple Clustering on CentOS 7/RHEL 7 for an haproxy Load Balancer

Here are the steps to get a simple cluster going. We're not going to share storage, so quorum isn't going to work. We'll disable stonith and quorum. 

pre-req: add hosts file entries for all nodes on all nodes, or at least make sure DNS is working correctly. Otherwise you might receive errors like:

Error: unable to get crm_config, is pacemaker running?
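A sketch of what the /etc/hosts entries might look like on every node (the addresses here are made-up examples; the node names match the ones used later in this post):

```
172.29.23.81   mynode1
172.29.23.82   mynode2
```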

yum install pcs fence-agents-all -y
firewall-cmd --permanent --add-service=high-availability
# --permanent rules don't take effect until a reload
firewall-cmd --reload

# set the password for the hacluster user - it should probably be the same on all nodes
passwd hacluster

# disable haproxy, as the cluster will start it

systemctl disable haproxy


# enable the services
systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
systemctl start pcsd.service


### we're not going to have a stonith nor a quorum
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore


# check to see if it's alive
systemctl is-active pcsd.service
pcs cluster auth `hostname`
pcs cluster setup --start --name myclustername `hostname`
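If the other nodes are already reachable, an alternative is to authorize and create the cluster with the whole set at once from one node (mynode1/mynode2 are the example node names used further down; -u passes the username so you're only prompted for the password):

```shell
# alternative: authorize both nodes and build the cluster in one go
pcs cluster auth mynode1 mynode2 -u hacluster
pcs cluster setup --start --name myclustername mynode1 mynode2
```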
pcs cluster status

# add an IP as a resource
pcs resource create vip1 IPaddr2 ip=172.29.23.80 cidr_netmask=22 --group haproxy
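To confirm the address actually came up on the active node (the address below is the example from the command above):

```shell
# the VIP should show up as an extra address on one of the interfaces
ip -4 addr show | grep 172.29.23.80
# and the resource should be listed as Started
pcs status resources
```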


To add an additional node called "mynode2",
authorize it from the first node:

 pcs cluster auth mynode2

(authenticate using "hacluster" user)

add it:

pcs cluster node add mynode2


You'll need to start the other node:

pcs cluster start mynode2

To see the status of the nodes:


pcs status nodes

example:
pcs status nodes
Pacemaker Nodes:
 Online: mynode1
 Standby:
 Offline: mynode2


Now we add haproxy. Since the haproxy service wouldn't be too useful without the IP address, we'll set up a colocation rule as well.

pcs resource create HAproxy systemd:haproxy op monitor interval=10s --group haproxy
pcs constraint colocation add HAproxy vip1
pcs constraint order vip1 then HAproxy
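The cluster only starts and stops the haproxy service; haproxy itself still needs a working config. A minimal /etc/haproxy/haproxy.cfg sketch that binds the frontend to the example VIP from above (the backend name and server addresses are made up):

```
frontend web
    mode http
    bind 172.29.23.80:80
    default_backend app

backend app
    mode http
    balance roundrobin
    server app1 10.0.0.11:80 check
    server app2 10.0.0.12:80 check
```

Keep this file identical on both nodes, since either one may end up running haproxy.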


# optional - we want the cluster to "favor" mynode1.
# if mynode1 is restarted, for example, mynode2 will get the resources
# until mynode1 is back up and running
pcs constraint location vip1 prefers mynode1









3 comments:

tunca said...

Thanks for the notes,
I'm trying to set up 2 haproxy boxes in an active-passive configuration with two nodes,
lb1 and lb2.

Should we run those commands on both the lb1 and lb2 boxes?
Thanks

Rivald said...

It's been a while. I think you only run the set on the first host at first, but I'll have to test. I'd run it on the first host to start with.

Jaqueline Loriault said...

the only commands you run on BOTH are:

systemctl start pcsd.service
systemctl enable pcsd.service

you also need to set the hacluster password on both

otherwise all the pcs commands are only run from one node and it will sync them (that's why pcsd runs on both)