Assuming you're using the default zone, "public" (you may need to temporarily disable SELinux with setenforce 0):
1. To allow everyone to access port 8080/tcp:
firewall-cmd --zone=public --add-port=8080/tcp --permanent
2. To allow a server at the IPv4 address 10.20.30.40 to access this server on port 1234 over UDP:
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="10.20.30.40/32" port port="1234" protocol="udp" accept' --permanent
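Rules added with --permanent don't take effect on the running firewall until it's reloaded, so reload and then verify:
firewall-cmd --reload
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-rich-rules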
Tuesday, January 20, 2015
Red Hat/CentOS error: connection activation failed: connection 'x' is not available on the device y
You'll see this error from nmcli if you try to bring up a connection that references a NIC that is disconnected or unplugged - either unplugged from a bare-metal server or disconnected in VMware. Here are some examples of what it might look like:
connection activation failed: connection 'ethernet' is not available on the device ens32
connection activation failed: connection 'eth0' is not available on the device ens192
etc.
The fix is really as simple as connecting the cable or re-enabling the NIC in VMware and running the nmcli command again.
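Before retrying, you can check what NetworkManager sees for the device - a NIC with no link will typically show as unavailable:
nmcli device status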
Wednesday, January 14, 2015
Simple Clustering on CentOS 7/RHEL 7 for an haproxy Load Balancer
Here are the steps to get a simple cluster going. We're not going to share storage, so quorum isn't going to work for us. We'll disable STONITH and quorum.
Pre-req: add hosts file entries for all nodes on all nodes, or at least make sure DNS is working correctly. Otherwise you might receive errors like:
Error: unable to get crm_config, is pacemaker running?
yum install pcs fence-agents-all -y
firewall-cmd --permanent --add-service=high-availability
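# reload so the permanent rule takes effect on the running firewall
firewall-cmd --reload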
# set the password for the hacluster user - it should probably be the same on all nodes
passwd hacluster
# disable haproxy, as the cluster will start it
systemctl disable haproxy
# enable the services
systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
systemctl start pcsd.service
# check to see if it's alive
systemctl is-active pcsd.service
pcs cluster auth `hostname`
pcs cluster setup --start --name myclustername `hostname`
### we're not going to have stonith or quorum - note that these properties
### can only be set after the cluster is up, or you'll hit the crm_config error above
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
pcs cluster status
# add an IP as a resource
pcs resource create vip1 IPaddr2 ip=172.29.23.80 cidr_netmask=22 --group haproxy
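You can confirm the VIP started with:
pcs status resources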
To add an additional node called "mynode2", authorize it on the master:
pcs cluster auth mynode2
(authenticate using the "hacluster" user)
Then add it:
pcs cluster node add mynode2
You'll need to start the other node:
pcs cluster start mynode2
To see the status of the nodes:
pcs status nodes
example:
pcs status nodes
Pacemaker Nodes:
Online: mynode1
Standby:
Offline: mynode2
Now we add haproxy. Since the haproxy service wouldn't be too useful without the IP address, we'll set up a colocation rule as well.
pcs resource create HAproxy systemd:haproxy op monitor interval=10s --group haproxy
pcs constraint colocation add HAproxy vip1
pcs constraint order vip1 then HAproxy
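You can list the constraints to confirm both rules took effect:
pcs constraint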
# optional - we want the cluster to "favor" mynode1.
# if mynode1 is restarted, for example, mynode2 will get the resources,
# until mynode1 is back and running
pcs constraint location vip1 prefers mynode1
We'll want to turn off haproxy in systemd as the cluster will start it:
systemctl disable haproxy
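As a final sanity check, the full cluster status shows the nodes, resources, and daemons in one place:
pcs status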
CentOS 7 error: Connection 'wired connection 1' is not available on the device ens32 at this time.
This error is presented when attempting to bring up a network connection using nmcli. The connection name can be anything, of course.
There is a Red Hat bug on this titled "RHEL 7 syslog shows failure related to network.service" - bug #1079353.
There is no fix listed in the bug. I've seen this on a virtual machine, and the easiest workaround was to delete the virtual machine's NIC, reboot, add a new NIC, and reboot again. After doing that, I deleted the newly created network connection using nmcli and assigned the new device to the original connection profile I was attempting to use.
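Assuming the auto-created profile was named "Wired connection 1", the original profile was named "ethernet", and the replacement NIC came up as ens33 (all three names here are just examples), those last steps would look something like:
nmcli con delete "Wired connection 1"
nmcli con modify ethernet connection.interface-name ens33
nmcli con up ethernet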
Tuesday, January 13, 2015
RHEL/CentOS nmcli Tips
You may need to temporarily disable SELinux (setenforce 0) to make changes to the underlying config files.
(Note that many nmcli commands will fail if the underlying device is not active, e.g., disconnected in VMware.)
1. List connections:
nmcli c show
2. Rename a connection called "outside" to eth0:
nmcli c modify outside connection.id eth0
3. Change a NIC (with a connection name of "ethernet" and a device name of ens32) to static and assign an address, gateway, DNS, etc. (172.19.22.1 is the default gateway; separate additional addresses with commas, leaving a space before the default gateway.)
nmcli c modify ethernet connection.interface-name ens32 ipv4.method static ipv4.addresses "172.19.22.3/24 172.19.22.1" ipv4.dns 172.19.22.10,172.19.22.11 ipv4.dns-search mydomain.local
4. Bring up your new connection:
nmcli con up ethernet
5. Delete a connection called "wired":
nmcli con delete wired
6. Create a new connection (called "eth0") using an ethernet device called "ens32":
nmcli con add type ethernet con-name eth0 ifname ens32
7. Change the hostname:
nmcli general hostname new_hostname
and restart hostnamed to pick up the change (your shell prompt won't change until you exec a new shell or reboot):
systemctl restart systemd-hostnamed
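To verify the results of the steps above (using the connection and device names from the examples):
ip addr show ens32
nmcli con show ethernet
hostnamectl status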