Create High-Available NFS Server with Pacemaker

Notes

First read Installing Pacemaker and Corosync and Create High-Available DRBD Device with Pacemaker.

Installation

Remove nfs-kernel-server and nfs-common from init.d on lb1 and lb2
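On a Debian-based system this would be done with update-rc.d; a minimal sketch, assuming Debian/Ubuntu init scripts:

```sh
# Disable automatic startup on both nodes; Pacemaker will start
# and stop these services itself from now on
update-rc.d -f nfs-kernel-server remove
update-rc.d -f nfs-common remove
```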

Move the pipefs working directory on lb1 and lb2; this directory must not live on the shared DRBD device.
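The pipefs mount normally sits under /var/lib/nfs, which we are about to move onto DRBD, so it needs a local home. The target path /var/lib/rpc_pipefs is an assumption; any local directory works as long as the config files below agree with it:

```sh
# Create a node-local directory for the rpc_pipefs mount,
# outside of /var/lib/nfs (which will move to the DRBD device)
mkdir /var/lib/rpc_pipefs
```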

Mount drbd device and move nfs directory on it on lb1
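A sketch of this step, assuming the DRBD device is /dev/drbd0 and its mount point is /srv/data (both names depend on your setup from the DRBD article):

```sh
# On lb1 (the current DRBD primary): mount the replicated device
# and move the NFS state directory onto it
mount /dev/drbd0 /srv/data
mv /var/lib/nfs /srv/data/
```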

Remove nfs directory on lb2
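On the secondary node the local copy is no longer needed; lb2 will use the directory on the DRBD device after a failover:

```sh
# On lb2 only: drop the local NFS state directory
rm -rf /var/lib/nfs
```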

Create symbolic link for the nfs directory on lb1 and lb2
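Assuming the same /srv/data mount point as above, the link looks like this on both nodes (on the node that does not hold the DRBD mount, the link simply dangles until failover):

```sh
# On lb1 and lb2: point /var/lib/nfs at the copy on the DRBD device
ln -s /srv/data/nfs /var/lib/nfs
```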

Make sure that the permissions are right on lb1 and lb2
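On Debian the statd state files under /var/lib/nfs are owned by the statd user; a sketch, assuming the /srv/data path from above and the stock Debian layout:

```sh
# statd must be able to write its state files on the shared device
chown statd:nogroup /srv/data/nfs/sm /srv/data/nfs/sm.bak /srv/data/nfs/state
```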

Edit the nfs-common init script to configure our new rpc_pipefs directory on lb1 and lb2
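The Debian nfs-common init script hard-codes the pipefs mount point in a shell variable; the change amounts to pointing it at the local directory created earlier (path is our assumption from above):

```sh
# /etc/init.d/nfs-common (excerpt) -- mount pipefs on the local
# directory instead of the default under /var/lib/nfs
PIPEFS_MOUNTPOINT=/var/lib/rpc_pipefs
```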

Make the same change in the ID mapping daemon config on lb1 and lb2
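idmapd reads its pipefs location from /etc/idmapd.conf; it has to match the directory used above:

```ini
# /etc/idmapd.conf (excerpt)
[General]
Pipefs-Directory = /var/lib/rpc_pipefs
```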

Edit nfs-common to bind the RPC ports to fixed values on lb1 and lb2 (192.168.33.190 is the virtual IP address that we will set up later)
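A sketch of the statd options in /etc/default/nfs-common; the port numbers are examples (pick any free ports, they just have to be identical on both nodes), and --name makes statd announce itself under the virtual IP:

```sh
# /etc/default/nfs-common (excerpt) -- fixed statd ports, example values
STATDOPTS="--port 32765 --outgoing-port 32766 --name 192.168.33.190"
```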

Do the same in nfs-kernel-server on lb1 and lb2
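Here mountd gets its fixed port; again the number is just an example, it only has to match on both nodes:

```sh
# /etc/default/nfs-kernel-server (excerpt) -- fixed mountd port, example value
RPCMOUNTDOPTS="--port 32767"
```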

We also need to configure fixed ports for the RPC lock daemon kernel module on lb1 and lb2; this change requires a reboot.
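lockd is a kernel module, so its ports are set via modprobe options; the file name and port numbers below are assumptions (any file under /etc/modprobe.d and any free ports will do):

```sh
# /etc/modprobe.d/nfs-lockd.conf -- fixed lockd ports, example values
# (takes effect only after the module is reloaded, i.e. after a reboot)
options lockd nlm_udpport=32768 nlm_tcpport=32768
```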

Configuration

Create some exports on lb1
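A minimal example export; the directory (create it on the mounted DRBD device first) and the client network are assumptions, adjust them to your setup:

```sh
# /etc/exports (example)
/srv/data/export 192.168.33.0/24(rw,sync,no_subtree_check)
```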

Copy export file from lb1 to lb2
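Both nodes need an identical exports file so failover is transparent; for example with scp:

```sh
# Run on lb1: push the exports file to the second node
scp /etc/exports lb2:/etc/exports
```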

Integration in Pacemaker

Use the Cluster Resource Manager tool again to add the nfs resources on lb1 or lb2

Go into the configuration section
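As in the earlier articles, this means starting the crm shell and switching to its configure level:

```sh
crm          # start the Cluster Resource Manager shell on lb1 or lb2
configure    # enter the configuration section
```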

Add nfs-kernel-server and nfs-common resources
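The two services are wrapped as LSB resources around their init scripts. The resource names resNFScommon and resNFSserver come from this article; the monitor interval is an assumed example:

```sh
# Inside crm configure: one LSB resource per init script
primitive resNFScommon lsb:nfs-common op monitor interval="30s"
primitive resNFSserver lsb:nfs-kernel-server op monitor interval="30s"
```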

Update our services group. You need to type edit in the configuration section and add resNFScommon and resNFSserver manually; I did not find a way to alter the group from the crm command line.
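A sketch of the edit step; the group name and its existing members below are placeholders standing in for whatever you created in the DRBD article, only resNFScommon and resNFSserver are from this text. Note the order: NFS has to start after the filesystem and the virtual IP:

```sh
edit
# In the editor, append the two NFS resources to the existing group, e.g.:
#   group rgServices resFilesystem resVirtualIP resNFScommon resNFSserver
```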

Commit changes

Exit configuration section

Check status on lb1 or lb2
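The last three steps back in the crm shell, plus a one-shot status check; all resources should end up started on the same node:

```sh
commit       # activate the new configuration
exit         # leave the crm shell
crm_mon -1   # one-shot cluster status on lb1 or lb2
```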
