Enswitch storage on NFS with DRBD and Heartbeat
Disclaimer
The following comes with no warranty whatsoever. I am not responsible for any data loss or other issues that may arise from following these instructions. Please make backups of all files and test this thoroughly in your lab environment before using it in production.
Overview
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.
The procedure has been tested on Enswitch 3.11, but should work on most other versions. The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 64-bit and 12.04 64-bit.
The servers are as follows:
enswitchnfs0 - current active NFS server
enswitchnfs1 - current backup NFS server
enswitchstorage0 - new NFS server 0
enswitchstorage1 - new NFS server 1
The Enswitch subnet is 10.0.0.0/24
Server configuration
Install enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64-bit. Make a partition for the OS and leave the rest of the disk empty for the DRBD volume. Do not create a swap partition; a swap file will be added later.
Update all OS packages on enswitchstorage0 and enswitchstorage1:
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get autoremove
sudo init 6
Create the swap file:
sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048
sudo chmod 0600 /swapfile0
sudo mkswap /swapfile0
Add the following line to /etc/fstab:
/swapfile0 none swap sw 0 0
Enable swap file:
sudo swapon -a
Install additional software on enswitchstorage0 and enswitchstorage1:
sudo apt-get install ntp
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):
sudo apt-get install htop iotop bwm-ng tshark
Add firewall rules on enswitchstorage0 and enswitchstorage1. The following can be used as the base for a firewall script. Note that iptables rules do not persist across reboots, so arrange for the script to run at boot, for example via the iptables-persistent package:
iptables -F
iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment "Allow packets from related and established connections" -j ACCEPT
iptables -A INPUT -i lo -m comment --comment "Allow all on lo interface" -j ACCEPT
iptables -A INPUT -s 10.0.0.0/24 -m comment --comment "Allow everything from Enswitch subnet" -j ACCEPT
iptables -A INPUT -m comment --comment "Log all unmatched packets" -j LOG
iptables -A INPUT -m comment --comment "Drop all unmatched packets" -j DROP
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:
10.0.0.122 enswitchstorage0
10.0.0.123 enswitchstorage1
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sda2:
sudo fdisk /dev/sda
Install DRBD utilities:
sudo apt-get install drbd8-utils
Create /etc/drbd.conf on both servers with the following contents, changing "secret" to a shared secret of your own:
global {
    usage-count no;
}
common {
    syncer { rate 100M; }
}
resource drbd0 {
    protocol C;
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "secret";
    }
    on enswitchstorage0 {
        device /dev/drbd0;
        disk /dev/sda2;
        address 10.0.0.122:7788;
        meta-disk internal;
    }
    on enswitchstorage1 {
        device /dev/drbd0;
        disk /dev/sda2;
        address 10.0.0.123:7788;
        meta-disk internal;
    }
}
Create volume on both servers:
sudo drbdadm create-md drbd0
sudo service drbd start
Initialize the volume by running the following on the primary server, in this case enswitchstorage0:
sudo drbdadm -- --overwrite-data-of-peer primary all
You can watch the status of the initial sync in /proc/drbd:
watch -n1 cat /proc/drbd
Once the sync is complete, create a filesystem on /dev/drbd0:
sudo mkfs.ext4 /dev/drbd0
Install heartbeat:
sudo apt-get install heartbeat
Configure heartbeat:
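The heartbeat configuration itself is not shown above, so the following is a minimal heartbeat v1 sketch based on the common DRBD/NFS pattern. The floating service address 10.0.0.121, the interface name eth0, and the mount point /srv/enswitch are assumptions, not taken from the original; adjust them to your environment. Create the following three files identically on both servers:

```
# /etc/ha.d/ha.cf
logfacility local0
keepalive 2
deadtime 10
bcast eth0               # heartbeat interface; eth0 is an assumption
auto_failback off
node enswitchstorage0
node enswitchstorage1

# /etc/ha.d/authkeys (must be chmod 0600)
auth 1
1 sha1 ChangeThisSecret

# /etc/ha.d/haresources
# On the active node: take the floating IP, promote DRBD resource drbd0,
# mount the volume, then start the NFS server.
enswitchstorage0 IPaddr::10.0.0.121/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/srv/enswitch::ext4 nfs-kernel-server
```

Create the mount point on both servers (sudo mkdir -p /srv/enswitch), set permissions with sudo chmod 0600 /etc/ha.d/authkeys, then start heartbeat with sudo service heartbeat start. The drbddisk resource script is shipped with drbd8-utils.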
Install NFS server:
sudo apt-get install nfs-kernel-server
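The NFS export itself still needs to be defined. A hedged example, assuming the DRBD filesystem is mounted at /srv/enswitch (the path is an assumption, not from the original) and that clients live on the Enswitch subnet; add to /etc/exports on both servers:

```
# /etc/exports -- path and options are illustrative
/srv/enswitch 10.0.0.0/24(rw,sync,no_subtree_check)
```

Since only the active node should serve NFS, consider removing nfs-kernel-server from the boot sequence (sudo update-rc.d -f nfs-kernel-server remove) and letting heartbeat start it instead. After editing /etc/exports on the active server, apply the change with sudo exportfs -ra.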
References:
https://help.ubuntu.com/community/NFSv4Howto
https://help.ubuntu.com/lts/serverguide/drbd.html
https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat
https://help.ubuntu.com/community/HighlyAvailableNFS