<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>http://wiki.integrics.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Danthony</id>
	<title>Integrics Wiki - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.integrics.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Danthony"/>
	<link rel="alternate" type="text/html" href="http://wiki.integrics.com/wiki/Special:Contributions/Danthony"/>
	<updated>2026-05-06T18:44:15Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.7</generator>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Changing_Enswitch_UID_and_GID&amp;diff=205</id>
		<title>Changing Enswitch UID and GID</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Changing_Enswitch_UID_and_GID&amp;diff=205"/>
		<updated>2018-11-12T19:47:12Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
It may be necessary to change the UID and GID of the enswitch user/group on an existing system.  In our case the enswitch user was UID 100 and the enswitch group was GID 101, which forced us to renumber the existing UID 100 user and GID 101 group on every new server install.&lt;br /&gt;
&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
This is provided as-is with no warranty and may not work correctly on every Enswitch system.  Make sure you have proper backups and test the procedure in a non-production environment.  Neither I nor Integrics is responsible for any problems arising from the use of these instructions.&lt;br /&gt;
&lt;br /&gt;
== Procedure ==&lt;br /&gt;
&lt;br /&gt;
=== Variables ===&lt;br /&gt;
&lt;br /&gt;
In this example, the enswitch UID is 100 and will be changed to 900, and the enswitch GID is 101 and will be changed to 901.&lt;br /&gt;
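The hard-coded IDs can be captured as shell variables so the later commands are easier to adapt to other systems (the variable names here are my own, not part of the original procedure):&lt;br /&gt;

```shell
# Hypothetical variable names; the original instructions hard-code the IDs.
OLD_UID=100; NEW_UID=900
OLD_GID=101; NEW_GID=901
echo "enswitch uid ${OLD_UID} -> ${NEW_UID}, gid ${OLD_GID} -> ${NEW_GID}"
```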
&lt;br /&gt;
=== Identify files owned by enswitch user and group ===&lt;br /&gt;
&lt;br /&gt;
On each Enswitch server, find files owned by uid 100 or gid 101 and save them to a text file.&lt;br /&gt;
&lt;br /&gt;
For NFS servers:&lt;br /&gt;
&lt;br /&gt;
 time find / -user 100 | egrep -v &amp;quot;\/proc|\/dev\/&amp;quot; &amp;gt; /tmp/enswitch_user_files.txt&lt;br /&gt;
 time find / -group 101 | egrep -v &amp;quot;\/proc|\/dev\/&amp;quot; &amp;gt; /tmp/enswitch_group_files.txt&lt;br /&gt;
&lt;br /&gt;
For all other servers:&lt;br /&gt;
&lt;br /&gt;
 time find / -user 100 | egrep -v &amp;quot;\/var\/lib\/enswitch\/|\/proc|\/dev\/&amp;quot; &amp;gt; /tmp/enswitch_user_files.txt&lt;br /&gt;
 time find / -group 101 | egrep -v &amp;quot;\/var\/lib\/enswitch\/|\/proc|\/dev\/&amp;quot; &amp;gt; /tmp/enswitch_group_files.txt&lt;br /&gt;
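The egrep exclusion can be sanity-checked against a few sample paths before the full filesystem scan (a quick test of my own, not part of the original procedure):&lt;br /&gt;

```shell
# Equivalent to the escaped pattern above; only paths outside /proc and /dev/ survive.
printf '/proc/1/stat\n/home/user/file\n/dev/null\n' | egrep -v "/proc|/dev/"
# -> /home/user/file
```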
&lt;br /&gt;
=== Stop all services using enswitch user/group ===&lt;br /&gt;
&lt;br /&gt;
Web (Active only):&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/apache2 stop&lt;br /&gt;
&lt;br /&gt;
Asterisk:&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/asterisk stop&lt;br /&gt;
 /etc/init.d/enswitch_routed stop&lt;br /&gt;
 /etc/init.d/hylafax stop&lt;br /&gt;
 chown -h enswitch:enswitch /var/spool/asterisk/voicemail&lt;br /&gt;
 sleep 2&lt;br /&gt;
 pgrep -lf asterisk&lt;br /&gt;
 pgrep -lf enswitch_routed&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Kamailio (Active only):&lt;br /&gt;
&lt;br /&gt;
 /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
 /etc/ha.d/resource.d/rtpengine stop&lt;br /&gt;
 /etc/ha.d/resource.d/enswitch_sipd stop&lt;br /&gt;
 /etc/ha.d/resource.d/enswitch_messaged stop&lt;br /&gt;
 sleep 2&lt;br /&gt;
 pgrep -alf enswitch_sipd&lt;br /&gt;
 pgrep -alf enswitch_messaged&lt;br /&gt;
 pgrep -alf kamailio&lt;br /&gt;
 pgrep -alf rtpengine&lt;br /&gt;
&lt;br /&gt;
=== Change enswitch UID and GID ===&lt;br /&gt;
&lt;br /&gt;
 getent group enswitch&lt;br /&gt;
 groupmod -g 901 enswitch&lt;br /&gt;
 getent group enswitch&lt;br /&gt;
&lt;br /&gt;
 getent passwd enswitch&lt;br /&gt;
 usermod -u 900 -g 901 enswitch&lt;br /&gt;
 getent passwd enswitch&lt;br /&gt;
&lt;br /&gt;
=== Change file ownership ===&lt;br /&gt;
&lt;br /&gt;
Change ownership on all files that reside on the local disk on each server. Start this on the NFS boxes first because they will take the longest. The -d '\n' option makes xargs treat each input line as a single argument, so paths containing spaces are handled correctly.&lt;br /&gt;
&lt;br /&gt;
 time cat /tmp/enswitch_user_files.txt | xargs --max-args=1000 -d '\n' chown enswitch&lt;br /&gt;
 time cat /tmp/enswitch_group_files.txt | xargs --max-args=1000 -d '\n' chgrp enswitch&lt;br /&gt;
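Prefixing chown with echo gives a safe dry run that prints the batched invocations without changing anything (my own suggestion, not in the original):&lt;br /&gt;

```shell
# Dry run: echo prints the chown command xargs would execute for each batch.
printf '/tmp/a\n/tmp/b\n' | xargs --max-args=1000 echo chown enswitch
# -> chown enswitch /tmp/a /tmp/b
```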
&lt;br /&gt;
Once this has finished on all servers, move on to the next step.&lt;br /&gt;
&lt;br /&gt;
=== Start services ===&lt;br /&gt;
&lt;br /&gt;
Web (Active only):&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/apache2 start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Asterisk:&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/asterisk start&lt;br /&gt;
 /etc/init.d/enswitch_routed start&lt;br /&gt;
 /etc/init.d/hylafax start&lt;br /&gt;
&lt;br /&gt;
 pgrep -lf asterisk&lt;br /&gt;
 pgrep -lf enswitch_routed&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Kamailio (Active only):&lt;br /&gt;
&lt;br /&gt;
 /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
 /etc/ha.d/resource.d/rtpengine start&lt;br /&gt;
 /etc/ha.d/resource.d/enswitch_sipd start&lt;br /&gt;
 /etc/ha.d/resource.d/enswitch_messaged start&lt;br /&gt;
&lt;br /&gt;
 pgrep -alf enswitch_sipd&lt;br /&gt;
 pgrep -alf enswitch_messaged&lt;br /&gt;
 pgrep -alf kamailio&lt;br /&gt;
 pgrep -alf rtpengine&lt;br /&gt;
&lt;br /&gt;
 enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart Enswitch on all other servers:&lt;br /&gt;
&lt;br /&gt;
 enswitch restart&lt;br /&gt;
&lt;br /&gt;
=== Re-check file ownership ===&lt;br /&gt;
&lt;br /&gt;
On each Enswitch server, find files owned by uid 100 or gid 101 that may have been created after the initial check:&lt;br /&gt;
&lt;br /&gt;
 time find / -user 100 | egrep -v &amp;quot;\/var\/lib\/enswitch\/|\/proc|\/dev\/&amp;quot; &amp;gt; /tmp/enswitch_user_files_2.txt&lt;br /&gt;
 time find / -group 101 | egrep -v &amp;quot;\/var\/lib\/enswitch\/|\/proc|\/dev\/&amp;quot; &amp;gt; /tmp/enswitch_group_files_2.txt&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If any files are found, change ownership on them, then stop and start all services again.&lt;br /&gt;
&lt;br /&gt;
 time cat /tmp/enswitch_user_files_2.txt | xargs --max-args=1000 -d '\n' chown enswitch&lt;br /&gt;
 time cat /tmp/enswitch_group_files_2.txt | xargs --max-args=1000 -d '\n' chgrp enswitch&lt;br /&gt;
&lt;br /&gt;
=== Restart cron ===&lt;br /&gt;
&lt;br /&gt;
Restart cron on all boxes. I had an issue where enswitch_cdrs_archive, enswitch_cdrs_delete, and other Enswitch cron jobs did not run after the change; cron appears to cache the user-to-UID and group-to-GID mapping, and restarting cron fixed the issue.&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=138</id>
		<title>Archiving CDRs to a remote MySQL Master/Master pair</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=138"/>
		<updated>2015-10-30T18:10:45Z</updated>

		<summary type="html">&lt;p&gt;Danthony: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Disclaimer =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Info =&lt;br /&gt;
&lt;br /&gt;
This document is a companion to the official documentation from Integrics at http://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
Some of the commands are borrowed from the above page and from other Integrics documentation.&lt;br /&gt;
&lt;br /&gt;
The Enswitch version used is 3.11, and the servers run Ubuntu 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
The servers used in this example are as follows:&lt;br /&gt;
&lt;br /&gt;
*database0 - Main active database server&lt;br /&gt;
*database1 - Main standby database server&lt;br /&gt;
*cdrdatabase0 - Server where CDRs will be archived&lt;br /&gt;
*cdrdatabase1 - Server where CDRs will be archived&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= OS install =&lt;br /&gt;
&lt;br /&gt;
Load the servers with the same OS as the current Enswitch database servers; in this example, Ubuntu 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Update all OS packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Firewall configuration ===&lt;br /&gt;
&lt;br /&gt;
Install an appropriate firewall. This is outside the scope of this document, but an example may be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Install optional packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Database install and configuration =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Install MySQL on cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install mysql-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure master/master replication between cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
Enable remote connections to MySQL on cdrdatabase0 and cdrdatabase1.  In /etc/mysql/my.cnf, change the bind-address variable to 0.0.0.0.&lt;br /&gt;
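On a stock Ubuntu 12.04 install, the resulting line sits under the [mysqld] section and would look like this:&lt;br /&gt;

```
bind-address = 0.0.0.0
```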
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable binary logging on cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 log_bin = /var/lib/mysql/mysql-bin.log&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to limit replication to only the enswitch database:&lt;br /&gt;
&lt;br /&gt;
 replicate-do-db = enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 server-id = 10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 server-id = 11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable a single file per table:&lt;br /&gt;
&lt;br /&gt;
 innodb_file_per_table&lt;br /&gt;
&lt;br /&gt;
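Putting the fragments above together, the complete set of [mysqld] additions on cdrdatabase0 would be as follows (a consolidated sketch; on cdrdatabase1 the server-id is 11 instead):&lt;br /&gt;

```
[mysqld]
log_bin = /var/lib/mysql/mysql-bin.log
replicate-do-db = enswitch
server-id = 10
innodb_file_per_table
```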
&lt;br /&gt;
Restart MySQL:&lt;br /&gt;
&lt;br /&gt;
 sudo service mysql restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure replication ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.89 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.88 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.88', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Start the slave process on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.89', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Start the slave process on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
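Replication health can then be checked on either server with a standard MySQL status query (not part of the original steps); Slave_IO_Running and Slave_SQL_Running should both read Yes:&lt;br /&gt;

```sql
show slave status\G
```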
&lt;br /&gt;
&lt;br /&gt;
= Install and configure Heartbeat =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Add entries to /etc/hosts for each server on cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
 10.1.0.88   cdrdatabase0&lt;br /&gt;
 10.1.0.89   cdrdatabase1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Install heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 cdrdatabase0 IPaddr::10.1.0.93/26/eth0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.89&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node cdrdatabase0&lt;br /&gt;
 node cdrdatabase1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.88&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node cdrdatabase0&lt;br /&gt;
 node cdrdatabase1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 secret&lt;br /&gt;
 3 md5 dhcp&lt;br /&gt;
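The authkeys example above uses the literal secret "secret"; in practice each cluster should use its own random value. A sketch of generating one (assumes /dev/urandom and od are available); the same generated file must then be installed on both nodes:

```shell
#!/bin/sh
# Sketch: generate a random shared secret for /etc/ha.d/authkeys instead of
# the literal "secret" shown above. 16 random bytes become 32 hex characters.
KEY=$(head -c 16 /dev/urandom | od -A n -t x1 | tr -d ' \n')

# Emit an authkeys file using the same auth 2 / sha1 layout as the example.
printf 'auth 2\n1 crc\n2 sha1 %s\n' "$KEY"
```

Copy the resulting file to /etc/ha.d/authkeys on both cdrdatabase0 and cdrdatabase1 and chmod it 600 as shown below.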
&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add a firewall rule to allow heartbeat.  This goes in /etc/firewall.sh anywhere after the &amp;quot;iptables -A INPUT -i lo -j ACCEPT&amp;quot; line and before &amp;quot;iptables -A INPUT -j LOG&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.89 -m comment --comment &amp;quot;Allow heartbeat from cdrdatabase1&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.88 -m comment --comment &amp;quot;Allow heartbeat from cdrdatabase0&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply new firewall rules:&lt;br /&gt;
&lt;br /&gt;
 sudo sh /etc/firewall.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Start Heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Start heartbeat on cdrdatabase0:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
Wait until you see the 10.1.0.93 IP address on eth0:0 and drbd0 volume mounted at /mnt/drbd0, then start heartbeat on cdrdatabase1:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
You should now be able to test heartbeat by running hb_takeover on each box and having it take over the 10.1.0.93 IP and the drbd0 volume.&lt;br /&gt;
 sudo /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure CDR archiving =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make a backup of the current Enswitch database&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally, also back up the individual tables in the current database for easy retrieval.  NOTE: this should only be run on a test system, not in production, since it will take a very long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE cdrs_backup1 LIKE cdrs;&lt;br /&gt;
 INSERT INTO cdrs_backup1 SELECT * FROM cdrs;&lt;br /&gt;
 CREATE TABLE cdrcosts_backup1 LIKE cdrcosts;&lt;br /&gt;
 INSERT INTO cdrcosts_backup1 SELECT * FROM cdrcosts;&lt;br /&gt;
 CREATE TABLE cdrcost_taxes_backup1 LIKE cdrcost_taxes;&lt;br /&gt;
 INSERT INTO cdrcost_taxes_backup1 SELECT * FROM cdrcost_taxes;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create enswitch database on cdrdatabase0: ===&lt;br /&gt;
&lt;br /&gt;
 CREATE DATABASE enswitch;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create tables: ===&lt;br /&gt;
&lt;br /&gt;
Run 'SHOW CREATE TABLE' on database0 for each of the three cdr* tables, then run the resulting CREATE TABLE statements on cdrdatabase0.&lt;br /&gt;
&lt;br /&gt;
Create the enswitchcdrsrw user on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 GRANT ALL ON enswitch.* to enswitchcdrsrw IDENTIFIED BY 'password';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Run enswitch_cdrs_archive_remote manually: ===&lt;br /&gt;
&lt;br /&gt;
 sudo su - enswitch -c &amp;quot;/opt/enswitch/current/bin/enswitch_cdrs_archive_remote 365 debug&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cron configuration ===&lt;br /&gt;
&lt;br /&gt;
Disable the original enswitch_cdrs_archive if it is currently in use.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure web servers =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Instruct web interface to use the archive CDR database ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following to /etc/enswitch/databases.conf on the web servers:&lt;br /&gt;
&lt;br /&gt;
 delete/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 insert/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 select/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Restart apache ===&lt;br /&gt;
&lt;br /&gt;
 sudo service apache2 restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure roles =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To allow non-System Owner users to search archived CDRs, set &amp;quot;Call history (archived)&amp;quot; to &amp;quot;Yes&amp;quot; under the appropriate roles.  It is best to enable this only for roles where it is absolutely necessary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure backups of archived CDRs =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CDRs archived on a remote server will no longer be backed up by the standard Enswitch backup script.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== backup script ===&lt;br /&gt;
&lt;br /&gt;
This script makes one backup per day of the week, overwriting the copy from a week earlier.&lt;br /&gt;
&lt;br /&gt;
Create /usr/local/sbin/mysql-backup.sh with the following contents:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 &lt;br /&gt;
 DAY=`date +%a`&lt;br /&gt;
 HOSTNAME=`hostname`&lt;br /&gt;
 MYSQL_USERNAME=$1&lt;br /&gt;
 MYSQL_PASSWORD=$2&lt;br /&gt;
 BACKUP_PATH=$3&lt;br /&gt;
 COMPRESS_LEVEL=$4&lt;br /&gt;
 &lt;br /&gt;
 rm -f $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql*&lt;br /&gt;
 &lt;br /&gt;
 mysqldump -u $MYSQL_USERNAME --password=$MYSQL_PASSWORD --all-databases --skip-lock-tables --single-transaction &amp;gt;  $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 &lt;br /&gt;
 if [ $COMPRESS_LEVEL -gt 0 ] &amp;amp;&amp;amp; [ $COMPRESS_LEVEL -lt 10 ]&lt;br /&gt;
 then&lt;br /&gt;
   xz -$COMPRESS_LEVEL $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Change permissions on /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
 sudo chmod +x /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
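The script's rotation relies only on the weekday abbreviation from date +%a. A small sketch of the naming scheme it produces (the /root/mysqlbackups path here is just an example argument, matching the one used elsewhere in this document):

```shell
#!/bin/sh
# Sketch of the rotating-name scheme mysql-backup.sh relies on: one file per
# weekday abbreviation, so each run overwrites the file written seven days ago.
DAY=$(date +%a)                   # e.g. Mon, Tue, ...
HOSTNAME=$(hostname)
BACKUP_PATH=/root/mysqlbackups    # example only; any writable directory works

FILE="$BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql"
echo "$FILE"
```

Because %a cycles weekly, at most seven backups ever exist per host, which keeps disk usage bounded without any pruning logic.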
&lt;br /&gt;
&lt;br /&gt;
Add cron entry:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;1 0     * * *   root    /usr/local/sbin/mysql-backup.sh root PASSWORD /root/mysqlbackups 3&amp;quot; | sudo tee /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the cron file will contain a MySQL password, make it readable only by root&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 700 /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/install/mysql/replication/&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=137</id>
		<title>Archiving CDRs to a remote MySQL Master/Master pair</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=137"/>
		<updated>2015-10-30T18:04:31Z</updated>

		<summary type="html">&lt;p&gt;Danthony: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Disclaimer =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Info =&lt;br /&gt;
&lt;br /&gt;
This document is a companion to the official documentation from Integrics at http://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
Some of the commands are borrowed from the above page and from other Integrics documentation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Enswitch version used is 3.11, and the servers run Ubuntu 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
The servers used in this example are as follows:&lt;br /&gt;
&lt;br /&gt;
*database0 - Main active database server&lt;br /&gt;
*database1 - Main standby database server&lt;br /&gt;
*cdrdatabase0 - Server where CDRs will be archived&lt;br /&gt;
*cdrdatabase1 - Server where CDRs will be archived&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= OS install =&lt;br /&gt;
&lt;br /&gt;
Load the servers with the same OS as the current Enswitch database servers.  In this example I use Ubuntu 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
=== Update all OS packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install an appropriate firewall.  This is outside the scope of this document, but an example may be added later.&lt;br /&gt;
&lt;br /&gt;
=== Install optional packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Database install and configuration =&lt;br /&gt;
&lt;br /&gt;
=== Install MySQL on cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install mysql-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure master/master replication between cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable remote connections to MySQL on cdrdatabase0 and cdrdatabase1.  In /etc/mysql/my.cnf, change the bind-address variable to 0.0.0.0.&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable binary logging on cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 log_bin = /var/lib/mysql/mysql-bin.log&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to limit replication to only the enswitch database:&lt;br /&gt;
&lt;br /&gt;
 replicate-do-db = enswitch&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 server-id = 10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 server-id = 11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable a single file per table:&lt;br /&gt;
&lt;br /&gt;
 innodb_file_per_table&lt;br /&gt;
&lt;br /&gt;
Restart MySQL:&lt;br /&gt;
&lt;br /&gt;
 sudo service mysql restart&lt;br /&gt;
&lt;br /&gt;
Configure replication&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.89 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.88 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.88', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.89', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Install and configure Heartbeat =&lt;br /&gt;
&lt;br /&gt;
=== Add entries to /etc/hosts for each server on cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
 10.1.0.88   cdrdatabase0&lt;br /&gt;
 10.1.0.89   cdrdatabase1&lt;br /&gt;
&lt;br /&gt;
=== Install heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 cdrdatabase0 IPaddr::10.1.0.93/26/eth0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.89&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node cdrdatabase0&lt;br /&gt;
 node cdrdatabase1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.88&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node cdrdatabase0&lt;br /&gt;
 node cdrdatabase1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 secret&lt;br /&gt;
 3 md5 dhcp&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add a firewall rule to allow heartbeat.  This goes in /etc/firewall.sh anywhere after the &amp;quot;iptables -A INPUT -i lo -j ACCEPT&amp;quot; line and before &amp;quot;iptables -A INPUT -j LOG&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.89 -m comment --comment &amp;quot;Allow heartbeat from cdrdatabase1&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.88 -m comment --comment &amp;quot;Allow heartbeat from cdrdatabase0&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply new firewall rules:&lt;br /&gt;
&lt;br /&gt;
 sudo sh /etc/firewall.sh&lt;br /&gt;
&lt;br /&gt;
=== Start Heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Start heartbeat on cdrdatabase0:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
Wait until you see the 10.1.0.93 IP address on eth0:0 and drbd0 volume mounted at /mnt/drbd0, then start heartbeat on cdrdatabase1:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
You should now be able to test heartbeat by running hb_takeover on each box and having it take over the 10.1.0.93 IP and the drbd0 volume.&lt;br /&gt;
 sudo /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure CDR archiving =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make a backup of the current Enswitch database&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally, also back up the individual tables in the current database for easy retrieval.  NOTE: this should only be run on a test system, not in production, since it will take a very long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE cdrs_backup1 LIKE cdrs;&lt;br /&gt;
 INSERT INTO cdrs_backup1 SELECT * FROM cdrs;&lt;br /&gt;
 CREATE TABLE cdrcosts_backup1 LIKE cdrcosts;&lt;br /&gt;
 INSERT INTO cdrcosts_backup1 SELECT * FROM cdrcosts;&lt;br /&gt;
 CREATE TABLE cdrcost_taxes_backup1 LIKE cdrcost_taxes;&lt;br /&gt;
 INSERT INTO cdrcost_taxes_backup1 SELECT * FROM cdrcost_taxes;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create enswitch database on cdrdatabase0: ===&lt;br /&gt;
&lt;br /&gt;
 CREATE DATABASE enswitch;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create tables: ===&lt;br /&gt;
&lt;br /&gt;
Run 'SHOW CREATE TABLE' on database0 for each of the three cdr* tables, then run the resulting CREATE TABLE statements on cdrdatabase0.&lt;br /&gt;
&lt;br /&gt;
Create the enswitchcdrsrw user on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 GRANT ALL ON enswitch.* to enswitchcdrsrw IDENTIFIED BY 'password';&lt;br /&gt;
&lt;br /&gt;
=== Run enswitch_cdrs_archive_remote manually: ===&lt;br /&gt;
&lt;br /&gt;
 sudo su - enswitch -c &amp;quot;/opt/enswitch/current/bin/enswitch_cdrs_archive_remote 365 debug&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cron configuration ===&lt;br /&gt;
&lt;br /&gt;
Disable the original enswitch_cdrs_archive if it is currently in use.&lt;br /&gt;
&lt;br /&gt;
= Configure web servers =&lt;br /&gt;
&lt;br /&gt;
=== Instruct web interface to use the archive CDR database ===&lt;br /&gt;
&lt;br /&gt;
Add the following to /etc/enswitch/databases.conf on the web servers:&lt;br /&gt;
&lt;br /&gt;
 delete/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 insert/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 select/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Restart apache ===&lt;br /&gt;
&lt;br /&gt;
 sudo service apache2 restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure roles =&lt;br /&gt;
&lt;br /&gt;
To allow non-System Owner users to search archived CDRs, set &amp;quot;Call history (archived)&amp;quot; to &amp;quot;Yes&amp;quot; under the appropriate roles.  It is best to enable this only for roles where it is absolutely necessary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure backups of archived CDRs =&lt;br /&gt;
&lt;br /&gt;
CDRs archived on a remote server will no longer be backed up by the standard Enswitch backup script.&lt;br /&gt;
&lt;br /&gt;
=== backup script ===&lt;br /&gt;
&lt;br /&gt;
This script makes one backup per day of the week, overwriting the copy from a week earlier.&lt;br /&gt;
&lt;br /&gt;
Create /usr/local/sbin/mysql-backup.sh with the following contents:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 &lt;br /&gt;
 DAY=`date +%a`&lt;br /&gt;
 HOSTNAME=`hostname`&lt;br /&gt;
 MYSQL_USERNAME=$1&lt;br /&gt;
 MYSQL_PASSWORD=$2&lt;br /&gt;
 BACKUP_PATH=$3&lt;br /&gt;
 COMPRESS_LEVEL=$4&lt;br /&gt;
 &lt;br /&gt;
 rm -f $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql*&lt;br /&gt;
 &lt;br /&gt;
 mysqldump -u $MYSQL_USERNAME --password=$MYSQL_PASSWORD --all-databases --skip-lock-tables --single-transaction &amp;gt;  $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 &lt;br /&gt;
 if [ $COMPRESS_LEVEL -gt 0 ] &amp;amp;&amp;amp; [ $COMPRESS_LEVEL -lt 10 ]&lt;br /&gt;
 then&lt;br /&gt;
   xz -$COMPRESS_LEVEL $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Change permissions on /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
 sudo chmod +x /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
Add cron entry:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;1 0     * * *   root    /usr/local/sbin/mysql-backup.sh root PASSWORD /root/mysqlbackups 3&amp;quot; | sudo tee /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
Since the cron file will contain a MySQL password, make it readable only by root&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 700 /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/install/mysql/replication/&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=136</id>
		<title>Archiving CDRs to a remote MySQL Master/Master pair</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=136"/>
		<updated>2015-10-30T18:04:13Z</updated>

		<summary type="html">&lt;p&gt;Danthony: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Disclaimer =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Info =&lt;br /&gt;
&lt;br /&gt;
This document is a companion to the official documentation from Integrics at http://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
Some of the commands are borrowed from the above page and from other Integrics documentation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Enswitch version used is 3.11, and the servers run Ubuntu 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
The servers used in this example are as follows:&lt;br /&gt;
&lt;br /&gt;
*database0 - Main active database server&lt;br /&gt;
*database1 - Main standby database server&lt;br /&gt;
*cdrdatabase0 - Server where CDRs will be archived&lt;br /&gt;
*cdrdatabase1 - Server where CDRs will be archived&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= OS install =&lt;br /&gt;
&lt;br /&gt;
Load the servers with the same OS as the current Enswitch database servers.  In this example I use Ubuntu 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
=== Update all OS packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install an appropriate firewall.  This is outside the scope of this document, but an example may be added later.&lt;br /&gt;
&lt;br /&gt;
=== Install optional packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark&lt;br /&gt;
&lt;br /&gt;
= Database install and configuration =&lt;br /&gt;
&lt;br /&gt;
=== Install MySQL on cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install mysql-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure master/master replication between cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable remote connections to MySQL on cdrdatabase0 and cdrdatabase1.  In /etc/mysql/my.cnf, change the bind-address variable to 0.0.0.0.&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable binary logging on cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 log_bin = /var/lib/mysql/mysql-bin.log&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to limit replication to only the enswitch database:&lt;br /&gt;
&lt;br /&gt;
 replicate-do-db = enswitch&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 server-id = 10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 server-id = 11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable a single file per table:&lt;br /&gt;
&lt;br /&gt;
 innodb_file_per_table&lt;br /&gt;
&lt;br /&gt;
Restart MySQL:&lt;br /&gt;
&lt;br /&gt;
 sudo service mysql restart&lt;br /&gt;
&lt;br /&gt;
Configure replication&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.89 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.88 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.88', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.89', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Install and configure Heartbeat =&lt;br /&gt;
&lt;br /&gt;
=== Add entries to /etc/hosts for each server on cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
 10.1.0.88   cdrdatabase0&lt;br /&gt;
 10.1.0.89   cdrdatabase1&lt;br /&gt;
&lt;br /&gt;
=== Install heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 cdrdatabase0 IPaddr::10.1.0.93/26/eth0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.89&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node cdrdatabase0&lt;br /&gt;
 node cdrdatabase1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.88&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node cdrdatabase0&lt;br /&gt;
 node cdrdatabase1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 secret&lt;br /&gt;
 3 md5 dhcp&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add a firewall rule to allow heartbeat.  This goes in /etc/firewall.sh anywhere after the &amp;quot;iptables -A INPUT -i lo -j ACCEPT&amp;quot; line and before &amp;quot;iptables -A INPUT -j LOG&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.89 -m comment --comment &amp;quot;Allow heartbeat from cdrdatabase1&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.88 -m comment --comment &amp;quot;Allow heartbeat from cdrdatabase0&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply new firewall rules:&lt;br /&gt;
&lt;br /&gt;
 sudo sh /etc/firewall.sh&lt;br /&gt;
&lt;br /&gt;
=== Start Heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Start heartbeat on cdrdatabase0:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
Wait until you see the 10.1.0.93 IP address on eth0:0 and drbd0 volume mounted at /mnt/drbd0, then start heartbeat on cdrdatabase1:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
You should now be able to test heartbeat by running hb_takeover on each box and having it take over the 10.1.0.93 IP and the drbd0 volume.&lt;br /&gt;
 sudo /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure CDR archiving =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make a backup of the current Enswitch database&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally, also back up the individual tables in the current database for easy retrieval.  NOTE: this should only be run on a test system, not in production, since it will take a very long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE cdrs_backup1 LIKE cdrs;&lt;br /&gt;
 INSERT INTO cdrs_backup1 SELECT * FROM cdrs;&lt;br /&gt;
 CREATE TABLE cdrcosts_backup1 LIKE cdrcosts;&lt;br /&gt;
 INSERT INTO cdrcosts_backup1 SELECT * FROM cdrcosts;&lt;br /&gt;
 CREATE TABLE cdrcost_taxes_backup1 LIKE cdrcost_taxes;&lt;br /&gt;
 INSERT INTO cdrcost_taxes_backup1 SELECT * FROM cdrcost_taxes;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create enswitch database on cdrdatabase0: ===&lt;br /&gt;
&lt;br /&gt;
 CREATE DATABASE enswitch;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create tables: ===&lt;br /&gt;
&lt;br /&gt;
Run 'SHOW CREATE TABLE' on database0 for each of the three cdr* tables, then run the resulting CREATE TABLE statements on cdrdatabase0.&lt;br /&gt;
&lt;br /&gt;
Create the enswitchcdrsrw user on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 GRANT ALL ON enswitch.* to enswitchcdrsrw IDENTIFIED BY 'password';&lt;br /&gt;
&lt;br /&gt;
=== Run enswitch_cdrs_archive_remote manually: ===&lt;br /&gt;
&lt;br /&gt;
 sudo su - enswitch -c &amp;quot;/opt/enswitch/current/bin/enswitch_cdrs_archive_remote 365 debug&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cron configuration ===&lt;br /&gt;
&lt;br /&gt;
Disable the original enswitch_cdrs_archive if it is currently in use.&lt;br /&gt;
&lt;br /&gt;
= Configure web servers =&lt;br /&gt;
&lt;br /&gt;
=== Instruct web interface to use the archive CDR database ===&lt;br /&gt;
&lt;br /&gt;
Add the following to /etc/enswitch/databases.conf on the web servers:&lt;br /&gt;
&lt;br /&gt;
 delete/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 insert/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 select/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Restart apache ===&lt;br /&gt;
&lt;br /&gt;
 sudo service apache2 restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure roles =&lt;br /&gt;
&lt;br /&gt;
To allow non-System Owner users to search archived CDRs, set &amp;quot;Call history (archived)&amp;quot; to &amp;quot;Yes&amp;quot; under the appropriate roles.  Enable this only for roles that genuinely need it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Configure backups of archived CDRs =&lt;br /&gt;
&lt;br /&gt;
CDRs archived on a remote server will no longer be backed up by the standard Enswitch backup script.&lt;br /&gt;
&lt;br /&gt;
=== Backup script ===&lt;br /&gt;
&lt;br /&gt;
This script will make a backup for each day, overwriting the last.&lt;br /&gt;
&lt;br /&gt;
Create /usr/local/sbin/mysql-backup.sh with the following contents:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 &lt;br /&gt;
 DAY=`date +%a`&lt;br /&gt;
 HOSTNAME=`hostname`&lt;br /&gt;
 MYSQL_USERNAME=$1&lt;br /&gt;
 MYSQL_PASSWORD=$2&lt;br /&gt;
 BACKUP_PATH=$3&lt;br /&gt;
 COMPRESS_LEVEL=$4&lt;br /&gt;
 &lt;br /&gt;
 rm -f $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql*&lt;br /&gt;
 &lt;br /&gt;
 mysqldump -u $MYSQL_USERNAME --password=$MYSQL_PASSWORD --all-databases --skip-lock-tables --single-transaction &amp;gt;  $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 &lt;br /&gt;
 if [ $COMPRESS_LEVEL -gt 0 ] &amp;amp;&amp;amp; [ $COMPRESS_LEVEL -lt 10 ]&lt;br /&gt;
 then&lt;br /&gt;
   xz -$COMPRESS_LEVEL $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
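The guard around xz only compresses when the requested level falls in xz's valid 1-9 range.  A minimal sketch of that bounds check in isolation (the sample levels are illustrative):&lt;br /&gt;

```shell
# Same bounds check as in mysql-backup.sh: compress only for levels 1-9.
check_level() {
  if [ "$1" -gt 0 -a "$1" -lt 10 ]; then
    echo "compress with xz -$1"
  else
    echo "skip compression"
  fi
}
check_level 3
check_level 0
check_level 10
```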
Change permissions on /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
 sudo chmod +x /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
Add cron entry:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;1 0     * * *   root    /usr/local/sbin/mysql-backup.sh root PASSWORD /root/mysqlbackups 3&amp;quot; | sudo tee /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
Since the cron file will contain a MySQL password, make it readable only by root:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/install/mysql/replication/&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=135</id>
		<title>Archiving CDRs to a remote MySQL Master/Master pair</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=135"/>
		<updated>2015-10-30T17:00:02Z</updated>

		<summary type="html">&lt;p&gt;Danthony: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Disclaimer =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Info =&lt;br /&gt;
&lt;br /&gt;
This document is a companion to the official documentation from Integrics at http://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
Some of the commands are borrowed from the above page and from other Integrics documentation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Enswitch version used is 3.11, and the servers run Ubuntu 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
The servers used in this example are as follows:&lt;br /&gt;
&lt;br /&gt;
*database0 - Main active database server&lt;br /&gt;
*database1 - Main standby database server&lt;br /&gt;
*cdrdatabase0 - Server where CDRs will be archived&lt;br /&gt;
*cdrdatabase1 - Server where CDRs will be archived&lt;br /&gt;
&lt;br /&gt;
= OS install =&lt;br /&gt;
&lt;br /&gt;
Load the servers with the same OS as the current Enswitch database servers; in this example, Ubuntu 12.04 64-bit is used.&lt;br /&gt;
&lt;br /&gt;
=== Update all OS packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install an appropriate firewall.  Firewall configuration is beyond the scope of this document, but an example may be added later.&lt;br /&gt;
&lt;br /&gt;
=== Install optional packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark&lt;br /&gt;
&lt;br /&gt;
= Database install and configuration =&lt;br /&gt;
&lt;br /&gt;
=== Install MySQL on cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install mysql-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure master/master replication between cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable remote connections to MySQL on cdrdatabase0 and cdrdatabase1.  In /etc/mysql/my.cnf, change the bind-address variable to 0.0.0.0.&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable binary logging on cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 log_bin = /var/lib/mysql/mysql-bin.log&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to limit replication to only the enswitch database:&lt;br /&gt;
&lt;br /&gt;
 replicate-do-db = enswitch&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 server-id = 10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 server-id = 11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable a single file per table:&lt;br /&gt;
&lt;br /&gt;
 innodb_file_per_table&lt;br /&gt;
&lt;br /&gt;
Restart MySQL:&lt;br /&gt;
&lt;br /&gt;
 sudo service mysql restart&lt;br /&gt;
&lt;br /&gt;
Configure replication&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.89 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.88 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.88', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.89', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
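After both slave threads are started, confirm on each node that 'show slave status\G' reports Yes for both Slave_IO_Running and Slave_SQL_Running.  A minimal shell sketch of such a check (the sample status text below is illustrative, not from a real server):&lt;br /&gt;

```shell
# Count the replication threads reporting Yes; a healthy node shows 2.
# The status text stands in for real "show slave status" output.
status="Slave_IO_Running: Yes
Slave_SQL_Running: Yes"
running=$(printf '%s\n' "$status" | grep -c 'Running: Yes')
echo "$running"
```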
&lt;br /&gt;
= Install and configure Heartbeat =&lt;br /&gt;
&lt;br /&gt;
=== Add entries to /etc/hosts for each server on cdrdatabase0 and cdrdatabase1: ===&lt;br /&gt;
&lt;br /&gt;
 10.1.0.88   cdrdatabase0&lt;br /&gt;
 10.1.0.89   cdrdatabase1&lt;br /&gt;
&lt;br /&gt;
=== Install heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 cdrdatabase0 IPaddr::10.1.0.93/26/eth0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.89&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node cdrdatabase0&lt;br /&gt;
 node cdrdatabase1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.88&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node cdrdatabase0&lt;br /&gt;
 node cdrdatabase1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on cdrdatabase0 and cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 secret&lt;br /&gt;
 3 md5 dhcp&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add a firewall rule to allow heartbeat.  This goes in /etc/firewall.sh anywhere after the &amp;quot;iptables -A INPUT -i lo -j ACCEPT&amp;quot; line and before &amp;quot;iptables -A INPUT -j LOG&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.89 -m comment --comment &amp;quot;Allow heartbeat from cdrdatabase1&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
On cdrdatabase1:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.88 -m comment --comment &amp;quot;Allow heartbeat from cdrdatabase0&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply new firewall rules:&lt;br /&gt;
&lt;br /&gt;
 sudo sh /etc/firewall.sh&lt;br /&gt;
&lt;br /&gt;
=== Start Heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Start heartbeat on cdrdatabase0:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
Wait until you see the 10.1.0.93 IP address configured on eth0:0, then start heartbeat on cdrdatabase1:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
You should now be able to test heartbeat by running hb_takeover on each node and having it take over the 10.1.0.93 IP address.&lt;br /&gt;
 sudo /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
= Configure CDR archiving =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make a backup of the current Enswitch database&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally, also back up the individual tables in the current database for easy retrieval.  NOTE: run this only on a test system, not in production, as it can take a very long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE cdrs_backup1 LIKE cdrs;&lt;br /&gt;
 INSERT INTO cdrs_backup1 SELECT * FROM cdrs;&lt;br /&gt;
 CREATE TABLE cdrcosts_backup1 LIKE cdrcosts;&lt;br /&gt;
 INSERT INTO cdrcosts_backup1 SELECT * FROM cdrcosts;&lt;br /&gt;
 CREATE TABLE cdrcost_taxes_backup1 LIKE cdrcost_taxes;&lt;br /&gt;
 INSERT INTO cdrcost_taxes_backup1 SELECT * FROM cdrcost_taxes;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create enswitch database on cdrdatabase0: ===&lt;br /&gt;
&lt;br /&gt;
 CREATE DATABASE enswitch;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create tables: ===&lt;br /&gt;
&lt;br /&gt;
Run 'SHOW CREATE TABLE' on database0 for each of the three cdr* tables, then execute the resulting CREATE TABLE statements on cdrdatabase0.&lt;br /&gt;
&lt;br /&gt;
Create the enswitchcdrsrw user on cdrdatabase0:&lt;br /&gt;
&lt;br /&gt;
 GRANT ALL ON enswitch.* to enswitchcdrsrw IDENTIFIED BY 'password';&lt;br /&gt;
&lt;br /&gt;
=== Run enswitch_cdrs_archive_remote manually: ===&lt;br /&gt;
&lt;br /&gt;
 sudo su - enswitch -c &amp;quot;/opt/enswitch/current/bin/enswitch_cdrs_archive_remote 365 debug&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cron configuration ===&lt;br /&gt;
&lt;br /&gt;
Disable the original enswitch_cdrs_archive if it is currently in use.&lt;br /&gt;
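To run the remote archiver from cron instead, an entry along the following lines can replace the disabled job (the 02:30 schedule and file name are assumptions; the script path, the 365-day retention argument, and the enswitch user are taken from the steps above):&lt;br /&gt;

```
# /etc/cron.d/enswitch-cdrs-archive-remote (hypothetical file name)
30 2 * * *   enswitch   /opt/enswitch/current/bin/enswitch_cdrs_archive_remote 365
```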
&lt;br /&gt;
= Configure web servers =&lt;br /&gt;
&lt;br /&gt;
=== Instruct web interface to use the archive CDR database ===&lt;br /&gt;
&lt;br /&gt;
Add the following to /etc/enswitch/databases.conf on the web servers:&lt;br /&gt;
&lt;br /&gt;
 delete/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 insert/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 select/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Restart apache ===&lt;br /&gt;
&lt;br /&gt;
 sudo service apache2 restart&lt;br /&gt;
&lt;br /&gt;
= Configure roles =&lt;br /&gt;
&lt;br /&gt;
To allow non-System Owner users to search archived CDRs, set &amp;quot;Call history (archived)&amp;quot; to &amp;quot;Yes&amp;quot; under the appropriate roles.  Enable this only for roles that genuinely need it.&lt;br /&gt;
&lt;br /&gt;
= Configure backups of archived CDRs =&lt;br /&gt;
&lt;br /&gt;
CDRs archived on a remote server will no longer be backed up by the standard Enswitch backup script.&lt;br /&gt;
&lt;br /&gt;
=== Backup script ===&lt;br /&gt;
&lt;br /&gt;
This script will make a backup for each day, overwriting the last.&lt;br /&gt;
&lt;br /&gt;
Create /usr/local/sbin/mysql-backup.sh with the following contents:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 &lt;br /&gt;
 DAY=`date +%a`&lt;br /&gt;
 HOSTNAME=`hostname`&lt;br /&gt;
 MYSQL_USERNAME=$1&lt;br /&gt;
 MYSQL_PASSWORD=$2&lt;br /&gt;
 BACKUP_PATH=$3&lt;br /&gt;
 COMPRESS_LEVEL=$4&lt;br /&gt;
 &lt;br /&gt;
 rm -f $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql*&lt;br /&gt;
 &lt;br /&gt;
 mysqldump -u $MYSQL_USERNAME --password=$MYSQL_PASSWORD --all-databases --skip-lock-tables --single-transaction &amp;gt;  $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 &lt;br /&gt;
 if [ $COMPRESS_LEVEL -gt 0 ] &amp;amp;&amp;amp; [ $COMPRESS_LEVEL -lt 10 ]&lt;br /&gt;
 then&lt;br /&gt;
   xz -$COMPRESS_LEVEL $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
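The guard around xz only compresses when the requested level falls in xz's valid 1-9 range.  A minimal sketch of that bounds check in isolation (the sample levels are illustrative):&lt;br /&gt;

```shell
# Same bounds check as in mysql-backup.sh: compress only for levels 1-9.
check_level() {
  if [ "$1" -gt 0 -a "$1" -lt 10 ]; then
    echo "compress with xz -$1"
  else
    echo "skip compression"
  fi
}
check_level 3
check_level 0
check_level 10
```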
Change permissions on /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
 sudo chmod +x /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
Add cron entry:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;1 0     * * *   root    /usr/local/sbin/mysql-backup.sh root PASSWORD /root/mysqlbackups 3&amp;quot; | sudo tee /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
Since the cron file will contain a MySQL password, make it readable only by root:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/install/mysql/replication/&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=134</id>
		<title>Archiving CDRs to a remote MySQL Master/Master pair</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=134"/>
		<updated>2015-10-30T16:52:50Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Create tables: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Disclaimer =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Info =&lt;br /&gt;
&lt;br /&gt;
This document is a companion to the official documentation from Integrics at http://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
Some of the commands are borrowed from the above page and from other Integrics documentation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Enswitch version used is 3.11, and the servers run Ubuntu 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
The servers used in this example are as follows:&lt;br /&gt;
&lt;br /&gt;
*enswitchdb00 - Main active database server&lt;br /&gt;
*enswitchdb01 - Main standby database server&lt;br /&gt;
*enswitchcdrdb00 - Server where CDRs will be archived&lt;br /&gt;
*enswitchcdrdb01 - Server where CDRs will be archived&lt;br /&gt;
&lt;br /&gt;
= OS install =&lt;br /&gt;
&lt;br /&gt;
Load the servers with the same OS as the current Enswitch database servers; in this example, Ubuntu 12.04 64-bit is used.&lt;br /&gt;
&lt;br /&gt;
=== Update all OS packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install an appropriate firewall.  Firewall configuration is beyond the scope of this document, but an example may be added later.&lt;br /&gt;
&lt;br /&gt;
=== Install optional packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark&lt;br /&gt;
&lt;br /&gt;
= Database install and configuration =&lt;br /&gt;
&lt;br /&gt;
=== Install MySQL on enswitchcdrdb00 and enswitchcdrdb01: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install mysql-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure master/master replication between enswitchcdrdb00 and enswitchcdrdb01: ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable remote connections to MySQL on enswitchcdrdb00 and enswitchcdrdb01.  In /etc/mysql/my.cnf, change the bind-address variable to 0.0.0.0.&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable binary logging on enswitchcdrdb00 and enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 log_bin = /var/lib/mysql/mysql-bin.log&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to limit replication to only the enswitch database:&lt;br /&gt;
&lt;br /&gt;
 replicate-do-db = enswitch&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 server-id = 10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 server-id = 11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable a single file per table:&lt;br /&gt;
&lt;br /&gt;
 innodb_file_per_table&lt;br /&gt;
&lt;br /&gt;
Restart MySQL:&lt;br /&gt;
&lt;br /&gt;
 sudo service mysql restart&lt;br /&gt;
&lt;br /&gt;
Configure replication&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.89 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.88 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb00, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.88', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb01, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Insert the appropriate values instead of LOGFILE and POSITION below, and then run the command in mysql on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.89', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
= Install and configure Heartbeat =&lt;br /&gt;
&lt;br /&gt;
=== Add entries to /etc/hosts for each server on enswitchcdrdb00 and enswitchcdrdb01: ===&lt;br /&gt;
&lt;br /&gt;
 10.1.0.88   enswitchcdrdb00&lt;br /&gt;
 10.1.0.89   enswitchcdrdb01&lt;br /&gt;
&lt;br /&gt;
=== Install heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb00 and enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 enswitchcdrdb00 IPaddr::10.1.0.93/26/eth0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.89&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchcdrdb00&lt;br /&gt;
 node enswitchcdrdb01&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.88&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchcdrdb00&lt;br /&gt;
 node enswitchcdrdb01&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchcdrdb00 and enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 secret&lt;br /&gt;
 3 md5 dhcp&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add a firewall rule to allow heartbeat.  This goes in /etc/firewall.sh anywhere after the &amp;quot;iptables -A INPUT -i lo -j ACCEPT&amp;quot; line and before &amp;quot;iptables -A INPUT -j LOG&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.89 -m comment --comment &amp;quot;Allow heartbeat from enswitchcdrdb01&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.88 -m comment --comment &amp;quot;Allow heartbeat from enswitchcdrdb00&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply new firewall rules:&lt;br /&gt;
&lt;br /&gt;
 sudo sh /etc/firewall.sh&lt;br /&gt;
&lt;br /&gt;
=== Start Heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Start heartbeat on enswitchcdrdb00:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
Wait until you see the 10.1.0.93 IP address configured on eth0:0, then start heartbeat on enswitchcdrdb01:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
You should now be able to test heartbeat by running hb_takeover on each node and having it take over the 10.1.0.93 IP address.&lt;br /&gt;
 sudo /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
= Configure CDR archiving =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make a backup of the current Enswitch database&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally, also back up the individual tables in the current database for easy retrieval.  NOTE: run this only on a test system, not in production, as it can take a very long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE cdrs_backup1 LIKE cdrs;&lt;br /&gt;
 INSERT INTO cdrs_backup1 SELECT * FROM cdrs;&lt;br /&gt;
 CREATE TABLE cdrcosts_backup1 LIKE cdrcosts;&lt;br /&gt;
 INSERT INTO cdrcosts_backup1 SELECT * FROM cdrcosts;&lt;br /&gt;
 CREATE TABLE cdrcost_taxes_backup1 LIKE cdrcost_taxes;&lt;br /&gt;
 INSERT INTO cdrcost_taxes_backup1 SELECT * FROM cdrcost_taxes;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create enswitch database on enswitchcdrdb00: ===&lt;br /&gt;
&lt;br /&gt;
 CREATE DATABASE enswitch;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create tables: ===&lt;br /&gt;
&lt;br /&gt;
Run 'SHOW CREATE TABLE' on enswitchdb00 for each of the three cdr* tables, then execute the resulting CREATE TABLE statements on enswitchcdrdb00.&lt;br /&gt;
&lt;br /&gt;
Create the enswitchcdrsrw user on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 GRANT ALL ON enswitch.* to enswitchcdrsrw IDENTIFIED BY 'password';&lt;br /&gt;
&lt;br /&gt;
=== Run enswitch_cdrs_archive_remote manually: ===&lt;br /&gt;
&lt;br /&gt;
 sudo su - enswitch -c &amp;quot;/opt/enswitch/current/bin/enswitch_cdrs_archive_remote 365 debug&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cron configuration ===&lt;br /&gt;
&lt;br /&gt;
Disable the original enswitch_cdrs_archive if it is currently in use.&lt;br /&gt;
&lt;br /&gt;
= Configure web servers =&lt;br /&gt;
&lt;br /&gt;
=== Instruct web interface to use the archive CDR database ===&lt;br /&gt;
&lt;br /&gt;
Add the following to /etc/enswitch/databases.conf on the web servers:&lt;br /&gt;
&lt;br /&gt;
 delete/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 insert/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 select/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Restart apache ===&lt;br /&gt;
&lt;br /&gt;
 sudo service apache2 restart&lt;br /&gt;
&lt;br /&gt;
= Configure roles =&lt;br /&gt;
&lt;br /&gt;
To allow non-System Owner users to search archived CDRs, set &amp;quot;Call history (archived)&amp;quot; to &amp;quot;Yes&amp;quot; under the appropriate roles.  Enable this only for roles that genuinely need it.&lt;br /&gt;
&lt;br /&gt;
= Configure backups of archived CDRs =&lt;br /&gt;
&lt;br /&gt;
CDRs archived on a remote server will no longer be backed up by the standard Enswitch backup script.&lt;br /&gt;
&lt;br /&gt;
=== Backup script ===&lt;br /&gt;
&lt;br /&gt;
This script will make a backup for each day, overwriting the last.&lt;br /&gt;
&lt;br /&gt;
Create /usr/local/sbin/mysql-backup.sh with the following contents:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 &lt;br /&gt;
 DAY=`date +%a`&lt;br /&gt;
 HOSTNAME=`hostname`&lt;br /&gt;
 MYSQL_USERNAME=$1&lt;br /&gt;
 MYSQL_PASSWORD=$2&lt;br /&gt;
 BACKUP_PATH=$3&lt;br /&gt;
 COMPRESS_LEVEL=$4&lt;br /&gt;
 &lt;br /&gt;
 rm -f $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql*&lt;br /&gt;
 &lt;br /&gt;
 mysqldump -u $MYSQL_USERNAME --password=$MYSQL_PASSWORD --all-databases --skip-lock-tables --single-transaction &amp;gt;  $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 &lt;br /&gt;
 if [ $COMPRESS_LEVEL -gt 0 ] &amp;amp;&amp;amp; [ $COMPRESS_LEVEL -lt 10 ]&lt;br /&gt;
 then&lt;br /&gt;
   xz -$COMPRESS_LEVEL $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Change permissions on /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
 sudo chmod +x /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
Add cron entry:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;1 0     * * *   root    /usr/local/sbin/mysql-backup.sh root PASSWORD /root/mysqlbackups 3&amp;quot; | sudo tee /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
Since the cron file will contain a MySQL password, make it readable only by root:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/install/mysql/replication/&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=133</id>
		<title>Archiving CDRs to a remote MySQL Master/Master pair</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Archiving_CDRs_to_a_remote_MySQL_Master/Master_pair&amp;diff=133"/>
		<updated>2015-10-30T16:49:44Z</updated>

		<summary type="html">&lt;p&gt;Danthony: Created page with &amp;quot;= Disclaimer =   The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Pl...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Disclaimer =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Info =&lt;br /&gt;
&lt;br /&gt;
This document is a companion to the official documentation from Integrics at http://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
Some of the commands are borrowed from the above page and from other Integrics documentation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Enswitch version used is 3.11, and the servers run Ubuntu 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
The servers used in this example are as follows:&lt;br /&gt;
&lt;br /&gt;
*enswitchdb00 - Main active database server&lt;br /&gt;
*enswitchdb01 - Main standby database server&lt;br /&gt;
*enswitchcdrdb00 - Server where CDRs will be archived&lt;br /&gt;
*enswitchcdrdb01 - Server where CDRs will be archived&lt;br /&gt;
&lt;br /&gt;
= OS install =&lt;br /&gt;
&lt;br /&gt;
Load the servers with the same OS as the current Enswitch database servers; in this example, Ubuntu 12.04 64-bit is used.&lt;br /&gt;
&lt;br /&gt;
=== Update all OS packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install an appropriate firewall.  Firewall configuration is out of the scope of this document, but an example may be added later.&lt;br /&gt;
&lt;br /&gt;
=== Install optional packages: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark&lt;br /&gt;
&lt;br /&gt;
= Database install and configuration =&lt;br /&gt;
&lt;br /&gt;
=== Install MySQL on enswitchcdrdb00 and enswitchcdrdb01: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install mysql-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure master/master replication between enswitchcdrdb00 and enswitchcdrdb01: ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable remote connections to MySQL on enswitchcdrdb00 and enswitchcdrdb01.  In /etc/mysql/my.cnf, change the bind-address variable to 0.0.0.0.&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable binary logging on enswitchcdrdb00 and enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 log_bin = /var/lib/mysql/mysql-bin.log&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to limit replication to only the enswitch database:&lt;br /&gt;
&lt;br /&gt;
 replicate-do-db = enswitch&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 server-id = 10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] on enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 server-id = 11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following configuration options to /etc/mysql/my.cnf under [mysqld] to enable a single file per table:&lt;br /&gt;
&lt;br /&gt;
 innodb_file_per_table&lt;br /&gt;
&lt;br /&gt;
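The my.cnf fragments above can be combined into a single [mysqld] block.  The sketch below (not part of the original instructions) writes the combined options to a scratch file; on a real server you would edit /etc/mysql/my.cnf directly, setting SERVER_ID to 10 or 11 as appropriate:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: write the combined [mysqld] replication options to a scratch file.
# On a real server, edit /etc/mysql/my.cnf instead; SERVER_ID is 10 on
# enswitchcdrdb00 and 11 on enswitchcdrdb01.
CNF=${CNF:-/tmp/enswitch-replication.cnf}
SERVER_ID=${SERVER_ID:-10}

printf '%s\n' \
  '[mysqld]' \
  'log_bin = /var/lib/mysql/mysql-bin.log' \
  'replicate-do-db = enswitch' \
  "server-id = $SERVER_ID" \
  'innodb_file_per_table' > "$CNF"

cat "$CNF"
```
&lt;br /&gt;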
Restart MySQL:&lt;br /&gt;
&lt;br /&gt;
 sudo service mysql restart&lt;br /&gt;
&lt;br /&gt;
Configure replication&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.89 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
Add a replicate user on enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 grant super, replication client, replication slave, reload on *.* to replicate@10.1.0.88 identified by 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb00, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Substitute the appropriate values for LOGFILE and POSITION below, and then run the command in mysql on enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.88', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb01, display the current log file and position:&lt;br /&gt;
&lt;br /&gt;
 show master status\G&lt;br /&gt;
&lt;br /&gt;
Substitute the appropriate values for LOGFILE and POSITION below, and then run the command in mysql on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 change master to master_host='10.1.0.89', master_user='replicate', master_password='PASSWORD', master_log_file='LOGFILE', master_log_pos=POSITION;&lt;br /&gt;
&lt;br /&gt;
Start the slave process on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 start slave;&lt;br /&gt;
&lt;br /&gt;
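With both slave processes started, it is worth verifying that replication is healthy in each direction before relying on it.  The following sketch (an addition to the original page, demonstrated on canned output rather than a live server) checks the two thread flags from &amp;quot;show slave status\G&amp;quot;:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: check the two replication thread flags reported by
# "mysql -e 'show slave status\G'".  Reads the status text from stdin.
slave_ok() {
  status=$(cat)
  echo "$status" | grep -q 'Slave_IO_Running: Yes' || return 1
  echo "$status" | grep -q 'Slave_SQL_Running: Yes' || return 1
}

# Demonstration with canned output; replace with a real mysql call:
sample='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'
if echo "$sample" | slave_ok; then echo 'replication healthy'; fi
```

On a live pair, run the check on both enswitchcdrdb00 and enswitchcdrdb01, since each is a slave of the other.&lt;br /&gt;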
= Install and configure Heartbeat =&lt;br /&gt;
&lt;br /&gt;
=== Add entries to /etc/hosts for each server on enswitchcdrdb00 and enswitchcdrdb01: ===&lt;br /&gt;
&lt;br /&gt;
 10.1.0.88   enswitchcdrdb00&lt;br /&gt;
 10.1.0.89   enswitchcdrdb01&lt;br /&gt;
&lt;br /&gt;
=== Install heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configure heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb00 and enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 enswitchcdrdb00 IPaddr::10.1.0.93/26/eth0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.89&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchcdrdb00&lt;br /&gt;
 node enswitchcdrdb01&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 700&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.1.0.88&lt;br /&gt;
 ping 10.1.0.65&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchcdrdb00&lt;br /&gt;
 node enswitchcdrdb01&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchcdrdb00 and enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 secret&lt;br /&gt;
 3 md5 dhcp&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add a firewall rule to allow heartbeat.  This goes in /etc/firewall.sh anywhere after the &amp;quot;iptables -A INPUT -i lo -j ACCEPT&amp;quot; line and before &amp;quot;iptables -A INPUT -j LOG&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.89 -m comment --comment &amp;quot;Allow heartbeat from enswitchcdrdb01&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
On enswitchcdrdb01:&lt;br /&gt;
&lt;br /&gt;
 iptables -t filter -A INPUT -p udp --dport 700 -s 10.1.0.88 -m comment --comment &amp;quot;Allow heartbeat from enswitchcdrdb00&amp;quot; -j ACCEPT&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply new firewall rules:&lt;br /&gt;
&lt;br /&gt;
 sudo sh /etc/firewall.sh&lt;br /&gt;
&lt;br /&gt;
=== Start Heartbeat: ===&lt;br /&gt;
&lt;br /&gt;
Start heartbeat on enswitchcdrdb00:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
Wait until you see the 10.1.0.93 IP address on eth0:0, then start heartbeat on enswitchcdrdb01:&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
&lt;br /&gt;
You should now be able to test heartbeat by running hb_takeover on each box and having it take over the 10.1.0.93 IP.&lt;br /&gt;
 sudo /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
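A quick way to confirm which node currently holds the shared address is to look for 10.1.0.93 in the &amp;quot;ip addr&amp;quot; output.  The sketch below (not from the original page) does this against a canned sample:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: report whether the VIP appears in "ip addr" style output.
# On a real server the input would come from: ip addr show eth0
has_vip() {
  grep -q "inet $1/"
}

sample='inet 10.1.0.88/26 brd 10.1.0.127 scope global eth0
inet 10.1.0.93/26 brd 10.1.0.127 scope global secondary eth0:0'
if echo "$sample" | has_vip 10.1.0.93; then echo 'this node holds the VIP'; fi
```
&lt;br /&gt;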
= Configure CDR archiving =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make a backup of the current Enswitch database&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally, also back up the individual tables in the current database for easy retrieval.  NOTE: this should only be run on a test system, not in production, since it will take a very long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE cdrs_backup1 LIKE cdrs;&lt;br /&gt;
 INSERT INTO cdrs_backup1 SELECT * FROM cdrs;&lt;br /&gt;
 CREATE TABLE cdrcosts_backup1 LIKE cdrcosts;&lt;br /&gt;
 INSERT INTO cdrcosts_backup1 SELECT * FROM cdrcosts;&lt;br /&gt;
 CREATE TABLE cdrcost_taxes_backup1 LIKE cdrcost_taxes;&lt;br /&gt;
 INSERT INTO cdrcost_taxes_backup1 SELECT * FROM cdrcost_taxes;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create enswitch database on enswitchcdrdb00: ===&lt;br /&gt;
&lt;br /&gt;
 CREATE DATABASE enswitch;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Create tables: ===&lt;br /&gt;
&lt;br /&gt;
Do 'SHOW CREATE TABLE' on enswitchdb00 for each of the 3 cdr* tables and run the resulting statements on enswitchcdrdb00.&lt;br /&gt;
&lt;br /&gt;
Create the enswitchcdrsrw user on enswitchcdrdb00:&lt;br /&gt;
&lt;br /&gt;
 GRANT ALL ON enswitch.* TO enswitchcdrsrw IDENTIFIED BY 'PASSWORD';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Run enswitch_cdrs_archive_remote manually: ===&lt;br /&gt;
&lt;br /&gt;
 sudo su - enswitch -c &amp;quot;/opt/enswitch/current/bin/enswitch_cdrs_archive_remote 365 debug&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cron configuration ===&lt;br /&gt;
&lt;br /&gt;
Disable the original enswitch_cdrs_archive if it is currently in use.&lt;br /&gt;
&lt;br /&gt;
= Configure web servers =&lt;br /&gt;
&lt;br /&gt;
=== Instruct web interface to use the archive CDR database ===&lt;br /&gt;
&lt;br /&gt;
Add the following to /etc/enswitch/databases.conf on the web servers:&lt;br /&gt;
&lt;br /&gt;
 delete/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 insert/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
 select/cdrs/archive, 1, 100, mysql, 10.1.0.93, 3306, enswitch, enswitchcdrsrw, PASSWORD, 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Restart apache ===&lt;br /&gt;
&lt;br /&gt;
 sudo service apache2 restart&lt;br /&gt;
&lt;br /&gt;
= Configure roles =&lt;br /&gt;
&lt;br /&gt;
In order to allow non-System Owner users to search archived CDRs, set the &amp;quot;Call history (archived)&amp;quot; permission to &amp;quot;Yes&amp;quot; under the appropriate roles.  It is best to set this only for roles where it is absolutely necessary.&lt;br /&gt;
&lt;br /&gt;
= Configure backups of archived CDRs =&lt;br /&gt;
&lt;br /&gt;
CDRs archived on a remote server will no longer be backed up by the standard Enswitch backup script.&lt;br /&gt;
&lt;br /&gt;
=== backup script ===&lt;br /&gt;
&lt;br /&gt;
This script makes one backup per day of the week, overwriting the previous week's backup for that day.&lt;br /&gt;
&lt;br /&gt;
Create /usr/local/sbin/mysql-backup.sh with the following contents:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 &lt;br /&gt;
 DAY=`date +%a`&lt;br /&gt;
 HOSTNAME=`hostname`&lt;br /&gt;
 MYSQL_USERNAME=$1&lt;br /&gt;
 MYSQL_PASSWORD=$2&lt;br /&gt;
 BACKUP_PATH=$3&lt;br /&gt;
 COMPRESS_LEVEL=$4&lt;br /&gt;
 &lt;br /&gt;
 rm -f $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql*&lt;br /&gt;
 &lt;br /&gt;
 mysqldump -u $MYSQL_USERNAME --password=$MYSQL_PASSWORD --all-databases --skip-lock-tables --single-transaction &amp;gt;  $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 &lt;br /&gt;
 if [ $COMPRESS_LEVEL -gt 0 ] &amp;amp;&amp;amp; [ $COMPRESS_LEVEL -lt 10 ]&lt;br /&gt;
 then&lt;br /&gt;
   xz -$COMPRESS_LEVEL $BACKUP_PATH/mysql_backup-$HOSTNAME-$DAY.sql&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
Change permissions on /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
 sudo chmod +x /usr/local/sbin/mysql-backup.sh&lt;br /&gt;
&lt;br /&gt;
Add cron entry:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;1 0     * * *   root    /usr/local/sbin/mysql-backup.sh root PASSWORD /root/mysqlbackups 3&amp;quot; | sudo tee /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
Since the cron file will contain a MySQL password, make it readable only by root:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 700 /etc/cron.d/mysql-backup&lt;br /&gt;
&lt;br /&gt;
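To restore from one of these backups, reverse the process.  The sketch below (an addition; the path and filename pattern follow the script above) only prints the filename it would use, and the actual restore line is left commented out because it is destructive:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: locate the backup for today written by mysql-backup.sh.
# The filename pattern matches the script above.
DAY=$(date +%a)
HOST=$(hostname)
BACKUP="/root/mysqlbackups/mysql_backup-$HOST-$DAY.sql.xz"
echo "would restore from: $BACKUP"
# On a real server (destructive; test on a lab system first):
#   xz -dc "$BACKUP" | mysql -u root -p
```
&lt;br /&gt;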
= References =&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/cdrs/&lt;br /&gt;
&lt;br /&gt;
https://integrics.com/enswitch/guides/3.11/en/field/install/mysql/replication/&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=130</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=130"/>
		<updated>2015-07-17T16:18:32Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp ifenslave&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up uid 100 and gid 101.  Then chown the libuuid files so they pick up the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example, /dev/sda2 is used:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
                 after-sb-0pri discard-zero-changes;&lt;br /&gt;
                 after-sb-1pri discard-secondary;&lt;br /&gt;
                 after-sb-2pri disconnect;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
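The sync state can also be checked non-interactively, which is handy in scripts.  The sketch below (not part of the original instructions, demonstrated on a canned /proc/drbd line) treats the resource as ready only when both disks report UpToDate:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: decide whether the initial DRBD sync has finished.
# On a real server the input would come from: cat /proc/drbd
drbd_synced() {
  grep -q 'ds:UpToDate/UpToDate'
}

sample=' 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
if echo "$sample" | drbd_synced; then echo 'sync complete'; fi
```
&lt;br /&gt;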
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
 sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export NFS shares&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
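Before unmounting, it can be reassuring to verify that the copy is complete.  The sketch below (an addition to the original procedure, demonstrated on throwaway directories) compares checksums of every file in two trees; on the old NFS box you would run it against /var/lib/enswitch and /var/lib/enswitch2:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: compare file contents of two directory trees via checksums.
# Usage on the NFS box: dirs_match /var/lib/enswitch /var/lib/enswitch2
dirs_match() {
  sum1=$(cd "$1" || exit 1; find . -type f -exec md5sum {} + | sort)
  sum2=$(cd "$2" || exit 1; find . -type f -exec md5sum {} + | sort)
  [ -n "$sum1" ] || return 1
  [ "$sum1" = "$sum2" ]
}

# Demonstration on throwaway trees:
tmp=$(mktemp -d)
mkdir -p "$tmp/a" "$tmp/b"
echo data > "$tmp/a/file"
echo data > "$tmp/b/file"
if dirs_match "$tmp/a" "$tmp/b"; then echo 'trees match'; fi
rm -r "$tmp"
```
&lt;br /&gt;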
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch        /var/lib/enswitch      nfs     rsize=32768,wsize=32768,hard,timeo=50,fg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
http://drbd.linbit.com/users-guide/&lt;br /&gt;
&lt;br /&gt;
https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
&lt;br /&gt;
https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
&lt;br /&gt;
https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
&lt;br /&gt;
https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=129</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=129"/>
		<updated>2015-07-17T16:17:26Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp ifenslave&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up uid 100 and gid 101.  Then chown the libuuid files so they pick up the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example, /dev/sda2 is used:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
 sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export NFS shares&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch        /var/lib/enswitch      nfs     rsize=32768,wsize=32768,hard,timeo=50,fg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
 http://drbd.linbit.com/users-guide/&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=128</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=128"/>
		<updated>2015-06-18T13:34:51Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Kamailio-&amp;gt;OpenSIPS: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  This is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official doc at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official doc uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  We were able to use this procedure to switch our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
'''Do the following on sip0 and sip1'''&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
Change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 Change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=127</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=127"/>
		<updated>2015-06-18T13:34:37Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Kamailio-&amp;gt;OpenSIPS: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  This is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official doc at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official doc uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  We were able to use this procedure to switch our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
'''Do the following on sip0 and sip1'''&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
Change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 Change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=126</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=126"/>
		<updated>2015-06-18T13:34:25Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Kamailio-&amp;gt;OpenSIPS: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  This is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official doc at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official doc uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  We were able to use this procedure to switch our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
'''Do the following on sip0 and sip1'''&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
Change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 Change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=125</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=125"/>
		<updated>2015-06-18T13:33:49Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* OpenSIPS-&amp;gt;Kamailio: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  This is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official doc at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official doc uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  We were able to use this procedure to switch our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
'''Do the following on sip0 and sip1'''&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
Change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
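The manual edit above can also be scripted with sed. A sketch that dry-runs the substitution on a copy first (the haresources line below is a placeholder with an example IP; the real file is /etc/ha.d/haresources):

```shell
# Write an example haresources line to a scratch file (placeholder resource line).
printf 'sip0 IPaddr::192.0.2.10 kamailio\n' > /tmp/haresources.example
# Swap the resource name, writing the result to a new file for review.
sed 's/kamailio/opensips/' /tmp/haresources.example > /tmp/haresources.new
cat /tmp/haresources.new
```

Review /tmp/haresources.new, then apply the same sed in place (with -i, under sudo) to the live file on both nodes.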
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
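Before running the update above, the current value can be confirmed; the table and column names are taken from the update statement itself:

```sql
-- Expect the Kamailio-era value here before switching back to OpenSIPS.
SELECT table_name, table_version FROM version WHERE table_name='location';
```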
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=124</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=124"/>
		<updated>2015-06-18T13:33:28Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* OpenSIPS-&amp;gt;Kamailio: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  It is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official documentation at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official documentation uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  We used this procedure to switch our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
'''Do the following on sip0 and sip1'''&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=123</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=123"/>
		<updated>2015-06-18T13:33:10Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* OpenSIPS-&amp;gt;Kamailio: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  It is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official documentation at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official documentation uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  We used this procedure to switch our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
'''Do the following on sip0 and sip1'''&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=122</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=122"/>
		<updated>2015-06-18T13:32:15Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  It is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official documentation at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official documentation uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  We used this procedure to switch our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
'''Do the following on sip0 and sip1'''&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources &lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=121</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=121"/>
		<updated>2015-06-18T13:32:02Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* OpenSIPS-&amp;gt;Kamailio: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  It is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official documentation at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official documentation uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  We used this procedure to switch our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
'''Do the following on sip0 and sip1'''&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources &lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=120</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=120"/>
		<updated>2015-06-18T13:30:17Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* The following allows the conversion from Kamailio back to OpenSIPS if needed: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  This is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official doc at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official doc uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  Using the procedure below, we switched our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Do the following on sip0 and sip1:&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
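The exact form of the database URL depends on the Enswitch-generated kamailio.cfg; in a stock Kamailio 3.x config it is a define near the top of the file.  The host and credentials below are placeholders, not values from this procedure:&lt;br /&gt;
&lt;br /&gt;
 #!define DBURL &amp;quot;mysql://enswitch:secret@enswitchdb0/enswitch&amp;quot;&lt;br /&gt;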
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
Change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Kamailio-&amp;gt;OpenSIPS: ==&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 Change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=119</id>
		<title>Migrating OpenSIPS location table to Kamailio format</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Migrating_OpenSIPS_location_table_to_Kamailio_format&amp;diff=119"/>
		<updated>2015-06-18T13:29:01Z</updated>

		<summary type="html">&lt;p&gt;Danthony: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure allows the user to migrate between OpenSIPS 1.4.4 and Kamailio 3.3.2 without losing the contents of the location table.  This is written for Debian/Ubuntu and may need some changes to work with RedHat/CentOS.  It is based on the official doc at http://www.integrics.com/products/enswitch/guides/3.11/en/field/kamailio/opensips/&lt;br /&gt;
&lt;br /&gt;
The official doc uses the &amp;quot;enswitch kamailio_tables&amp;quot; command, which drops the location table, leaving all registered lines unable to receive calls until they re-register.  Using the procedure below, we switched our production system from OpenSIPS to Kamailio with less than 30 seconds of downtime.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the purposes of this document, sip0 is the primary OpenSIPS/Kamailio server and sip1 is the standby OpenSIPS/Kamailio server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== OpenSIPS-&amp;gt;Kamailio: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Do the following on sip0 and sip1:&lt;br /&gt;
&lt;br /&gt;
 cpan Math::BigInt::FastCalc &lt;br /&gt;
&lt;br /&gt;
 enswitch install kamailio-ha &lt;br /&gt;
&lt;br /&gt;
 vi /etc/kamailio/kamailio.cfg # and set the database URL &lt;br /&gt;
&lt;br /&gt;
'''Update haresources on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
Change &amp;quot;opensips&amp;quot; to &amp;quot;kamailio&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
'''Stop OpenSIPS on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips stop&lt;br /&gt;
&lt;br /&gt;
'''Remove OpenSIPS init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='5' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert OpenSIPS version of location table to work with Kamailio on enswitchdb0:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-05-28 21:32:15',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '1',&lt;br /&gt;
  `last_modified` datetime NOT NULL DEFAULT '1900-01-01 00:00:01',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(64) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  PRIMARY KEY (`id`),&lt;br /&gt;
  KEY `account_contact_idx` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_opensips;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Add link for Kamailio init script on sip0 and sip1:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/kamailio /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Start Kamailio on sip0:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio start&lt;br /&gt;
&lt;br /&gt;
'''Test calls on sip0:'''&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip1 and test:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
'''Run hb_takeover on sip0 and test again:'''&lt;br /&gt;
&lt;br /&gt;
 /usr/share/heartbeat/hb_takeover&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== The following allows the conversion from Kamailio back to OpenSIPS if needed: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Kamailio-&amp;gt;OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
'''Update haresources'''&lt;br /&gt;
&lt;br /&gt;
 Change &amp;quot;kamailio&amp;quot; to &amp;quot;opensips&amp;quot; in /etc/ha.d/haresources&lt;br /&gt;
&lt;br /&gt;
'''Stop Kamailio:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/kamailio stop&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Remove Kamailio init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo rm /etc/ha.d/resource.d/kamailio&lt;br /&gt;
&lt;br /&gt;
'''Update version for location table'''&lt;br /&gt;
&lt;br /&gt;
 update version set table_version='1004' where table_name='location';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Convert Kamailio version of location table to work with OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 RENAME TABLE location TO location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 CREATE TABLE `location` (&lt;br /&gt;
  `username` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `domain` varchar(128) NOT NULL DEFAULT '',&lt;br /&gt;
  `contact` varchar(255) NOT NULL DEFAULT '',&lt;br /&gt;
  `received` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `path` varchar(255) DEFAULT NULL,&lt;br /&gt;
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',&lt;br /&gt;
  `q` float(10,2) NOT NULL DEFAULT '1.00',&lt;br /&gt;
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',&lt;br /&gt;
  `cseq` int(11) NOT NULL DEFAULT '42',&lt;br /&gt;
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,&lt;br /&gt;
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',&lt;br /&gt;
  `flags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `cflags` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `user_agent` varchar(100) NOT NULL DEFAULT '',&lt;br /&gt;
  `socket` varchar(128) DEFAULT NULL,&lt;br /&gt;
  `methods` int(11) DEFAULT NULL,&lt;br /&gt;
  `id` int(10) NOT NULL DEFAULT '0',&lt;br /&gt;
  `ruid` varchar(64) NOT NULL DEFAULT '',&lt;br /&gt;
  `reg_id` int(11) NOT NULL DEFAULT '0',&lt;br /&gt;
  `instance` varchar(255) DEFAULT NULL,&lt;br /&gt;
  KEY `username` (`username`,`domain`,`contact`)&lt;br /&gt;
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1&lt;br /&gt;
 SELECT * FROM location_old_kamailio;&lt;br /&gt;
&lt;br /&gt;
'''Add link for OpenSIPS init script:'''&lt;br /&gt;
&lt;br /&gt;
 sudo ln -sf /opt/enswitch/current/etc/init.d/debian/opensips /etc/ha.d/resource.d/opensips&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Start OpenSIPS:'''&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/ha.d/resource.d/opensips start&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=118</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=118"/>
		<updated>2015-06-17T21:20:03Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
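The dd and chmod steps can be sanity-checked at small scale without root (a 1 MiB temp file instead of the 2 GiB swap file; mkswap and swapon themselves still need root):&lt;br /&gt;

```shell
# Miniature dry run of the swap-file recipe: zero-fill a 1 MiB temp file
# and give it the 0600 permissions a swap file requires.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=1 2>/dev/null
chmod 0600 "$f"
stat -c '%s %a' "$f"
```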
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp ifenslave&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group (e.g. with usermod -u and groupmod -g) to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources on enswitchstorage0 and enswitchstorage1.  Heartbeat requires this file to be identical on both nodes; the first field names the preferred primary:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch        /var/lib/enswitch      nfs     rsize=32768,wsize=32768,hard,timeo=50,fg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
 http://drbd.linbit.com/users-guide/&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=115</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=115"/>
		<updated>2015-05-22T18:31:09Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp ifenslave&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules (match options such as -m comment cannot be used with -F)&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
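The rules above can be collected into a small standalone script (hypothetical layout and variable name; it must run as root, and it assumes the default chain policies are managed elsewhere):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of a firewall script based on the rules above; run as root.
SUBNET=10.0.0.0/24   # Enswitch subnet
iptables -F          # clear all existing rules
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -s "$SUBNET" -j ACCEPT
iptables -A INPUT -j LOG
iptables -A INPUT -j DROP
```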
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101, then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example, /dev/sda2 is used:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
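With drbd8-utils installed, the file can be syntax-checked before starting DRBD; drbdadm re-prints the parsed configuration and exits non-zero on errors (this needs the config in place on the node):&lt;br /&gt;

```shell
# Parse /etc/drbd.conf and print the canonical form of the drbd0 resource
sudo drbdadm dump drbd0
```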
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the progress of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Note that this file must be identical on both nodes; the hostname at the start of the line names the preferred primary, so both copies correctly begin with enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
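The sha1 key shown above is only an example; a fresh random key can be generated with something like the following (assumes openssl is installed):&lt;br /&gt;

```shell
# Generate a random 40-character hex key for the sha1 line in authkeys
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl dgst -sha1 | awk '{print $NF}')
echo "$key"
```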
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync the data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers, commenting out the current line for the /var/lib/enswitch NFS mount.  Both the remote path and the mount point must be /var/lib/enswitch for the mount command below to work:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch        /var/lib/enswitch       nfs     rsize=32768,wsize=32768,hard,timeo=50,fg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
 http://drbd.linbit.com/users-guide/&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=114</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=114"/>
		<updated>2015-05-22T17:54:16Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp ifenslave&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules (match options such as -m comment cannot be used with -F)&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101, then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example, /dev/sda2 is used:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the progress of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Note that this file must be identical on both nodes; the hostname at the start of the line names the preferred primary, so both copies correctly begin with enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync the data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers, commenting out the current line for the /var/lib/enswitch NFS mount.  Both the remote path and the mount point must be /var/lib/enswitch for the mount command below to work:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch        /var/lib/enswitch       nfs     rsize=32768,wsize=32768,hard,timeo=50,fg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
 http://drbd.linbit.com/users-guide/s-resolve-split-brain.html&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=113</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=113"/>
		<updated>2015-05-22T17:54:04Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Cutover procedure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp ifenslave&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules (match options such as -m comment cannot be used with -F)&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101, then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example, /dev/sda2 is used:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the progress of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Note that this file must be identical on both nodes; the hostname at the start of the line names the preferred primary, so both copies correctly begin with enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync the data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers, commenting out the current line for the /var/lib/enswitch NFS mount.  Both the remote path and the mount point must be /var/lib/enswitch for the mount command below to work:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch        /var/lib/enswitch       nfs     rsize=32768,wsize=32768,hard,timeo=50,fg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
 http://drbd.linbit.com/users-guide/s-resolve-split-brain.html&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=112</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=112"/>
		<updated>2015-05-22T16:51:47Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
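The swap file can be confirmed active before continuing (an optional check):&lt;br /&gt;
&lt;br /&gt;
 swapon -s&lt;br /&gt;
 free -m&lt;br /&gt;
&lt;br /&gt;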
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp ifenslave&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules; -F takes no match options such as -m comment&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber libuuid user and group to free up uid 100 and gid 101.  Then change the UID/GID of the libuuid files to match their new groups:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
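The renumbering itself might look like the following; the target IDs 300 and 301 are arbitrary examples, so substitute any free UID/GID on your system:&lt;br /&gt;
&lt;br /&gt;
 sudo groupmod -g 301 libuuid&lt;br /&gt;
 sudo usermod -u 300 -g 301 libuuid&lt;br /&gt;
&lt;br /&gt;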
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create partition for the DRBD volume on enswitchstorage0 and enswitchstorage1, in this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the DRBD metadata and start the service on both servers (the resource is named drbd0 in /etc/drbd.conf):&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0 (on the primary only):&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
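As a sanity check (not part of the original procedure), the active export list can be reviewed on each storage server:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -v&lt;br /&gt;
&lt;br /&gt;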
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Note that this file must be identical on both nodes; the node name at the start of the line designates the preferred node for the resource group:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
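Heartbeat can then be started on both nodes, and the shared IP verified on whichever node becomes active (a suggested check; eth0 is the interface named in ha.cf):&lt;br /&gt;
&lt;br /&gt;
 sudo service heartbeat start&lt;br /&gt;
 ip addr show eth0 | grep 10.0.0.109&lt;br /&gt;
&lt;br /&gt;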
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync the data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch    /var/lib/enswitch    nfs     rsize=32768,wsize=32768,hard,timeo=50,fg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
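Before relying on the new cluster, a manual failover test is advisable (suggested, not part of the original procedure).  On the active storage node:&lt;br /&gt;
&lt;br /&gt;
 sudo service heartbeat stop&lt;br /&gt;
&lt;br /&gt;
The standby node should take over 10.0.0.109 within the configured deadtime (10 seconds), and the clients' NFS mount should keep working.  Start heartbeat again afterwards; with auto_failback off, the resources stay on the newly active node.&lt;br /&gt;
&lt;br /&gt;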
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=111</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=111"/>
		<updated>2015-05-22T14:53:23Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Cutover procedure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules; -F takes no match options such as -m comment&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber libuuid user and group to free up uid 100 and gid 101.  Then change the UID/GID of the libuuid files to match their new groups:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create partition for the DRBD volume on enswitchstorage0 and enswitchstorage1, in this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the DRBD metadata and start the service on both servers (the resource is named drbd0 in /etc/drbd.conf):&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0 (on the primary only):&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Note that this file must be identical on both nodes; the node name at the start of the line designates the preferred node for the resource group:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync the data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch    /var/lib/enswitch    nfs     rsize=32768,wsize=32768,hard,timeo=50,fg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=108</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=108"/>
		<updated>2015-05-07T19:08:44Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Cutover procedure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules; -F takes no match options such as -m comment&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber libuuid user and group to free up uid 100 and gid 101.  Then change the UID/GID of the libuuid files to match their new groups:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create partition for the DRBD volume on enswitchstorage0 and enswitchstorage1, in this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the DRBD metadata and start the service on both servers (the resource is named drbd0 in /etc/drbd.conf):&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0 (on the primary only):&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Note that this file must be identical on both nodes; the node name at the start of the line designates the preferred node for the resource group:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
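&lt;br /&gt;
Once Heartbeat has been started on both nodes, you can check which node currently holds the shared IP (10.0.0.109).  This is an optional verification sketch, not part of the original procedure:&lt;br /&gt;
&lt;br /&gt;
 ip addr show eth0 | grep 10.0.0.109&lt;br /&gt;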
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
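&lt;br /&gt;
A dry run (rsync's -n flag) can optionally confirm that nothing is left to copy before cutting over:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -avn --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;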
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch NFS mount):&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch    /var/lib/enswitch       nfs     rsize=32768,wsize=32768,hard,timeo=50,bg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart Enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=107</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=107"/>
		<updated>2015-05-07T18:36:47Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 and 12.04, both 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64-bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
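&lt;br /&gt;
As an optional check, confirm that the swap file is active:&lt;br /&gt;
&lt;br /&gt;
 swapon -s&lt;br /&gt;
 free -m&lt;br /&gt;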
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F   # flush all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
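&lt;br /&gt;
The renumbering itself can be done with groupmod and usermod.  The new IDs below (UID 300, GID 300) are arbitrary examples; pick values that are unused on your system:&lt;br /&gt;
&lt;br /&gt;
 sudo groupmod -g 300 libuuid&lt;br /&gt;
 sudo usermod -u 300 -g 300 libuuid&lt;br /&gt;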
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
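&lt;br /&gt;
The configuration can optionally be sanity-checked before creating the volume; drbdadm dump parses /etc/drbd.conf and prints the resource as it was resolved:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm dump drbd0&lt;br /&gt;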
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the status of the sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
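&lt;br /&gt;
As an optional check (not part of the original procedure), the export list can be verified on the storage server itself with showmount, which is included with the standard NFS utilities:&lt;br /&gt;
&lt;br /&gt;
 showmount -e localhost&lt;br /&gt;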
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
Change permissions on /etc/ha.d/authkeys:&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
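&lt;br /&gt;
Once Heartbeat has been started on both nodes, you can check which node currently holds the shared IP (10.0.0.109).  This is an optional verification sketch, not part of the original procedure:&lt;br /&gt;
&lt;br /&gt;
 ip addr show eth0 | grep 10.0.0.109&lt;br /&gt;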
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
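&lt;br /&gt;
A dry run (rsync's -n flag) can optionally confirm that nothing is left to copy before cutting over:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -avn --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;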
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch    /var/lib/enswitch       nfs     rsize=32768,wsize=32768,hard,timeo=50,bg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart Enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=106</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=106"/>
		<updated>2015-05-07T18:35:20Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 and 12.04, both 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64-bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F   # flush all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the status of the sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 auth 2&lt;br /&gt;
 1 crc&lt;br /&gt;
 2 sha1 YG89uXsBVF0ufX7iy8w10FRrThwB2zcs&lt;br /&gt;
 3 md5 enswitch&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -t nfs 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch    /var/lib/enswitch       nfs     rsize=32768,wsize=32768,hard,timeo=50,bg,actimeo=3,noatime,nodiratime,noauto    0 0&lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart Enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=105</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=105"/>
		<updated>2015-05-07T18:32:15Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Cutover procedure: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 and 12.04, both 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64-bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F   # flush all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the status of the sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Heartbeat requires this file to be identical on both nodes; the first field names the preferred node, which is why both copies below begin with enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 &lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync the data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the existing /var/lib/enswitch NFS mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
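The fstab line itself is blank in this revision of the page.  As an illustration only, assuming the heartbeat shared IP 10.0.0.109 and default NFS mount options, the entry might look like:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch  /var/lib/enswitch  nfs  defaults  0  0&lt;br /&gt;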
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=104</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=104"/>
		<updated>2015-05-07T18:31:50Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 and 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Ubuntu 14.04 64-bit on enswitchstorage0 and enswitchstorage1.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 # Clear all existing rules (iptables -F does not accept -m comment)&lt;br /&gt;
 iptables -F&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the renumbered user and group:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
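&lt;br /&gt;
The renumbering commands themselves are not shown above.  As a sketch, assuming UID 999 and GID 998 are free on your system (check /etc/passwd and /etc/group first):&lt;br /&gt;
&lt;br /&gt;
 sudo usermod -u 999 libuuid&lt;br /&gt;
 sudo groupmod -g 998 libuuid&lt;br /&gt;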
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the DRBD metadata and start DRBD on both servers.  The resource is named drbd0, matching /etc/drbd.conf above:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
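&lt;br /&gt;
To verify the export is active on each server:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -v&lt;br /&gt;
&lt;br /&gt;
(Once heartbeat is running and holds the shared IP, clients can also check with &amp;quot;showmount -e 10.0.0.109&amp;quot;.)&lt;br /&gt;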
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Heartbeat requires this file to be identical on both nodes; the first field names the preferred node, which is why both copies below begin with enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 696&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 &lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync the data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the existing /var/lib/enswitch NFS mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
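The fstab line itself is blank in this revision of the page.  As an illustration only, assuming the heartbeat shared IP 10.0.0.109 and default NFS mount options, the entry might look like:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch  /var/lib/enswitch  nfs  defaults  0  0&lt;br /&gt;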
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=103</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=103"/>
		<updated>2015-05-07T18:29:53Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 and 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Ubuntu 14.04 64-bit on enswitchstorage0 and enswitchstorage1.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 # Clear all existing rules (iptables -F does not accept -m comment)&lt;br /&gt;
 iptables -F&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the renumbered user and group:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the DRBD metadata and start DRBD on both servers.  The resource is named drbd0, matching /etc/drbd.conf above:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources.  Heartbeat requires this file to be identical on both nodes; the first field names the preferred node, which is why both copies below begin with enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf:&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 695&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.123&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 debug 1&lt;br /&gt;
 debugfile /var/log/heartbeat_debug.log&lt;br /&gt;
 logfile /var/log/heartbeat.log&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 &lt;br /&gt;
 keepalive 1&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 warntime 5&lt;br /&gt;
 initdead 60&lt;br /&gt;
 &lt;br /&gt;
 udpport 695&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 ucast eth0 10.0.0.122&lt;br /&gt;
 ping 10.0.0.1&lt;br /&gt;
 &lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 respawn hacluster /usr/lib/heartbeat/ipfail&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 &lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync the data one more time on the old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the existing /var/lib/enswitch NFS mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
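The fstab line itself is blank in this revision of the page.  As an illustration only, assuming the heartbeat shared IP 10.0.0.109 and default NFS mount options, the entry might look like:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch  /var/lib/enswitch  nfs  defaults  0  0&lt;br /&gt;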
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=102</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=102"/>
		<updated>2015-05-07T18:25:42Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 and 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
The gateway IP address is 10.0.0.1&lt;br /&gt;
&lt;br /&gt;
The shared IP for heartbeat is 10.0.0.109&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install Ubuntu 14.04 64-bit on enswitchstorage0 and enswitchstorage1.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 # Clear all existing rules (iptables -F does not accept -m comment)&lt;br /&gt;
 iptables -F&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the renumbered user and group:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the progress of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
The file must be identical on both nodes; the hostname at the start of the line names the preferred primary, in this case enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
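&lt;br /&gt;
Heartbeat also needs /etc/ha.d/ha.cf and /etc/ha.d/authkeys, identical on both nodes. The following is a minimal sketch; the interface, timings, and key are assumptions to adapt to your environment:&lt;br /&gt;
&lt;br /&gt;
 # /etc/ha.d/ha.cf&lt;br /&gt;
 keepalive 2&lt;br /&gt;
 deadtime 15&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 &lt;br /&gt;
 # /etc/ha.d/authkeys, mode 0600&lt;br /&gt;
 auth 1&lt;br /&gt;
 1 sha1 secret&lt;br /&gt;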
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch NFS mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
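&lt;br /&gt;
A plausible example, assuming the floating IP 10.0.0.109 used above; adjust the server, path, and mount options to your environment:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch /var/lib/enswitch nfs defaults 0 0&lt;br /&gt;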
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart Enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=101</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=101"/>
		<updated>2015-05-07T18:24:59Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F   # Clear all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
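&lt;br /&gt;
The renumbering step itself can be done with usermod and groupmod. The IDs shown (998/999) are only placeholders; any free UID/GID will do:&lt;br /&gt;
&lt;br /&gt;
 sudo groupmod -g 998 libuuid&lt;br /&gt;
 sudo usermod -u 999 -g 998 libuuid&lt;br /&gt;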
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install the DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the progress of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Configure NFS server:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/exports on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,no_root_squash,async,no_subtree_check,fsid=0)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-export the NFS shares:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources:&lt;br /&gt;
&lt;br /&gt;
The file must be identical on both nodes; the hostname at the start of the line names the preferred primary, in this case enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.109/24/eth0 drbddisk::drbd0 Filesystem::/dev/drbd0::/var/lib/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
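&lt;br /&gt;
Heartbeat also needs /etc/ha.d/ha.cf and /etc/ha.d/authkeys, identical on both nodes. The following is a minimal sketch; the interface, timings, and key are assumptions to adapt to your environment:&lt;br /&gt;
&lt;br /&gt;
 # /etc/ha.d/ha.cf&lt;br /&gt;
 keepalive 2&lt;br /&gt;
 deadtime 15&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
 &lt;br /&gt;
 # /etc/ha.d/authkeys, mode 0600&lt;br /&gt;
 auth 1&lt;br /&gt;
 1 sha1 secret&lt;br /&gt;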
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount 10.0.0.109:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch NFS mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
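&lt;br /&gt;
A plausible example, assuming the floating IP 10.0.0.109 used above; adjust the server, path, and mount options to your environment:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.109:/var/lib/enswitch /var/lib/enswitch nfs defaults 0 0&lt;br /&gt;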
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart Enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=100</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=100"/>
		<updated>2015-05-07T18:03:19Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F   # Clear all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install the DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the progress of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 &lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=99</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=99"/>
		<updated>2015-05-07T16:20:14Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F   # Clear all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101.  Then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install the DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to monitor the progress of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 &lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=98</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=98"/>
		<updated>2015-05-07T16:04:29Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Cutover procedure: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101, then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
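The renumbering commands themselves are not shown.  A sketch using usermod and groupmod (the new IDs 998/998 are assumptions; use any free IDs on your system):&lt;br /&gt;
&lt;br /&gt;
 sudo groupmod -g 998 libuuid&lt;br /&gt;
 sudo usermod -u 998 -g 998 libuuid&lt;br /&gt;
&lt;br /&gt;
Note that usermod only re-chowns files under the user's home directory, which is why the chown commands above are still needed.&lt;br /&gt;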
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch minor 0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch minor 0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the DRBD metadata and start the service on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0.  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
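While the initial sync runs, /proc/drbd on the primary shows progress along these lines (the numbers here are illustrative):&lt;br /&gt;
&lt;br /&gt;
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----&lt;br /&gt;
     [==&gt;.................] sync'ed: 10.0%&lt;br /&gt;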
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
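The Heartbeat configuration is left blank above.  As a hedged sketch only (the virtual IP 10.0.0.124, the eth0 interface, and the authkeys secret are assumptions not stated in this document), a minimal Heartbeat v1 setup on both servers might look like:&lt;br /&gt;
&lt;br /&gt;
 # /etc/ha.d/ha.cf&lt;br /&gt;
 keepalive 2&lt;br /&gt;
 deadtime 15&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0 enswitchstorage1&lt;br /&gt;
 &lt;br /&gt;
 # /etc/ha.d/haresources (identical on both nodes)&lt;br /&gt;
 enswitchstorage0 drbddisk::drbd-enswitch Filesystem::/dev/drbd-enswitch::/var/lib/enswitch::ext4 IPaddr::10.0.0.124/24/eth0 nfs-kernel-server&lt;br /&gt;
 &lt;br /&gt;
 # /etc/ha.d/authkeys (must be chmod 600)&lt;br /&gt;
 auth 1&lt;br /&gt;
 1 sha1 secret&lt;br /&gt;
&lt;br /&gt;
The drbddisk resource name here follows the drbdadm commands in this document; adjust it to match your /etc/drbd.conf.&lt;br /&gt;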
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
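The NFS export itself is not shown above.  A sketch of /etc/exports for both storage servers (the options, including the fixed fsid, are assumptions; a fixed fsid keeps NFS file handles stable across a failover between the two servers):&lt;br /&gt;
&lt;br /&gt;
 /var/lib/enswitch 10.0.0.0/24(rw,sync,no_subtree_check,fsid=1)&lt;br /&gt;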
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 &lt;br /&gt;
 sudo rsync -av --delete  /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
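The mount command between the mkdir and the rsync is left blank above.  Assuming the new cluster's Heartbeat virtual IP is 10.0.0.124 (an assumption, not stated in this document), it might be:&lt;br /&gt;
&lt;br /&gt;
 sudo mount -t nfs 10.0.0.124:/var/lib/enswitch /var/lib/enswitch2&lt;br /&gt;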
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
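The fstab line is left blank above.  Assuming a Heartbeat virtual IP of 10.0.0.124 for the new cluster (an assumption, not stated in this document), it might look like:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.124:/var/lib/enswitch  /var/lib/enswitch  nfs  defaults,_netdev  0 0&lt;br /&gt;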
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_GlusterFS&amp;diff=97</id>
		<title>Enswitch storage on GlusterFS</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_GlusterFS&amp;diff=97"/>
		<updated>2015-05-07T16:04:02Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Cutover procedure: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from NFS storage to GlusterFS storage.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The GlusterFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New GlusterFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New GlusterFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the GlusterFS volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Upgrade 14.04 kernel to 3.16 to take advantage of new metadata checksum and free inode btree options in XFS:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get install --install-recommends linux-generic-lts-utopic&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -p tcp --dport 24007 -m state --state NEW -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow GlusterFS Daemon from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -p tcp --dport 24008 -m state --state NEW -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow GlusterFS Management from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -p tcp --dport 49152 -m state --state NEW -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow connections to GlusterFS brick #1 from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -p tcp --dport 111 -m state --state NEW -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow portmapper from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -p udp --dport 111 -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow portmapper from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure apt source for GlusterFS 3.5:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;deb http://ppa.launchpad.net/gluster/glusterfs-3.5/ubuntu trusty main&amp;quot; | sudo tee /etc/apt/sources.list.d/glusterfs.list&lt;br /&gt;
 echo &amp;quot;deb-src http://ppa.launchpad.net/gluster/glusterfs-3.5/ubuntu trusty main&amp;quot; | sudo tee -a /etc/apt/sources.list.d/glusterfs.list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a file named /tmp/glusterfs-repo.key containing the following:&lt;br /&gt;
&lt;br /&gt;
 -----BEGIN PGP PUBLIC KEY BLOCK-----&lt;br /&gt;
 Version: SKS 1.1.4&lt;br /&gt;
 Comment: Hostname: keyserver.ubuntu.com&lt;br /&gt;
 &lt;br /&gt;
 mQINBFPtYFcBEADcQMZ9aSR1ptbaEeq/8Bzu7kipaxVGh3Wcma4Lz+QPe0ofxRf+oYR22UVG&lt;br /&gt;
 mJcPnVcGFbXJ50t8BAxwtQ/TSmGdQ93bl6LORAQBZ/ud1LTr2HKpaa0F1bwpi/TAgBWqP64H&lt;br /&gt;
 u0LBGISc0G5m3/hn/bi6XxIIOzJ/L/vqLh1deVaDrYYWy5Cme8AuPtqOARKsefvVgwlpnbCt&lt;br /&gt;
 k+QaE65vgl8MXiYCaOenT07GDCq1xb7hkoVlJS4bf6F3UMJVMVy4oEyYkRw4SP7ULeT1s4yr&lt;br /&gt;
 BeDzbxhFaZRJFvGpvMW3AZxfrhX/5OpZSkQiFn5/2j4eJli4/MmptAAHpGr4tLA+s6mHmA9E&lt;br /&gt;
 9c7wMfyFZe+wMhvangSDp09gSSZs00bqKSnYIJ/oGRjaxCllkw4SMfTOqv8l/GOxRs12yIcZ&lt;br /&gt;
 D08SSmRpoyLffrl1zElyaixtAJRenphTZyq7eRLPyQl6qEDA1XtLs3ThK5/4fghMbe7MOHiM&lt;br /&gt;
 B8MwL1RzLQkl/PU08vxfum9ki/m/LP5xpJopNHZs2L47RlX2+tq6FJWbDvQwOGoFTTnxmdDf&lt;br /&gt;
 4EkMhlB4N+ujZw64pSMt3c08NShxty2UWpbSbc8/e7Ps4B7Lx6eq6AmqrcUChg8c9+PI2LUq&lt;br /&gt;
 j6mDbc8jxpUslvjsLU05xnq6OLv4U//pUTUz6eI8FgFadVZcoQARAQABtBlMYXVuY2hwYWQg&lt;br /&gt;
 UFBBIGZvciBHbHVzdGVyiQI4BBMBAgAiBQJT7WBXAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIe&lt;br /&gt;
 AQIXgAAKCRAT4Bt7P+hpqZ3LEACYYC4UjxwSHouV295Cxfwt9P32GcWJbFmLYtLHWVTt2vdN&lt;br /&gt;
 /M9Xb02YgVLJm/nVy2vJhqcMowSW2jO503mLq672g5mHitnIq1lh4zXcHEvP79aDRQuvkgsL&lt;br /&gt;
 EHjlk2NzYqdAsdRk3TgOLcK0SRM7Cwgwd/b/gVUtPYrX1hvQKrjGJM9VZFcCMX2RmGAS0ft3&lt;br /&gt;
 QHzEAPZCgyamk0qB2eo8tLZYm42iMvq+ZSxGulhzi7gJkpv/wNdaP4E6o8o7KY3JIWMmxBn8&lt;br /&gt;
 QZUKYMobze4PSBg4G4iG2ue9IrGCb8M1o+46aOSyEIc99bznF8Jrw7a8sBufVRjSZIE9A/oM&lt;br /&gt;
 EtB1pTRDn9lwx/DyYbCV16DOsk6d5x4P8cqvgdaGzl7VNLvkwmMaCH0gRFIBr937rEUbeSJH&lt;br /&gt;
 TqrVG0zXzSaUHEwXPZE0Lt2C9dEmMnT6nxC7FbJB1ATPDNx8kL7MvB4jl5HkjrD1W9Xu2y0d&lt;br /&gt;
 zwAKlg5jvzwP46MJgvm+AYK808XhOhMZjWzzt5POeDcDhGhpRSfQtAhSnRkOtKS1drMCt27h&lt;br /&gt;
 LZDEZfCp//aj7jvVL8FjamGEMfm91FLQa5LY7OoJaYoZlYUtthrXV6w5KHFjFYAKgA8tJzeb&lt;br /&gt;
 Tvc1Q9avCo2G5qWNZq6TSLxHEMo/g4gu2aGRPRrKu9w2Ibosg4OqZ/YbXC8SjA==&lt;br /&gt;
 =+Qna&lt;br /&gt;
 -----END PGP PUBLIC KEY BLOCK-----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Import repository key:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-key add /tmp/glusterfs-repo.key&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install GlusterFS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get install glusterfs-server ntp xfsprogs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download and install xfsprogs from a newer distribution.  This is necessary to use the metadata checksum and free inode btree options:&lt;br /&gt;
&lt;br /&gt;
 cd /tmp&lt;br /&gt;
 wget http://mirrors.kernel.org/ubuntu/pool/main/x/xfsprogs/xfsprogs_3.2.1ubuntu1_amd64.deb&lt;br /&gt;
 sudo dpkg -i xfsprogs_3.2.1ubuntu1_amd64.deb&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the GlusterFS volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sda2 as the GlusterFS volume:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
 sudo mkdir -p /var/glusterfs/sda2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If your filesystem is on a standalone hard drive use the following options:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.xfs -m crc=1,finobt=1 /dev/sda2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If your filesystem is on a RAID device, add options for stripe unit and stripe width.  The stripe unit is whatever was configured in the RAID controller when you created the volume, and the stripe width is the number of data disks.  See http://xfs.org/index.php/XFS_FAQ#Q:_How_to_calculate_the_correct_sunit.2Cswidth_values_for_optimal_performance for more information.  In this example we are using a RAID 10 volume on 6 disks with a 64K stripe size:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.xfs -m crc=1,finobt=1 -d su=64k,sw=3 /dev/sda2&lt;br /&gt;
&lt;br /&gt;
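The arithmetic behind sw=3 can be checked quickly: RAID 10 mirrors each stripe, so only half of the 6 disks hold unique data.&lt;br /&gt;

```shell
# RAID 10 stripes across mirrored pairs, so data disks = total disks / 2.
total_disks=6
sw=$((total_disks / 2))
echo "su=64k,sw=${sw}"
```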
&lt;br /&gt;
Mount the GlusterFS partition on enswitchstorage0 and enswitchstorage1.&lt;br /&gt;
&lt;br /&gt;
If the filesystem is smaller than 1TB add the following to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /dev/sda2       /var/glusterfs/sda2	xfs    defaults      0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the filesystem is 1TB or larger, use the inode64 option instead.  This allows inodes to be created above the first 1TB of the filesystem, avoiding inode allocation failures as the filesystem fills.  See http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F for a more in-depth explanation:&lt;br /&gt;
&lt;br /&gt;
 /dev/sda2       /var/glusterfs/sda2	xfs    inode64      0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Mount the partition manually:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/glusterfs/sda2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create directory for enswitch volume:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/glusterfs/sda2/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the peer from enswitchstorage0.  This command should return &amp;quot;peer probe: success&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo gluster peer probe enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check the peer status:&lt;br /&gt;
&lt;br /&gt;
 sudo gluster peer status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You should see this:&lt;br /&gt;
&lt;br /&gt;
 Number of Peers: 1&lt;br /&gt;
 &lt;br /&gt;
 Hostname: enswitchstorage1&lt;br /&gt;
 Port: 24007&lt;br /&gt;
 Uuid: 2eed9049-4f5c-4e14-8d49-8935df95c9fe&lt;br /&gt;
 State: Peer in Cluster (Connected)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the GlusterFS volume; this only needs to be run on enswitchstorage0:&lt;br /&gt;
&lt;br /&gt;
 sudo gluster volume create enswitch replica 2 transport tcp enswitchstorage0:/var/glusterfs/sda2/enswitch enswitchstorage1:/var/glusterfs/sda2/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You should see the following if the creation was successful:&lt;br /&gt;
&lt;br /&gt;
 volume create: enswitch: success: please start the volume to access data&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Start the volume:&lt;br /&gt;
&lt;br /&gt;
 sudo gluster volume start enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Look at volume info:&lt;br /&gt;
&lt;br /&gt;
 sudo gluster volume info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 Volume Name: enswitch&lt;br /&gt;
 Type: Replicate&lt;br /&gt;
 Volume ID: 3276be02-515f-41b1-ab22-a766724c8e64&lt;br /&gt;
 Status: Started&lt;br /&gt;
 Number of Bricks: 1 x 2 = 2&lt;br /&gt;
 Transport-type: tcp&lt;br /&gt;
 Bricks:&lt;br /&gt;
 Brick1: enswitchstorage0:/var/glusterfs/sda2/enswitch&lt;br /&gt;
 Brick2: enswitchstorage1:/var/glusterfs/sda2/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure settings for better performance:&lt;br /&gt;
&lt;br /&gt;
 sudo gluster volume set enswitch performance.cache-size 1GB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101, then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create local mountpoint on enswitchstorage0 and enswitchstorage1 so that the shared files can be accessed locally.  NOTE: do not modify any files directly under /var/glusterfs/sda2/enswitch, as this will cause corruption:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0:/enswitch /var/lib/enswitch glusterfs        direct-io-mode=disable,backupvolfile-server=enswitchstorage1,_netdev 0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Mount /var/lib/enswitch on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
 sudo chown enswitch:enswitch /var/lib/enswitch&lt;br /&gt;
 sudo chmod 775 /var/lib/enswitch&lt;br /&gt;
 sudo chmod g+s /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
== Client install ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure apt source for GlusterFS 3.5:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;deb http://ppa.launchpad.net/gluster/glusterfs-3.5/ubuntu precise main&amp;quot; | sudo tee /etc/apt/sources.list.d/glusterfs.list&lt;br /&gt;
 echo &amp;quot;deb-src http://ppa.launchpad.net/gluster/glusterfs-3.5/ubuntu precise main&amp;quot; | sudo tee -a /etc/apt/sources.list.d/glusterfs.list&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a file named /tmp/glusterfs-repo.key containing the following:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 -----BEGIN PGP PUBLIC KEY BLOCK-----&lt;br /&gt;
 Version: SKS 1.1.4&lt;br /&gt;
 Comment: Hostname: keyserver.ubuntu.com&lt;br /&gt;
 &lt;br /&gt;
 mQINBFPtYFcBEADcQMZ9aSR1ptbaEeq/8Bzu7kipaxVGh3Wcma4Lz+QPe0ofxRf+oYR22UVG&lt;br /&gt;
 mJcPnVcGFbXJ50t8BAxwtQ/TSmGdQ93bl6LORAQBZ/ud1LTr2HKpaa0F1bwpi/TAgBWqP64H&lt;br /&gt;
 u0LBGISc0G5m3/hn/bi6XxIIOzJ/L/vqLh1deVaDrYYWy5Cme8AuPtqOARKsefvVgwlpnbCt&lt;br /&gt;
 k+QaE65vgl8MXiYCaOenT07GDCq1xb7hkoVlJS4bf6F3UMJVMVy4oEyYkRw4SP7ULeT1s4yr&lt;br /&gt;
 BeDzbxhFaZRJFvGpvMW3AZxfrhX/5OpZSkQiFn5/2j4eJli4/MmptAAHpGr4tLA+s6mHmA9E&lt;br /&gt;
 9c7wMfyFZe+wMhvangSDp09gSSZs00bqKSnYIJ/oGRjaxCllkw4SMfTOqv8l/GOxRs12yIcZ&lt;br /&gt;
 D08SSmRpoyLffrl1zElyaixtAJRenphTZyq7eRLPyQl6qEDA1XtLs3ThK5/4fghMbe7MOHiM&lt;br /&gt;
 B8MwL1RzLQkl/PU08vxfum9ki/m/LP5xpJopNHZs2L47RlX2+tq6FJWbDvQwOGoFTTnxmdDf&lt;br /&gt;
 4EkMhlB4N+ujZw64pSMt3c08NShxty2UWpbSbc8/e7Ps4B7Lx6eq6AmqrcUChg8c9+PI2LUq&lt;br /&gt;
 j6mDbc8jxpUslvjsLU05xnq6OLv4U//pUTUz6eI8FgFadVZcoQARAQABtBlMYXVuY2hwYWQg&lt;br /&gt;
 UFBBIGZvciBHbHVzdGVyiQI4BBMBAgAiBQJT7WBXAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIe&lt;br /&gt;
 AQIXgAAKCRAT4Bt7P+hpqZ3LEACYYC4UjxwSHouV295Cxfwt9P32GcWJbFmLYtLHWVTt2vdN&lt;br /&gt;
 /M9Xb02YgVLJm/nVy2vJhqcMowSW2jO503mLq672g5mHitnIq1lh4zXcHEvP79aDRQuvkgsL&lt;br /&gt;
 EHjlk2NzYqdAsdRk3TgOLcK0SRM7Cwgwd/b/gVUtPYrX1hvQKrjGJM9VZFcCMX2RmGAS0ft3&lt;br /&gt;
 QHzEAPZCgyamk0qB2eo8tLZYm42iMvq+ZSxGulhzi7gJkpv/wNdaP4E6o8o7KY3JIWMmxBn8&lt;br /&gt;
 QZUKYMobze4PSBg4G4iG2ue9IrGCb8M1o+46aOSyEIc99bznF8Jrw7a8sBufVRjSZIE9A/oM&lt;br /&gt;
 EtB1pTRDn9lwx/DyYbCV16DOsk6d5x4P8cqvgdaGzl7VNLvkwmMaCH0gRFIBr937rEUbeSJH&lt;br /&gt;
 TqrVG0zXzSaUHEwXPZE0Lt2C9dEmMnT6nxC7FbJB1ATPDNx8kL7MvB4jl5HkjrD1W9Xu2y0d&lt;br /&gt;
 zwAKlg5jvzwP46MJgvm+AYK808XhOhMZjWzzt5POeDcDhGhpRSfQtAhSnRkOtKS1drMCt27h&lt;br /&gt;
 LZDEZfCp//aj7jvVL8FjamGEMfm91FLQa5LY7OoJaYoZlYUtthrXV6w5KHFjFYAKgA8tJzeb&lt;br /&gt;
 Tvc1Q9avCo2G5qWNZq6TSLxHEMo/g4gu2aGRPRrKu9w2Ibosg4OqZ/YbXC8SjA==&lt;br /&gt;
 =+Qna&lt;br /&gt;
 -----END PGP PUBLIC KEY BLOCK-----&lt;br /&gt;
&lt;br /&gt;
Import repository key:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-key add /tmp/glusterfs-repo.key&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install GlusterFS client packages:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get install glusterfs-client ntp&lt;br /&gt;
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the GlusterFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 sudo mount -o direct-io-mode=disable,backupvolfile-server=enswitchstorage1 enswitchstorage0:/enswitch /var/lib/enswitch2&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av --delete /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch nfs mount):&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0:/enswitch /var/lib/enswitch glusterfs        direct-io-mode=disable,backupvolfile-server=enswitchstorage1,_netdev 0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Mount new GlusterFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
http://www.server-world.info/en/note?os=Ubuntu_14.04&amp;amp;p=glusterfs&amp;amp;f=2&lt;br /&gt;
&lt;br /&gt;
http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules&lt;br /&gt;
&lt;br /&gt;
http://www.jamescoyle.net/how-to/351-share-glusterfs-volume-to-a-single-ip-address&lt;br /&gt;
&lt;br /&gt;
https://launchpad.net/~gluster&lt;br /&gt;
&lt;br /&gt;
https://www.howtoforge.com/creating-an-nfs-like-standalone-storage-server-with-glusterfs-3.2.x-on-ubuntu-12.10&lt;br /&gt;
&lt;br /&gt;
http://www.jamescoyle.net/how-to/559-glusterfs-performance-tuning&lt;br /&gt;
&lt;br /&gt;
http://toruonu.blogspot.com/2012/12/xfs-vs-ext4.html&lt;br /&gt;
&lt;br /&gt;
http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/&lt;br /&gt;
&lt;br /&gt;
http://xfs.org/index.php/XFS_FAQ&lt;br /&gt;
&lt;br /&gt;
https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=96</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=96"/>
		<updated>2015-05-07T16:03:08Z</updated>

		<summary type="html">&lt;p&gt;Danthony: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F # Clear all existing rules&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101, then change the ownership of the libuuid files to match the new IDs:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1.  In this example we use /dev/sda2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sda&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch minor 0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch minor 0;&lt;br /&gt;
                 disk /dev/sda2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server (enswitchstorage0 in this case).  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
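Optionally, sanity-check the new filesystem with a temporary mount on the DRBD primary (in normal operation Heartbeat will handle mounting):&lt;br /&gt;
&lt;br /&gt;
 sudo mount /dev/drbd-enswitch /mnt&lt;br /&gt;
 sudo umount /mnt&lt;br /&gt;
&lt;br /&gt;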
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
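A minimal example configuration follows.  The virtual service IP 10.0.0.124, the heartbeat interface eth0, and the mount point /export are assumptions; adjust them to your environment.  The drbddisk argument is the DRBD resource name as used by the drbdadm commands above.&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/ha.cf on both servers:&lt;br /&gt;
&lt;br /&gt;
 keepalive 2&lt;br /&gt;
 deadtime 15&lt;br /&gt;
 warntime 5&lt;br /&gt;
 udpport 694&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on both servers and chmod it to 0600:&lt;br /&gt;
&lt;br /&gt;
 auth 1&lt;br /&gt;
 1 sha1 secret&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources, identical on both servers:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.124/24 drbddisk::drbd-enswitch Filesystem::/dev/drbd-enswitch::/export::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;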
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
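Export the mounted DRBD filesystem by adding a line such as the following to /etc/exports on both servers.  The /export path is an assumption and should match wherever Heartbeat mounts /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 /export 10.0.0.0/24(rw,sync,no_subtree_check)&lt;br /&gt;
&lt;br /&gt;
Then reload the export table:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;
&lt;br /&gt;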
&lt;br /&gt;
== Cutover procedure ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On one of the current NFS boxes, mount the new NFS volume as /var/lib/enswitch2/ and rsync data:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir /var/lib/enswitch2&lt;br /&gt;
 &lt;br /&gt;
 sudo rsync -av /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unmount /var/lib/enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo umount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rsync data one more time on old NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo rsync -av /var/lib/enswitch/ /var/lib/enswitch2/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab on all other servers (commenting out the current line for the /var/lib/enswitch NFS mount):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
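As an illustration only, with a hypothetical hostname enswitchstorage pointing at the Heartbeat virtual IP and the export at /export, the fstab line would look something like:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage:/export   /var/lib/enswitch   nfs   rw,hard,intr   0   0&lt;br /&gt;
&lt;br /&gt;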
Mount new NFS volume on all Enswitch servers:&lt;br /&gt;
&lt;br /&gt;
 sudo mount /var/lib/enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restart Enswitch on all servers:&lt;br /&gt;
&lt;br /&gt;
 sudo enswitch restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=95</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=95"/>
		<updated>2015-05-07T15:59:49Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 64-bit and 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64-bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renumber the libuuid user and group to free up UID 100 and GID 101, then change the ownership of the libuuid files to match the renumbered user and group:&lt;br /&gt;
&lt;br /&gt;
 sudo chown libuuid:libuuid /usr/sbin/uuidd&lt;br /&gt;
 sudo chown libuuid:libuuid /var/lib/libuuid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Enswitch user and group.  There is no Enswitch code on these boxes, but this will make the file ownership show &amp;quot;enswitch:enswitch&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 sudo adduser --system --group --no-create-home --home /var/lib/enswitch/home --disabled-password enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server (enswitchstorage0 in this case).  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=94</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=94"/>
		<updated>2015-05-07T15:54:05Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 64-bit and 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64-bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server (enswitchstorage0 in this case).  You can run &amp;quot;watch cat /proc/drbd&amp;quot; to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=93</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=93"/>
		<updated>2015-05-07T15:49:46Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 64-bit and 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64-bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server (enswitchstorage0 in this case).  You can look at /proc/drbd to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=92</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=92"/>
		<updated>2015-05-07T15:49:09Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64-bit and the clients run Ubuntu 10.04 64-bit and 12.04 64-bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64-bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server (enswitchstorage0 in this case).  You can look at /proc/drbd to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=91</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=91"/>
		<updated>2015-05-07T15:48:51Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install the DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server, in this case enswitchstorage0.  You can watch /proc/drbd to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
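The original leaves this section empty.  As a sketch only, based on a classic Heartbeat v1 haresources setup rather than on anything in this document, the configuration might look as follows; the floating service IP 10.0.0.121 and the mount point /srv/enswitch are assumed values.&lt;br /&gt;
&lt;br /&gt;
/etc/ha.d/ha.cf on both servers:&lt;br /&gt;
&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 keepalive 2&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 node enswitchstorage0 enswitchstorage1&lt;br /&gt;
 auto_failback off&lt;br /&gt;
&lt;br /&gt;
/etc/ha.d/haresources on both servers, all on one line:&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 IPaddr::10.0.0.121/24 drbddisk::drbd-enswitch Filesystem::/dev/drbd-enswitch::/srv/enswitch::ext4 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
/etc/ha.d/authkeys on both servers, owned by root with mode 0600:&lt;br /&gt;
&lt;br /&gt;
 auth 1&lt;br /&gt;
 1 sha1 secret&lt;br /&gt;
&lt;br /&gt;
Because Heartbeat starts nfs-kernel-server on the active node, the NFS server's own boot-time startup is normally disabled once it is installed:&lt;br /&gt;
&lt;br /&gt;
 sudo update-rc.d -f nfs-kernel-server remove&lt;br /&gt;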
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
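The export itself is not shown in the original.  As an assumed example only, with the DRBD filesystem mounted at /srv/enswitch (an assumed path), /etc/exports on both servers might contain:&lt;br /&gt;
&lt;br /&gt;
 /srv/enswitch 10.0.0.0/24(rw,sync,no_subtree_check)&lt;br /&gt;
&lt;br /&gt;
After editing /etc/exports, re-export with:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;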
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
 &lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=90</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=90"/>
		<updated>2015-05-07T15:47:35Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install the DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server, in this case enswitchstorage0.  You can watch /proc/drbd to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=89</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=89"/>
		<updated>2015-05-07T15:46:36Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Install the DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server, in this case enswitchstorage0.  You can watch /proc/drbd to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=88</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=88"/>
		<updated>2015-05-07T15:46:25Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Install the DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd-enswitch {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd-enswitch;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd-enswitch&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To initialize the volume, run the following on the primary server, in this case enswitchstorage0.  You can watch /proc/drbd to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd-enswitch:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd-enswitch&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
	<entry>
		<id>http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=87</id>
		<title>Enswitch storage on NFS with DRBD and Heartbeat</title>
		<link rel="alternate" type="text/html" href="http://wiki.integrics.com/index.php?title=Enswitch_storage_on_NFS_with_DRBD_and_Heartbeat&amp;diff=87"/>
		<updated>2015-05-07T15:45:00Z</updated>

		<summary type="html">&lt;p&gt;Danthony: /* Server configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Disclaimer ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following comes with no warranty whatsoever.  I am not responsible for any data loss or other issues that may arise from following these instructions.  Please make backups of all files and test this thoroughly in your lab environment before using it in production.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
This document details the procedure for migrating a multi-machine Enswitch system from a single NFS storage server to a fault-tolerant cluster using NFS with DRBD and Heartbeat.&lt;br /&gt;
&lt;br /&gt;
The procedure has been tested on Enswitch 3.11, but should work on most other versions.  The NFS servers run Ubuntu 14.04 64bit and the clients run Ubuntu 10.04 64bit and 12.04 64bit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The servers are as follows:&lt;br /&gt;
&lt;br /&gt;
enswitchnfs0 - current active NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchnfs1 - current backup NFS server&lt;br /&gt;
&lt;br /&gt;
enswitchstorage0 - New NFS server 0&lt;br /&gt;
&lt;br /&gt;
enswitchstorage1 - New NFS server 1&lt;br /&gt;
&lt;br /&gt;
The Enswitch subnet is 10.0.0.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server configuration ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Load enswitchstorage0 and enswitchstorage1 with Ubuntu 14.04 64bit.  Make a partition for the OS and leave the rest of the disk empty for the DRBD volume.  Do not create a swap partition; a swap file will be added later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Update all OS packages on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get update&lt;br /&gt;
 sudo apt-get dist-upgrade&lt;br /&gt;
 sudo apt-get autoremove&lt;br /&gt;
 sudo init 6&lt;br /&gt;
&lt;br /&gt;
Create swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo dd if=/dev/zero of=/swapfile0 bs=1M count=2048&lt;br /&gt;
 sudo chmod 0600 /swapfile0&lt;br /&gt;
 sudo mkswap /swapfile0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add the following line to /etc/fstab:&lt;br /&gt;
&lt;br /&gt;
 /swapfile0              none            swap            sw              0 0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable swap file:&lt;br /&gt;
&lt;br /&gt;
 sudo swapon -a&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install ntp&lt;br /&gt;
&lt;br /&gt;
Install additional software on enswitchstorage0 and enswitchstorage1 (optional):&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install htop iotop bwm-ng tshark &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add firewall rules on enswitchstorage0 and enswitchstorage1.  The following can be used as the base for a firewall script:&lt;br /&gt;
&lt;br /&gt;
 iptables -F -m comment --comment &amp;quot;Clear all existing rules&amp;quot;&lt;br /&gt;
 iptables -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment &amp;quot;Allow packets from related and established connections&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -i lo -m comment --comment &amp;quot;Allow all on lo interface&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -s 10.0.0.0/24 -m comment --comment &amp;quot;Allow everything from Enswitch subnet&amp;quot; -j ACCEPT&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Log all unmatched packets&amp;quot; -j LOG&lt;br /&gt;
 iptables -A INPUT -m comment --comment &amp;quot;Drop all unmatched packets&amp;quot; -j DROP&lt;br /&gt;
&lt;br /&gt;
Add entries to /etc/hosts for each server on enswitchstorage0 and enswitchstorage1:&lt;br /&gt;
&lt;br /&gt;
 10.0.0.122   enswitchstorage0&lt;br /&gt;
 10.0.0.123   enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
Create a partition for the DRBD volume on enswitchstorage0 and enswitchstorage1; in this example we use /dev/sdb2:&lt;br /&gt;
&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Install the DRBD utilities:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install drbd8-utils&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create /etc/drbd.conf on both servers with the following contents:&lt;br /&gt;
&lt;br /&gt;
 global { usage-count no; }&lt;br /&gt;
 common { syncer { rate 100M; } }&lt;br /&gt;
 resource drbd0 {&lt;br /&gt;
         protocol C;&lt;br /&gt;
         startup {&lt;br /&gt;
                 wfc-timeout  15;&lt;br /&gt;
                 degr-wfc-timeout 60;&lt;br /&gt;
         }&lt;br /&gt;
         net {&lt;br /&gt;
                 cram-hmac-alg sha1;&lt;br /&gt;
                 shared-secret &amp;quot;secret&amp;quot;;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage0 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.122:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
         on enswitchstorage1 {&lt;br /&gt;
                 device /dev/drbd0;&lt;br /&gt;
                 disk /dev/sdb2;&lt;br /&gt;
                 address 10.0.0.123:7788;&lt;br /&gt;
                 meta-disk internal;&lt;br /&gt;
         }&lt;br /&gt;
 }&lt;br /&gt;
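&lt;br /&gt;
As an optional sanity check (not in the original notes), the configuration can be parsed and printed back before starting DRBD:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm dump drbd0&lt;br /&gt;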
&lt;br /&gt;
&lt;br /&gt;
Create volume on both servers:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm create-md drbd0&lt;br /&gt;
 sudo service drbd start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Initialize the volume by running the following on the primary server, in this case enswitchstorage0. You can look at /proc/drbd to see the status of the initial sync:&lt;br /&gt;
&lt;br /&gt;
 sudo drbdadm -- --overwrite-data-of-peer primary all&lt;br /&gt;
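&lt;br /&gt;
To follow the synchronisation progress (optional):&lt;br /&gt;
&lt;br /&gt;
 watch -n1 cat /proc/drbd&lt;br /&gt;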
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the sync is complete, create a filesystem on /dev/drbd0:&lt;br /&gt;
&lt;br /&gt;
 sudo mkfs.ext4 /dev/drbd0&lt;br /&gt;
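&lt;br /&gt;
The filesystem can then be test-mounted on the primary. The mount point /srv/enswitch is an example, not from the original, and the volume should not be added to /etc/fstab because heartbeat will manage the mount:&lt;br /&gt;
&lt;br /&gt;
 sudo mkdir -p /srv/enswitch&lt;br /&gt;
 sudo mount /dev/drbd0 /srv/enswitch&lt;br /&gt;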
&lt;br /&gt;
&lt;br /&gt;
Install heartbeat:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install heartbeat&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure heartbeat:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
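Heartbeat configuration was not included in the original notes. The following is an illustrative sketch only; the cluster IP 10.0.0.121, the mount point /srv/enswitch, and the interface eth0 are assumptions, so adjust them to your environment. Create /etc/ha.d/ha.cf on both servers:&lt;br /&gt;
&lt;br /&gt;
 logfacility local0&lt;br /&gt;
 keepalive 2&lt;br /&gt;
 deadtime 10&lt;br /&gt;
 bcast eth0&lt;br /&gt;
 auto_failback off&lt;br /&gt;
 node enswitchstorage0&lt;br /&gt;
 node enswitchstorage1&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/haresources on both servers (a single line):&lt;br /&gt;
&lt;br /&gt;
 enswitchstorage0 drbddisk::drbd0 Filesystem::/dev/drbd0::/srv/enswitch::ext4 IPaddr::10.0.0.121/24/eth0 nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
Create /etc/ha.d/authkeys on both servers, then restrict its permissions and restart heartbeat:&lt;br /&gt;
&lt;br /&gt;
 auth 1&lt;br /&gt;
 1 sha1 secret&lt;br /&gt;
&lt;br /&gt;
 sudo chmod 600 /etc/ha.d/authkeys&lt;br /&gt;
 sudo service heartbeat restart&lt;br /&gt;
&lt;br /&gt;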
Install NFS server:&lt;br /&gt;
&lt;br /&gt;
 sudo apt-get install nfs-kernel-server&lt;br /&gt;
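&lt;br /&gt;
An /etc/exports entry for the replicated volume might look like the following; the export path /srv/enswitch and the client subnet are assumptions, not part of the original notes:&lt;br /&gt;
&lt;br /&gt;
 /srv/enswitch 10.0.0.0/24(rw,sync,no_subtree_check)&lt;br /&gt;
&lt;br /&gt;
After editing /etc/exports, re-export:&lt;br /&gt;
&lt;br /&gt;
 sudo exportfs -ra&lt;br /&gt;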
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/NFSv4Howto&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/lts/serverguide/drbd.html&lt;br /&gt;
 &lt;br /&gt;
 https://www.howtoforge.com/high-availability-nfs-with-drbd-plus-heartbeat&lt;br /&gt;
&lt;br /&gt;
 https://help.ubuntu.com/community/HighlyAvailableNFS&lt;/div&gt;</summary>
		<author><name>Danthony</name></author>
		
	</entry>
</feed>