NFS + Heartbeat + DRBD High-Availability Architecture (3)

Published: 2015-01-05 22:53 | Editor: linuxeye

V. NFS Installation and Deployment
As before, only M1 is shown as the example; perform the same operations on M2.
1. Install NFS
[root@M1 drbd]# yum install nfs-utils rpcbind -y
[root@M2 ~]# yum install nfs-utils rpcbind -y
2. Configure the NFS shared directory
[root@M1 drbd]# cat /etc/exports 
/data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)
[root@M2 ~]# cat /etc/exports 
/data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)
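For reference, the export options used above have the following meaning (an annotated copy of the same line; these are standard NFS export options):
# rw              - clients may read and write
# sync            - the server replies only after data has been committed to disk
# no_root_squash  - root on a client keeps root privileges on the share
# anonuid/anongid - UID/GID applied to anonymous requests (0 = root here)
/data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)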
3. Start the rpcbind and nfs services
[root@M1 drbd]# /etc/init.d/rpcbind start;chkconfig rpcbind off 
[root@M1 drbd]# /etc/init.d/nfs start;chkconfig nfs off 
Starting NFS services: [ OK ] 
Starting NFS quotas: [ OK ] 
Starting NFS mountd: [ OK ] 
Starting NFS daemon: [ OK ] 
Starting RPC idmapd: [ OK ]
[root@M2 drbd]# /etc/init.d/rpcbind start;chkconfig rpcbind off 
[root@M2 drbd]# /etc/init.d/nfs start;chkconfig nfs off 
Starting NFS services: [ OK ] 
Starting NFS quotas: [ OK ] 
Starting NFS mountd: [ OK ] 
Starting NFS daemon: [ OK ] 
Starting RPC idmapd: [ OK ]
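Once rpcbind and nfs are running, it is worth confirming that the NFS programs are registered with rpcbind and that the export is visible. A quick check might look like this (ports and output vary by system; abbreviated here):
[root@M1 ~]# rpcinfo -p | egrep 'nfs|mountd'     # nfs and mountd registered with rpcbind
    100003    3   tcp   2049  nfs
    100005    3   tcp  20048  mountd
[root@M1 ~]# showmount -e localhost              # export list served by mountd
Export list for localhost:
/data 192.168.0.0/24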
4. Test NFS
[root@C1 ~] # mount -t nfs -o noatime,nodiratime 192.168.0.219:/data /data/
[root@C1 ~] # df -h|grep data 
192.168.0.219:/data 126G 1.1G 118G 1% /data
[root@C1 ~] # cd /data
[root@C1 data] # ls 
lost+found test 
[root@C1 data] # echo 'nolinux' >> nihao
[root@C1 data] # ls 
lost+found nihao test
[root@C1 data] # cat nihao 
nolinux

VI. Integrating Heartbeat, DRBD and NFS
Note: the heartbeat files and scripts modified below must be kept identical on both M1 and M2!

1. Modify the heartbeat resource definition file
Edit heartbeat's resource definition file and add automatic management of the DRBD service, the filesystem mount, and the NFS service. The result looks like this:
[root@M1 ~]# cat /etc/ha.d/haresources
M1.redhat.sx IPaddr::192.168.0.219/24/em1 drbddisk::drbd Filesystem::/dev/drbd0::/data::ext4 nfsd
Note that the IPaddr and drbddisk entries used here are scripts that live under /etc/ha.d/resource.d/; that directory ships with many service-management scripts for heartbeat to call. The trailing nfsd, however, is not included with heartbeat by default, so the script is provided below.
[root@M1 /]# vim /etc/ha.d/resource.d/nfsd
#!/bin/bash
#
# Minimal heartbeat resource script; heartbeat invokes it with "start" or "stop".
case $1 in
start)
    # (Re)start the NFS server through the system init script
    /etc/init.d/nfs restart
    ;;
stop)
    # Kill every NFS-related daemon outright so the resource is fully released
    for proc in rpc.mountd rpc.rquotad nfsd; do
        killall -9 $proc
    done
    ;;
esac
[root@M1 /]# chmod 755 /etc/ha.d/resource.d/nfsd
The system does ship with an NFS init script, but when heartbeat calls it the NFS processes are not killed off completely, which is why we need this small script of our own.
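Since heartbeat calls the scripts in /etc/ha.d/resource.d/ with start or stop as the last argument, the nfsd script above can also be exercised by hand before it is wired into haresources (a quick sanity test, assuming the paths used in this article):
[root@M1 ~]# /etc/ha.d/resource.d/nfsd stop      # kills mountd, rquotad and the nfsd threads
[root@M1 ~]# /etc/ha.d/resource.d/nfsd start     # runs /etc/init.d/nfs restart
[root@M1 ~]# rpcinfo -p | grep nfs               # confirm nfsd is registered again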

2. Restart heartbeat and bring up the highly available NFS service
Perform the following steps in order!
[root@M1 ~]# /etc/init.d/heartbeat stop 
Stopping High-Availability services: 
Done. 
[root@M2 ~]# /etc/init.d/heartbeat stop 
Stopping High-Availability services: 
Done. 
[root@M1 ~]# /etc/init.d/heartbeat start 
Starting High-Availability services: INFO: Resource is stopped 
Done.
[root@M2 ~]# /etc/init.d/heartbeat start 
Starting High-Availability services: INFO: Resource is stopped 
Done.
[root@M1 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.210/24 brd 192.168.0.255 scope global em1 
inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M2 ~]# ip a |grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M1 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26 
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- 
ns:24936 nr:13016 dw:37920 dr:17307 al:15 bm:5 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@M2 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08 
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----- 
ns:84 nr:24 dw:37896 dr:10589 al:14 bm:5 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Mount test from client C1:
[root@C1 ~] # mount 192.168.0.219:/data /data 
[root@C1 ~] # df -h |grep data
192.168.0.219:/data 126G 60M 119G 1% /data
OK. The C1 client can successfully mount, through the VIP, the NFS share exported by the highly available NFS storage.
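In the failover tests below, the long ip and /proc/drbd listings can be condensed into a two-line check on each node, which makes it easy to see at a glance which node currently owns the VIP and the DRBD Primary role (a small convenience, using the VIP and resource name defined earlier):
[root@M1 ~]# ip addr show em1 | grep -q 192.168.0.219 && echo 'VIP is here' || echo 'VIP not here'
VIP is here
[root@M1 ~]# drbdadm role drbd                   # local/peer role of the resource named "drbd"
Primary/Secondary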

3. Testing
Now the NFS high-availability cluster is put to the test to see whether the service fails over properly when a fault occurs.
a. Test whether NFS keeps working after the heartbeat service is stopped
State of M1 before heartbeat is stopped on M1:
[root@M1 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.210/24 brd 192.168.0.255 scope global em1 
inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1 
[root@M1 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26 
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- 
ns:8803768 nr:3736832 dw:12540596 dr:5252 al:2578 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 before heartbeat is stopped on M1:
[root@M2 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.211/24 brd 192.168.0.255 scope global em1 
[root@M2 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08 
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----- 
ns:4014352 nr:11417156 dw:15431508 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Stop the heartbeat service on M1:
[root@M1 ~]# /etc/init.d/heartbeat stop 
Stopping High-Availability services: Done.
State of M1 after heartbeat is stopped on M1:
[root@M1 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.210/24 brd 192.168.0.255 scope global em1 
[root@M1 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26 
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----- 
ns:11417152 nr:4014300 dw:15431448 dr:7037 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 after heartbeat is stopped on M1:
[root@M2 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.211/24 brd 192.168.0.255 scope global em1 
inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1 
[root@M2 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08 
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- 
ns:4014300 nr:11417152 dw:15431452 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Restore the heartbeat service on M1 and see whether the resources fail back from M2.
Restore heartbeat on M1:
[root@M1 ~]# /etc/init.d/heartbeat start 
Starting High-Availability services: INFO: Resource is stopped 
Done.
State of M1 after heartbeat is restored on M1:
[root@M1 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.210/24 brd 192.168.0.255 scope global em1 
inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1 
[root@M1 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26 
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- 
ns:11417156 nr:4014352 dw:15431504 dr:7874 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 after heartbeat is restored on M1:
[root@M2 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.211/24 brd 192.168.0.255 scope global em1 
[root@M2 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08 
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----- 
ns:4014352 nr:11417156 dw:15431508 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Impact of the NFS switchover as seen from client C1:
[root@C1 ~] #  for i in `seq 1 10000`;do dd if=/dev/zero of=/data/test$i bs=10M count=1;stat /data/test$i|grep 'Access: 2014';done   # only part of the output is shown here
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 15.1816 s, 691 kB/s
Access: 2014-11-12 23:26:15.945546803 +0800
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.20511 s, 51.1 MB/s
Access: 2014-11-12 23:28:11.687931979 +0800
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.20316 s, 51.6 MB/s
Access: 2014-11-12 23:28:11.900936657 +0800
Note: from observation, NFS recovery after a switchover takes about two minutes. Many approaches were tried; this problem remains unsolved so far!
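The delay comes from the server side (the RPC services re-registering behind the VIP), so it cannot be eliminated from the client; client-side mount options can, however, keep applications from hanging indefinitely while the switchover is in progress. The following is only a partial mitigation, not a fix, and the option values are illustrative:
[root@C1 ~] # mount -t nfs -o soft,timeo=30,retrans=3,noatime,nodiratime 192.168.0.219:/data /data
# soft + short timeo/retrans: I/O returns an error instead of blocking forever during failover,
# at the cost of possible I/O errors for writes that are in flight at that moment.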

b. Test whether NFS keeps working after the network, other than the heartbeat link, goes down
State of M1 before its em1 interface is brought down:
[root@M1 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.210/24 brd 192.168.0.255 scope global em1 
inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1 
[root@M1 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26 
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- 
ns:11417156 nr:4014352 dw:15431504 dr:7874 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Bring down the em1 interface on M1:
[root@M1 ~]# ifdown em1
State of M1 after its em1 interface is down (reached from M2 over the heartbeat link via SSH):
[root@M1 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
[root@M1 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26 
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----- 
ns:11993288 nr:4024660 dw:16017944 dr:8890 al:3222 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 after M1's em1 interface is down:
[root@M2 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.211/24 brd 192.168.0.255 scope global em1 
inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1 
[root@M2 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08 
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- 
ns:4024620 nr:11993288 dw:16017908 dr:7090 al:1171 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Bring the em1 interface on M1 back up:
[root@M1 ~]# ifup em1
State of M1 after its em1 interface is back up:
[root@M1 ~]# ip a |grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.210/24 brd 192.168.0.255 scope global em1 
inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1 
[root@M1 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26 
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- 
ns:11993292 nr:4024680 dw:16017968 dr:9727 al:3222 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 after M1's em1 interface is back up:
[root@M2 ~]# ip a|grep em1 
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
inet 192.168.0.211/24 brd 192.168.0.255 scope global em1 
[root@M2 ~]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101) 
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08 
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----- 
ns:4024680 nr:11993292 dw:16017972 dr:7102 al:1171 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
During later testing it became clear that because NFS communicates via RPC and is therefore tied to the rpcbind mechanism, NFS clients experience a delay of one to two minutes after the NFS server fails over. With clients writing heavily the delay can be even longer, and even with no client writes it still takes over a minute. For that reason this architecture was eventually abandoned. How have fellow bloggers on 51CTO solved this client-side delay caused by an NFS server switchover?
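One approach sometimes used with DRBD-backed NFS (not tested in this article, offered only as a suggestion) is to move /var/lib/nfs onto the replicated volume so that the NFS state files follow whichever node is Primary, which may shorten what clients experience after a switchover. A rough sketch under that assumption, with /dev/drbd0 mounted on /data on the Primary:
[root@M1 ~]# mv /var/lib/nfs /data/varlibnfs          # move the NFS state onto the DRBD volume
[root@M1 ~]# ln -s /data/varlibnfs /var/lib/nfs       # the Primary now uses the replicated copy
[root@M2 ~]# rm -rf /var/lib/nfs && ln -s /data/varlibnfs /var/lib/nfs   # same symlink on the peer
# The symlink on M2 only resolves while M2 is Primary and /data is mounted there.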
Original article: http://nolinux.blog.51cto.com/4824967/1591739

Please keep this permanent link when reposting: https://linuxeye.com/architecture/2045.html

Tags: heartbeat, nfs, drbd