LinuxEye - Linux Tutorials

Highly Available, Multipath-Redundant GFS2 Cluster Filesystem: Detailed Configuration (Part 2)

Date: 2013-06-09 18:01  Source: 51CTO  Editor: 凌激冰

4. Install the storage management software and export the disk
[root@storage1 ~]# fdisk /dev/sda  # create a 2 GB logical partition to export
Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
e
Selected partition 4
First cylinder (1562-2610, default 1562):
Using default value 1562
Last cylinder, +cylinders or +size{K,M,G} (1562-2610, default 2610): +4G
Command (m for help): n
First cylinder (1562-2084, default 1562):
Using default value 1562
Last cylinder, +cylinders or +size{K,M,G} (1562-2084, default 2084): +2G
Command (m for help): w
……
[root@storage1 ~]# partx -a /dev/sda
[root@storage1 ~]# ls /dev/sda*
sda   sda1  sda2  sda3  sda4  sda5
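The fdisk dialogue above is interactive. For repeatable setups the same partitioning can be scripted with sfdisk; the sketch below is an illustration run against a throwaway image file rather than the real /dev/sda, and the image name and sizes are placeholders, not values from the article.

```shell
# Scripted partitioning demo on a scratch image (no real disk is touched);
# disk.img and the 50 MiB size are placeholders for illustration only.
truncate -s 100M disk.img                         # sparse 100 MB image
printf 'size=50MiB, type=83\n' | sfdisk disk.img  # one 50 MiB Linux partition
sfdisk -d disk.img                                # dump the partition table
```

On a real host you would point sfdisk at the block device and follow up with `partx -a`, as the transcript above does.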
[root@storage1 ~]# yum install scsi-target-utils -y  # install the iSCSI target (server-side) tools
[root@storage1 ~]# vim /etc/tgt/targets.conf  # describe the disk to be exported
<target iqn.2013.05.org.rsyslog:storage1.sda5>
<backing-store /dev/sda5>
scsi_id storage1_id
scsi_sn storage1_sn
</backing-store>
incominguser xiaonuo 081ac67e74a6bb13b7a22b8a89e7177b  # require CHAP username/password for access
initiator-address 192.168.100.173  # allow access only from these initiator IPs
initiator-address 192.168.100.174
initiator-address 192.168.100.175
initiator-address 192.168.100.176
initiator-address 192.168.200.173
initiator-address 192.168.200.174
initiator-address 192.168.200.175
initiator-address 192.168.200.176
</target>
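The incominguser secret above is a 32-character hex string. How the original author generated it is not stated; one plausible way to produce such a token is sketched below.

```shell
# Generate a random 32-hex-character CHAP secret (illustrative only; any
# sufficiently random string is acceptable as the tgt incominguser password).
secret=$(head -c 16 /dev/urandom | md5sum | awk '{print $1}')
echo "$secret"
```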
[root@storage1 ~]# /etc/rc.d/init.d/tgtd start  && chkconfig tgtd on
[root@storage1 ~]# tgtadm --lld iscsi --mode target --op show  # verify the export succeeded
Target 1: iqn.2013.05.org.rsyslog:storage1.sda5
……
LUN: 1
Type: disk
SCSI ID: storage1_id
SCSI SN: storage1_sn
Size: 2151 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/sda5
Backing store flags:
Account information:
xiaonuo
ACL information:
192.168.100.173
192.168.100.174
192.168.100.175
192.168.100.176
192.168.200.173
192.168.200.174
192.168.200.175
192.168.200.176
[root@manager ~]# for i in {1..3}; do ssh node$i "yum -y install iscsi-initiator-utils"; done  # install the iSCSI initiator on every node
[root@node1 ~]# vim /etc/iscsi/iscsid.conf  # on every node, add the following three lines for the CHAP credentials
node.session.auth.authmethod = CHAP
node.session.auth.username = xiaonuo
node.session.auth.password = 081ac67e74a6bb13b7a22b8a89e7177b
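The same three lines have to land in /etc/iscsi/iscsid.conf on node1 through node3. A sketch of scripting that append is shown below against a scratch copy; on the real cluster the same heredoc would be run over ssh in the for-loop style this article already uses.

```shell
# Append the CHAP settings to a scratch copy of iscsid.conf (real path:
# /etc/iscsi/iscsid.conf on each node; a /tmp copy is used here for illustration).
cfg=/tmp/iscsid.conf.demo
: > "$cfg"
cat >> "$cfg" <<'EOF'
node.session.auth.authmethod = CHAP
node.session.auth.username = xiaonuo
node.session.auth.password = 081ac67e74a6bb13b7a22b8a89e7177b
EOF
grep -c '^node.session.auth' "$cfg"   # prints 3
```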
[root@manager ~]# for i in {1..3}; do ssh node$i "iscsiadm -m discovery -t st -p 192.168.100.171"; done  # discover the shared device
192.168.100.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
192.168.100.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
192.168.100.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
[root@manager ~]# for i in {1..3}; do ssh node$i "iscsiadm -m discovery -t st -p 192.168.200.171"; done
192.168.200.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
192.168.200.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
192.168.200.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
[root@manager ~]# for i in {1..3}; do ssh node$i "iscsiadm -m node -l"; done  # log in to the shared iSCSI device
Logging in to [iface: default, target: iqn.2013.05.org.rsyslog:storage1.sda5, portal: 192.168.200.171,3260] (multiple)
Logging in to [iface: default, target: iqn.2013.05.org.rsyslog:storage1.sda5, portal: 192.168.100.171,3260] (multiple)
Login to [iface: default, target: iqn.2013.05.org.rsyslog:storage1.sda5, portal: 192.168.200.171,3260] successful.
Login to [iface: default, target: iqn.2013.05.org.rsyslog:storage1.sda5, portal: 192.168.100.171,3260] successful.
……
[root@storage1 ~]# tgtadm --lld iscsi --op show --mode conn --tid 1  # check active connections on the target server
Session: 12
Connection: 0
Initiator: iqn.1994-05.com.redhat:a12e282371a1
IP Address: 192.168.200.175
Session: 11
Connection: 0
Initiator: iqn.1994-05.com.redhat:a12e282371a1
IP Address: 192.168.100.175
……
[root@node1 ~]# netstat -nlatp | grep 3260
tcp        0      0 192.168.200.173:37946       192.168.200.171:3260        ESTABLISHED 37565/iscsid
tcp        0      0 192.168.100.173:54306       192.168.100.171:3260        ESTABLISHED 37565/iscsid
[root@node1 ~]# ls /dev/sd*  # two new iSCSI devices now appear on every node
sda sda1 sda2 sda3 sdb sdc

5. Install and configure multipath for redundant I/O paths
[root@manager ~]# for i in {1..3}; do ssh node$i "yum -y install device-mapper-*"; done
[root@manager ~]# for i in {1..3}; do ssh node$i "mpathconf --enable"; done  # generate the configuration file
[root@node1 ~]# /sbin/scsi_id -g -u /dev/sdb  # look up the WWID of the imported device
1storage1_id
[root@node1 ~]# /sbin/scsi_id -g -u /dev/sdc
1storage1_id
[root@node1 ~]# vim /etc/multipath.conf
multipaths {
multipath {
wwid                    1storage1_id  # WWID of the exported device
alias                   iscsi1        # friendly name for the multipath device
path_grouping_policy    multibus
path_selector           "round-robin 0"
failback                manual
rr_weight               priorities
no_path_retry           5
}
}
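The wwid value in the stanza must match what scsi_id reported. One way to avoid transcription mistakes is to generate the stanza from a variable; this is a sketch under the assumption that the WWID has already been captured (on a real node, `wwid=$(/sbin/scsi_id -g -u /dev/sdb)`):

```shell
# Generate the multipaths stanza from a captured WWID; the hard-coded value
# stands in for the scsi_id output shown above.
wwid=1storage1_id
cat > /tmp/multipath-snippet.conf <<EOF
multipaths {
    multipath {
        wwid                 $wwid
        alias                iscsi1
        path_grouping_policy multibus
        path_selector        "round-robin 0"
        failback             manual
        rr_weight            priorities
        no_path_retry        5
    }
}
EOF
grep 'wwid' /tmp/multipath-snippet.conf
```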
[root@node1 ~]# /etc/rc.d/init.d/multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@node1 ~]# ll /dev/mapper/iscsi1
lrwxrwxrwx 1 root root 7 Jun  7 23:58 /dev/mapper/iscsi1 -> ../dm-0
[root@node1 ~]# multipath -ll  # verify the multipath binding
iscsi1 (1storage1_id) dm-0 IET,VIRTUAL-DISK
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 20:0:0:1 sdb 8:16 active ready running
`- 19:0:0:1 sdc 8:32 active ready running
……  # the other two nodes are configured the same way

6. Create a clustered LVM logical volume on the nodes and create the GFS2 cluster filesystem
[root@node1 ~]# pvcreate /dev/mapper/iscsi1  # turn the multipath device into a PV
Writing physical volume data to disk "/dev/mapper/iscsi1"
Physical volume "/dev/mapper/iscsi1" successfully created
[root@node1 ~]# vgcreate cvg0 /dev/mapper/iscsi1  # create the VG
Clustered volume group "cvg0" successfully created
[root@node1 ~]# lvcreate -L +1G cvg0 -n clv0  # create a 1 GB LV
Logical volume "clv0" created
[root@node1 ~]# lvs  # view the LV from node1
LV   VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
clv0 cvg0 -wi-a--- 1.00g
[root@node2 ~]# lvs  # view the LV from node2
LV   VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
clv0 cvg0 -wi-a--- 1.00g
[root@manager ~]# for i in {1..3}; do ssh node$i "lvmconf --enable-cluster"; done  # enable DLM cluster locking; if "Enable Shared Storage Support" was checked during the web-based setup, this is already on
[root@node2 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t rsyslog:web /dev/cvg0/clv0  # create the GFS2 cluster filesystem with 3 journals (one per node) and the lock_dlm protocol
This will destroy any data on /dev/cvg0/clv0.
It appears to contain: symbolic link to `../dm-1'
Are you sure you want to proceed? [y/n] y
Device:                    /dev/cvg0/clv0
Blocksize:                 4096
Device Size                1.00 GB (262144 blocks)
Filesystem Size:           1.00 GB (262142 blocks)
Journals:                  3
Resource Groups:           4
Locking Protocol:          "lock_dlm"
Lock Table:                "rsyslog:web"
UUID:                      7c293387-b59a-1105-cb26-4ffc41b5ae3b
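The value passed to `-t` above has the form `<clustername>:<fsname>`; the cluster-name part must match the cluster name configured in cluster.conf (here `rsyslog`), or mounting will fail. The split can be checked with plain shell parameter expansion:

```shell
# Split the lock table name used above into its two parts.
table="rsyslog:web"
cluster=${table%%:*}   # clustername, must match cluster.conf
fsname=${table#*:}     # filesystem name, unique within the cluster
echo "cluster=$cluster fs=$fsname"   # prints: cluster=rsyslog fs=web
```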

7. Create a mirror of storage1 on storage2 for backup and high availability
1) Create an iSCSI export on storage2 of the same size as storage1 (2 GB), and configure targets.conf
[root@storage2 ~]# vim /etc/tgt/targets.conf
<target iqn.2013.05.org.rsyslog:storage2.sda5>
<backing-store /dev/sda5>
scsi_id storage2_id
scsi_sn storage2_sn
</backing-store>
incominguser xiaonuo 081ac67e74a6bb13b7a22b8a89e7177b
initiator-address 192.168.100.173
initiator-address 192.168.100.174
initiator-address 192.168.100.175
initiator-address 192.168.100.176
initiator-address 192.168.200.173
initiator-address 192.168.200.174
initiator-address 192.168.200.175
initiator-address 192.168.200.176
</target>

2) Import the storage2 device on each node
[root@manager ~]# for i in {1..3}; do ssh node$i "iscsiadm -m discovery -t st -p 192.168.100.172"; done
192.168.100.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
192.168.100.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
192.168.100.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
[root@manager ~]# for i in {1..3}; do ssh node$i "iscsiadm -m discovery -t st -p 192.168.200.172"; done
192.168.200.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
192.168.200.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
192.168.200.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
[root@manager ~]# for i in {1..3}; do ssh node$i "iscsiadm -m node -l"; done
Logging in to [iface: default, target: iqn.2013.05.org.rsyslog:storage2.sda5, portal: 192.168.100.172,3260] (multiple)
Logging in to [iface: default, target: iqn.2013.05.org.rsyslog:storage2.sda5, portal: 192.168.200.172,3260] (multiple)
Login to [iface: default, target: iqn.2013.05.org.rsyslog:storage2.sda5, portal: 192.168.100.172,3260] successful.
Login to [iface: default, target: iqn.2013.05.org.rsyslog:storage2.sda5, portal: 192.168.200.172,3260] successful.

3) Configure multipath
[root@node1 ~]# ls /dev/sd*
sda   sda1  sda2  sda3  sdb   sdc   sdd   sde
[root@node1 ~]# /sbin/scsi_id -g -u /dev/sdd
1storage2_id
[root@node1 ~]# /sbin/scsi_id -g -u /dev/sde
1storage2_id
[root@node1 ~]# vim /etc/multipath.conf  # the other two nodes are configured the same way
multipaths {
multipath {
wwid                    1storage1_id
alias                   iscsi1
path_grouping_policy    multibus
path_selector           "round-robin 0"
failback                manual
rr_weight               priorities
no_path_retry           5
}
multipath {
wwid                    1storage2_id
alias                   iscsi2
path_grouping_policy    multibus
path_selector           "round-robin 0"
failback                manual
rr_weight               priorities
no_path_retry           5
}
}
[root@node1 ~]# /etc/rc.d/init.d/multipathd reload
Reloading multipathd:                                      [  OK  ]
[root@node1 ~]# multipath -ll
iscsi2 (1storage2_id) dm-2 IET,VIRTUAL-DISK
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 21:0:0:1 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 22:0:0:1 sdd 8:48 active ready running
iscsi1 (1storage1_id) dm-0 IET,VIRTUAL-DISK
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 20:0:0:1 sdb 8:16 active ready running
`- 19:0:0:1 sdc 8:32 active ready running
4) Add the new iSCSI device to volume group cvg0
[root@node3 ~]# pvcreate /dev/mapper/iscsi2
Writing physical volume data to disk "/dev/mapper/iscsi2"
Physical volume "/dev/mapper/iscsi2" successfully created
[root@node3 ~]# vgextend cvg0 /dev/mapper/iscsi2
Volume group "cvg0" successfully extended
[root@node3 ~]# vgs
VG   #PV #LV #SN Attr   VSize VFree
cvg0   2   1   0 wz--nc 4.00g 3.00g

5) Install cmirror and, on the nodes, create a mirror of storage1 on storage2
[root@manager ~]# for i in {1..3}; do ssh node$i "yum install cmirror -y"; done
[root@manager ~]# for i in {1..3}; do ssh node$i "/etc/rc.d/init.d/cmirrord start && chkconfig cmirrord on"; done
[root@node3 ~]# dmsetup ls --tree  # state before the mirror is created
iscsi2 (253:2)
├─ (8:48)
└─ (8:64)
cvg0-clv0 (253:1)
└─iscsi1 (253:0)
├─ (8:32)
└─ (8:16)
[root@node3 ~]# lvs  # state before the mirror is created
LV   VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
clv0 cvg0 -wi-a--- 1.00g
[root@node3 ~]# lvconvert -m 1 /dev/cvg0/clv0 /dev/mapper/iscsi1 /dev/mapper/iscsi2  # mirror the existing LV; the progress output below shows the data being copied
cvg0/clv0: Converted: 0.4%
cvg0/clv0: Converted: 10.9%
cvg0/clv0: Converted: 18.4%
cvg0/clv0: Converted: 28.1%
cvg0/clv0: Converted: 42.6%
cvg0/clv0: Converted: 56.6%
cvg0/clv0: Converted: 70.3%
cvg0/clv0: Converted: 85.9%
cvg0/clv0: Converted: 100.0%
[root@node2 ~]# lvs  # during conversion, clv0 on storage1 is copying its contents to storage2
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi-a-m- 1.00g                         clv0_mlog   6.64
[root@node2 ~]# lvs
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi-a-m- 1.00g                         clv0_mlog 100.00
[root@node3 ~]# dmsetup ls --tree  # the exported iSCSI devices are now in a mirror configuration
cvg0-clv0 (253:1)
├─cvg0-clv0_mimage_1 (253:5)
│  └─iscsi2 (253:2)
│     ├─ (8:48)
│     └─ (8:64)
├─cvg0-clv0_mimage_0 (253:4)
│  └─iscsi1 (253:0)
│     ├─ (8:32)
│     └─ (8:16)
└─cvg0-clv0_mlog (253:3)
└─iscsi2 (253:2)
├─ (8:48)
└─ (8:64)

When reprinting, please keep the permanent link: https://linuxeye.com/configuration/1740.html
