GPFS Installation, Configuration, and Management


Contents

1 GPFS Installation, Configuration, Management, and Maintenance
  1.1 GPFS Installation
  1.2 GPFS Configuration
    1.2.1 Cluster Node Configuration
    1.2.2 GPFS NSD Disk Configuration
    1.2.3 Optimizing the GPFS Cluster Configuration
    1.2.4 Creating the GPFS File System
    1.2.5 Configuration Changes
  1.3 GPFS Management and Maintenance

1 GPFS Installation, Configuration, Management, and Maintenance

Based on the actual situation of the XX company HPIS system, this part describes the GPFS installation and configuration process, as well as GPFS management and maintenance methods.

1.1 GPFS Installation

GPFS 3.3 and its fix packs are installed with the standard AIX installp method. Example installation command:

# installp -agYXd . all

Check the installation result:

# lslpp -l | grep gpfs
  gpfs.base       3.3.0.16  COMMITTED  GPFS File Manager
  gpfs.base       3.3.0.16  COMMITTED  GPFS File Manager
  gpfs.docs.data  3.3.0.1   COMMITTED  GPFS Server Manpages and Documentation
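The same check can be run on both nodes before the cluster is configured. A minimal sketch, assuming rsh access between the nodes is already working (as the mmcrcluster step below requires); the loop and node names are illustrative only:

# for node in hpis1 hpis2; do echo "== $node =="; rsh $node "lslpp -L gpfs.base"; done

Both nodes should report the same gpfs.base level before continuing.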

1.2 GPFS Configuration

1.2.1 Cluster Node Configuration

Prepare the node list file. Node attributes are manager or client, and quorum or nonquorum:

# more /home/GPFS/nodes
hpis1:manager-quorum
hpis2:manager-quorum

Create the two-node GPFS cluster for RAC:

# mmcrcluster -N /home/GPFS/nodes -p hpis1 -s hpis2 -r /usr/bin/rsh -R /usr/bin/rcp -C cls_hpis

-p and -s designate the primary and secondary cluster configuration servers. -r /usr/bin/rsh -R /usr/bin/rcp means rsh and rcp are used for cluster administration; alternatively, ssh and scp can be used once ssh is configured.

Check the cluster configuration:

# mmlscluster
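If password-less ssh has been set up between the nodes instead of rsh, the same cluster can be created with ssh/scp as the remote commands. A sketch under that assumption, reusing the node file and cluster name from above:

# mmcrcluster -N /home/GPFS/nodes -p hpis1 -s hpis2 -r /usr/bin/ssh -R /usr/bin/scp -C cls_hpis

An existing cluster can also be switched later with mmchcluster -r /usr/bin/ssh -R /usr/bin/scp.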

1.2.2 GPFS NSD Disk Configuration

Because RAC has only two server nodes, both servers are set as quorum nodes to keep the GPFS file system highly available, and one disk from each of the two storage units plus one local disk of one server are used as tiebreaker quorum disks.

NSD preparation: create the NSD descriptor file, for example:

# more /home/GPFS/nsd
hdisk3:hpis2:descOnly:1:nsd00:
hdisk29:dataAndMetadata:2:nsd01:
hdisk30:dataAndMetadata:2:nsd02:
hdisk31:dataAndMetadata:2:nsd03:
hdisk32:dataAndMetadata:2:nsd04:
hdisk33:dataAndMetadata:2:nsd05:
hdisk59:dataAndMetadata:3:nsd06:
hdisk60:dataAndMetadata:3:nsd07:
hdisk61:dataAndMetadata:3:nsd08:
hdisk62:dataAndMetadata:3:nsd09:
hdisk63:dataAndMetadata:3:nsd10:

Notes:
1. The failure groups of the hpis2 local disk and of the two storage units are set to 1, 2 and 3 respectively.
2. The local disk hdisk3 is set to descOnly; the storage disks are set to dataAndMetadata.

Create the NSDs:

# mmcrnsd -F /home/GPFS/nsd -v yes

After the NSDs are created, /home/GPFS/nsd is rewritten, for example:

# hdisk3:descOnly:1:nsd00:
nsd00:descOnly:1:
# hdisk29:dataAndMetadata:2:nsd01:
nsd01:dataAndMetadata:2:
# hdisk30:dataAndMetadata:2:nsd02:
nsd02:dataAndMetadata:2:
# hdisk31:dataAndMetadata:2:nsd03:
nsd03:dataAndMetadata:2:
# hdisk32:dataAndMetadata:2:nsd04:
nsd04:dataAndMetadata:2:
# hdisk33:dataAndMetadata:2:nsd05:
nsd05:dataAndMetadata:2:
# hdisk59:dataAndMetadata:3:nsd06:
nsd06:dataAndMetadata:3:
# hdisk60:dataAndMetadata:3:nsd07:
nsd07:dataAndMetadata:3:
# hdisk61:dataAndMetadata:3:nsd08:
nsd08:dataAndMetadata:3:
# hdisk62:dataAndMetadata:3:nsd09:
nsd09:dataAndMetadata:3:
# hdisk63:dataAndMetadata:3:nsd10:
nsd10:dataAndMetadata:3:
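Before creating the file system, it can help to confirm that each NSD maps back to the expected hdisk on each node; mmlsnsd supports this. A brief sketch, with output omitted here since it differs per environment (flags as documented for GPFS 3.x; verify against the local mmlsnsd man page):

# mmlsnsd -m      # show the NSD-to-local-device (hdisk) mapping per node
# mmlsnsd -F      # list NSDs not yet assigned to any file system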

1.2.3 Optimizing the GPFS Cluster Configuration

Check the current cluster configuration:

# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         cls_hpis.hpis1
  GPFS cluster id:           752142207565323869
  GPFS UID domain:           cls_hpis.hpis1
  Remote shell command:      /usr/bin/rsh
  Remote file copy command:  /usr/bin/rcp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    hpis2
  Secondary server:  hpis1

 Node  Daemon node name  IP address   Admin node name  Designation
-------------------------------------------------------------------
   1   hpis1             10.1.1.90    hpis1            quorum-manager
   2   hpis2             10.1.1.91    hpis2            quorum-manager

# mmlsconfig
Configuration data for cluster cls_hpis.hpis1:
----------------------------------------------
clusterName cls_hpis.hpis1
clusterId 752142207565323869
autoload yes
minReleaseLevel 3.3.0.2
dmapiFileHandleSize 32
maxblocksize 8M
maxFilesToCache 16384
maxStatCache 65536
maxMBpS 8192
pagepool 2048M
pagepoolMaxPhysMemPct 80
tiebreakerDisks nsd00;nsd01;nsd06
failureDetectionTime 10
adminMode central

File systems in cluster cls_hpis.hpis1:
---------------------------------------
/dev/oradata

The pagepool and tiebreakerDisks parameters are the most important here.

Modify the cluster configuration with mmchconfig; some settings require the GPFS cluster to be shut down first:

# mmchconfig pagepool=3072M
# mmchconfig tiebreakerDisks="nsd00;nsd01;nsd06"
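For settings that can only be changed while the GPFS daemons are down on all nodes (tiebreakerDisks is typically such a case at this level), the change is usually wrapped in a shutdown and startup of the whole cluster. A sketch, assuming a maintenance window is available on both nodes:

# mmshutdown -a                                     # stop GPFS on all nodes
# mmchconfig tiebreakerDisks="nsd00;nsd01;nsd06"    # change the offline-only setting
# mmstartup -a                                      # start GPFS again on all nodes
# mmlsconfig | grep -i tiebreaker                   # confirm the new value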

1.2.4 Creating the GPFS File System

# mmcrfs oradata -F /home/GPFS/nsd -T /oradata -A yes -K always -B 2m -E no -m 2 -M 2 -n 32 -Q no -r 2 -R 2 -S yes -v no

Notes:
- The mount point is /oradata and the block size is 2m. The block size cannot be changed once the file system has been created; it should generally match the LUN settings on the storage side to obtain the best performance.
- -m 2 -M 2 means two copies of metadata are written; -r 2 -R 2 means two copies of data are written.
- Since the failure groups of the data disks have been set to 2 and 3, GPFS automatically balances the replicated data across the different failure groups.
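The attributes of the new file system can be reviewed afterwards with mmlsfs; a minimal sketch using the file system name created above (run without flags it prints all attributes):

# mmlsfs oradata                    # show all attributes of /dev/oradata
# mmlsfs oradata -B -m -M -r -R     # block size and replication settings only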

1.2.5 Configuration Changes

1.2.5.1 Node Changes

Cluster and file system manager roles. The current manager roles are as follows:

# mmlsmgr
file system      manager node
---------------- ------------------
oradata          10.1.1.90 (hpis1)

Cluster manager node: 10.1.1.90 (hpis1)

Change the file system or cluster manager role to hpis2:

# mmchmgr oradata hpis2
# mmchmgr -c hpis2

Add a node with the mmaddnode command, for example:

# mmaddnode -N othernode1

Change a node's quorum/designation attributes:

# mmchnode --quorum -N othernode1
# mmchnode --nonquorum -N othernode2
# mmchnode --manager -N othernode1
# mmchnode --client -N othernode1

Note: at present both RAC nodes should be set to manager, quorum. If a new server node is added, it can be set to manager, quorum; a newly added client node is recommended to be client, nonquorum.
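Putting these commands together, bringing a new client node into service usually also involves starting GPFS on it and mounting the file system; on GPFS 3.3 and later a license designation step (mmchlicense) may be required as well. A sketch, where othernode2 is the hypothetical new client from the examples above:

# mmaddnode -N othernode2:client-nonquorum    # add the node with client/nonquorum attributes
# mmstartup -N othernode2                     # start the GPFS daemon on the new node
# mmmount oradata -N othernode2               # mount /oradata on the new node
# mmgetstate -N othernode2                    # confirm the node reports active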

1.2.5.2 NSD Disk Changes

Adding NSD disks: when adding NSDs, pick disks from both storage units and add them in pairs. On AIX, lscfg -vp | grep hdisk shows which storage unit an hdisk comes from, for example:

# lscfg -vp | grep hdisk
hdisk4    U78AA.001.WZSGP8Z-P1-C4-T1-W20140080E518F286-L1000000000000  MPIO DS5020 Disk
hdisk34   U78AA.001.WZSGP8Z-P1-C4-T1-W20140080E518E3DA-L1000000000000  MPIO DS5020 Disk

The W... portion of the location code (W20140080E518F286 versus W20140080E518E3DA above) identifies which storage unit each hdisk belongs to. When adding an hdisk, make sure its failure group is set the same as the existing disks from the same storage unit.

Write the descriptor file for the NSDs to be added, for example:

# more /home/GPFS/nsd2
hdisk28:dataAndMetadata:2:nsd11:
hdisk58:dataAndMetadata:3:nsd12:

# mmcrnsd -F /home/GPFS/nsd2 -v yes
mmcrnsd: Processing disk hdisk28
mmcrnsd: Processing disk hdisk58
mmcrnsd: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

# more /home/GPFS/nsd2
# hdisk28:dataAndMetadata:2:nsd11:
nsd11:dataAndMetadata:2:
# hdisk58:dataAndMetadata:3:nsd12:
nsd12:dataAndMetadata:3:

Add the new NSDs to the file system:

# mmadddisk oradata -F /home/GPFS/nsd2

The following disks of oradata will be formatted on node hpis1:
    nsd11: size 209715200 KB
    nsd12: size 209715200 KB
Extending Allocation Map
Checking Allocation Map for storage pool system
Completed adding disks to file system oradata.
mmadddisk: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
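Adding disks does not automatically spread existing data onto them. If the goal is to even out usage across the old and new NSDs, a rebalancing restripe can be run at a quiet time, since it generates a large amount of I/O. A sketch using the file system above (the restripe is optional and not part of the original procedure):

# mmrestripefs oradata -b     # rebalance data evenly across all disks, including nsd11 and nsd12
# mmdf oradata                # check per-disk free space after the restripe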

The current NSD list:

# mmlsnsd -aL

 File system   Disk name   NSD volume ID      NSD servers
---------------------------------------------------------------------------
 oradata       nsd00       0A70255B4E8260E1   hpis2
 oradata       nsd01       0A70255B4E826051   (directly attached)
 oradata       nsd02       0A70255B4E826052   (directly attached)
 oradata       nsd03       0A70255B4E826053   (directly attached)
 oradata       nsd04       0A70255B4E826054   (directly attached)
 oradata       nsd05       0A70255B4E826055   (directly attached)
 oradata       nsd06       0A70255B4E826056   (directly attached)
 oradata       nsd07       0A70255B4E826057   (directly attached)
 oradata       nsd08       0A70255B4E826058   (directly attached)
 oradata       nsd09       0A70255B4E826059   (directly attached)
 oradata       nsd10       0A70255B4E82605A   (directly attached)
 oradata       nsd11       0A70255B4E855BFB   (directly attached)
 oradata       nsd12       0A70255B4E855BFC   (directly attached)

The current disk list of the /oradata file system:

# mmlsdisk oradata -L

disk     driver  sector failure holds    holds                            storage
name     type    size   group   metadata data  status  availability disk id pool    remarks
-------- ------- ------ ------- -------- ----- ------- ------------ ------- ------- -------
nsd00    nsd     512    1       no       no    ready   up           1       system  desc
nsd01    nsd     512    2       yes      yes   ready   up           2       system  desc
nsd02    nsd     512    2       yes      yes   ready   up           3       system
nsd03    nsd     512    2       yes      yes   ready   up           4       system
nsd04    nsd     512    2       yes      yes   ready   up           5       system
nsd05    nsd     512    2       yes      yes   ready   up           6       system
nsd06    nsd     512    3       yes      yes   ready   up           7       system  desc
nsd07    nsd     512    3       yes      yes   ready   up           8       system
nsd08    nsd     512    3       yes      yes   ready   up           9       system
nsd09    nsd     512    3       yes      yes   ready   up           10      system
nsd10    nsd     512    3       yes      yes   ready   up           11      system
nsd11    nsd     512    2       yes      yes   ready   up           12      system
nsd12    nsd     512    3       yes      yes   ready   up           13      system
Number of quorum disks: 3
Read quorum value:  2
Write quorum value: 2

The current size and status of the /oradata file system:

# mmdf oradata
disk                disk size  failure holds    holds              free KB              free KB
name                    in KB    group metadata data       in full blocks         in fragments
--------------- ------------- -------- -------- ----- -------------------- --------------------
Disks in storage pool: system (Maximum disk size allowed is 4.5 TB)
nsd00               143373952        1 no       no             0 (  0%)              0 ( 0%)
nsd01               209715200        2 yes      yes    208898048 (100%)           5248 ( 0%)
nsd02               209715200        2 yes      yes    208902144 (100%)           4352 ( 0%)
nsd03               209715200        2 yes      yes    208889856 (100%)           5888 ( 0%)
nsd04               209715200        2 yes      yes    208887808 (100%)           4416 ( 0%)
nsd05               209715200        2 yes      yes    208896000 (100%)           5632 ( 0%)
nsd11               209715200        2 yes      yes    209711104 (100%)           1984 ( 0%)
nsd07               209715200        3 yes      yes    208902144 (100%)           6400 ( 0%)
nsd08               209715200        3 yes      yes    208891904 (100%)           4544 ( 0%)
nsd09               209715200        3 yes      yes    208885760 (100%)           3072 ( 0%)
nsd10               209715200        3 yes      yes    208893952 (100%)           7872 ( 0%)
nsd06               209715200        3 yes      yes    208900096 (100%)           3648 ( 0%)
nsd12               209715200        3 yes      yes    209711104 (100%)           1984 ( 0%)
                -------------                         -------------------- --------------------
(pool total)       2659956352                         2508369920 ( 94%)          55040 ( 0%)
                =============                         ==================== ====================
(total)            2659956352                         2508369920 ( 94%)          55040 ( 0%)

Inode Information
-----------------
Number of used inodes:             4095
Number of free inodes:           528385
Number of allocated inodes:      532480
Maximum number of inodes:       2188019

1.3 GPFS Management and Maintenance

Start GPFS on all nodes / a single node:

mmstartup -a
mmstartup -N hpis1
mmstartup -N hpis2

Mount the file systems on all nodes / a single node:

mmmount all -a
mmmount all

Unmount the file systems on all nodes / a single node:

mmumount all -a
mmumount all

Stop GPFS on all nodes / a single node:

mmshutdown -a
mmshutdown -N hpis1
mmshutdown -N hpis2

Check the GPFS state:

# mmgetstate -Las

 Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state   Remarks
------------------------------------------------------------------------------
      1       hpis1      1*      2         2            active       quorum node
      2       hpis2      1*      2         2            active       quorum node

 Summary information
---------------------
Number of nodes defined in the cluster:          2
Number of local nodes active in the cluster:     2
Number of remote nodes joined in this cluster:   0
Number of quorum nodes defined in the cluster:   2
Number of quorum nodes active in the cluster:    2
Quorum = 1*, Quorum achieved

Check the file system mount state:

# mmlsmount all -L
File system oradata is mounted on 2 nodes:
  10.1.1.90   hpis1
  10.1.1.91   hpis2

Monitor GPFS file system I/O with mmpmon, for example:

mmpmon -i /home/mon_gpfs -d 2000 -r 1000 -s -t 60
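mmpmon reads its requests from the file given with -i. A minimal sketch of such an input file, assuming /home/mon_gpfs is the file referenced above; fs_io_s and io_s are standard mmpmon requests for per-file-system and aggregate node I/O statistics:

# cat /home/mon_gpfs
fs_io_s
io_s

With the options shown above, the request list is replayed repeatedly (-d sets the delay between passes in milliseconds, -r the number of passes), which gives a simple way to watch read/write counts and bytes over time.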

Manually resynchronizing the file system after a storage failure is repaired:

After a storage unit fails, the NSDs on the failed storage change to the down state. Example mmlsdisk output:

# mmlsdisk oradata -L

disk     driver  sector failure holds    holds                            storage
name     type    size   group   metadata data  status  availability disk id pool    remarks
-------- ------- ------ ------- -------- ----- ------- ------------ ------- ------- -------
nsd00    nsd     512    1       no       no    ready   up           1       system  desc
nsd01    nsd     512    2       yes      yes   ready   up           2       system  desc
nsd02    nsd     512    2       yes      yes   ready   up           3       system  desc
nsd03    nsd     512    2       yes      yes   ready   up           4       system
nsd04    nsd     512    2       yes      yes   ready   up           5       system
nsd05    nsd     512    2       yes      yes   ready   up           6       system
nsd06    nsd     512    3       yes      yes   ready   down         7       system
nsd07    nsd     512    3       yes      yes   ready   down         8       system
nsd08    nsd     512    3       yes      yes   ready   down         9       system
nsd09    nsd     512    3       yes      yes   ready   down         10      system
nsd10    nsd     512    3       yes      yes   ready   down         11      system
Number of quorum disks: 3
Read quorum value:  2
Write quorum value: 2

After the failed storage is restored, each disk that is in the down state can be manually resynchronized at an appropriate time with the following commands:

# mmchdisk oradata start -d nsd06
# mmchdisk oradata start -d nsd07
# mmchdisk oradata start -d nsd08
# mmchdisk oradata start -d nsd09
# mmchdisk oradata start -d nsd10
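The same resynchronization can also be done in one invocation by passing a semicolon-separated disk list to -d (quoted so the shell does not interpret the semicolons); a sketch with the same disks:

# mmchdisk oradata start -d "nsd06;nsd07;nsd08;nsd09;nsd10"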

After the resynchronization completes, the result looks like this:

# mmlsdisk oradata -L

disk     driver  sector failure holds    holds                            storage
name     type    size   group   metadata data  status  availability disk id pool    remarks
-------- ------- ------ ------- -------- ----- ------- ------------ ------- ------- -------
nsd00    nsd     512    1       no       no    ready   up           1       system  desc
nsd01    nsd     512    2       yes      yes   ready   up           2       system  desc
nsd02    nsd     512    2       yes      yes   ready   up           3       system
nsd03    nsd     512    2       yes      yes   ready   up           4       system
nsd04    nsd     512    2       yes      yes   ready   up           5       system
nsd05    nsd     512    2       yes      yes   ready   up           6       system
nsd06    nsd     512    3       yes      yes   ready   up           7       system  desc
nsd07    nsd     512    3       yes      yes   ready   up           8       system
nsd08    nsd     512    3       yes      yes   ready   up           9       system
nsd09    nsd     512    3       yes      yes   ready   up           10      system
nsd10    nsd     512    3       yes      yes   ready   up           11      system
Number of quorum disks: 3
Read quorum value:  2
Write quorum value: 2
