GPFS File System
Basic Configuration
GPFS NSD disks backing the devices on the I/O nodes
[root@io01 ~]# mmlsnsd -m

 Disk name    NSD volume ID      Device             Node name or Class    Remarks
---------------------------------------------------------------------------------------
 nsd01        0A000D6965B4F0F6   /dev/mapper/nsd01  io01                  server node
 nsd01        0A000D6965B4F0F6   /dev/mapper/nsd01  io02                  server node
 nsd02        0A000D6A65B4F0F8   /dev/mapper/nsd02  io01                  server node
 nsd02        0A000D6A65B4F0F8   /dev/mapper/nsd02  io02                  server node
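The NSD-to-device mapping above can also be pulled out in scripts. A minimal sketch, assuming the table layout shown here: it prints each NSD with its backing device, deduplicated across server nodes, and is fed sample mmlsnsd -m output through a here-document where in practice you would pipe the live command.

```shell
#!/bin/sh
# Print "NSD device" pairs from `mmlsnsd -m`-style output.
# NR > 2 skips the header and separator lines; NF >= 3 skips blanks.
nsd_devices() {
  awk 'NR > 2 && NF >= 3 { print $1, $3 }' | sort -u
}

# Sample input standing in for: mmlsnsd -m | nsd_devices
nsd_devices <<'EOF'
 Disk name  NSD volume ID     Device             Node name or Class
-------------------------------------------------------------------
 nsd01      0A000D6965B4F0F6  /dev/mapper/nsd01  io01
 nsd01      0A000D6965B4F0F6  /dev/mapper/nsd01  io02
 nsd02      0A000D6A65B4F0F8  /dev/mapper/nsd02  io01
EOF
# prints:
# nsd01 /dev/mapper/nsd01
# nsd02 /dev/mapper/nsd02
```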
View the cluster configuration
[root@io01 ~]# mmlsconfig
Configuration data for cluster cluster1.spectrum:
-------------------------------------------------
clusterName cluster1.spectrum
clusterId 5550626004876455419
dmapiFileHandleSize 32
minReleaseLevel 5.0.5.1
ccrEnabled yes
cipherList AUTHONLY
[io01,io02]
pagepool 128G
[common]
autoload yes
verbsRdma enable
[io01,io02]
verbsPorts mlx5_2/1
[mgt,n01,n02,n03,n04]
verbsPorts mlx5_0/1
[common]
adminMode central

File systems in cluster cluster1.spectrum:
------------------------------------------
/dev/share
View cluster information
[root@io01 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         cluster1.spectrum
  GPFS cluster id:           5550626004876455419
  GPFS UID domain:           cluster1.spectrum
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address   Admin node name  Designation
-------------------------------------------------------------------
   1   io01              10.0.13.105  io01             quorum-manager
   2   io02              10.0.13.106  io02             quorum-manager
   3   mgt               10.0.13.100  mgt              quorum
   4   n01               10.0.13.130  n01
   5   n02               10.0.13.131  n02
   6   n03               10.0.13.132  n03
   7   n04               10.0.13.133  n04
Storage state information
[root@io01 ~]# mmgetstate -a

 Node number  Node name  GPFS state
-------------------------------------------
      1       io01       active
      2       io02       active
      3       mgt        active
      4       n01        active
      5       n02        active
      6       n03        active
      7       n04        active
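For automated health checks, the mmgetstate -a table can be scanned for nodes whose state is not "active". A minimal sketch, assuming the table layout shown above; it runs against sample output via a here-document, where in practice you would pipe mmgetstate -a directly.

```shell
#!/bin/sh
# List node names whose GPFS state is anything other than "active".
# NR > 2 skips the header and separator line; NF >= 3 skips blanks.
check_gpfs_state() {
  awk 'NR > 2 && NF >= 3 && $3 != "active" { print $2 }'
}

# Sample input standing in for: mmgetstate -a | check_gpfs_state
bad=$(check_gpfs_state <<'EOF'
 Node number  Node name  GPFS state
-------------------------------------------
       1      io01       active
       2      n01        down
EOF
)
echo "nodes not active: ${bad:-none}"   # prints: nodes not active: n01
```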
View disk information
[root@io01 ~]# mmlsdisk share
disk         driver   sector     failure holds    holds                            storage
name         type       size       group metadata data  status        availability pool
------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------
nsd01        nsd         512          11 Yes      Yes   ready         up           system
nsd02        nsd         512          12 Yes      Yes   ready         up           system
View storage capacity
[root@io01 ~]# mmdf share
disk                disk size  failure holds    holds           free in KB          free in KB
name                    in KB    group metadata data      in full blocks        in fragments
--------------- ------------- -------- -------- ----- --------------------   -----------------
Disks in storage pool: system (Maximum disk size allowed is 1.44 PB)
nsd01            195881336832       11 Yes      Yes     195708801024 (100%)     7326440 ( 0%)
nsd02            195881336832       12 Yes      Yes     195707568128 (100%)     7502152 ( 0%)
                -------------                        --------------------   -----------------
(pool total)     391762673664                           391416369152 (100%)    14828592 ( 0%)
                =============                        ====================   =================
(total)          391762673664                           391416369152 (100%)    14828592 ( 0%)

Inode Information
-----------------
Number of used inodes:         1459534
Number of free inodes:          942770
Number of allocated inodes:    2402304
Maximum number of inodes:    134217728
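mmdf reports sizes in KB, so the (pool total) figure can be cross-checked against the 365T that df -hT shows for /share. A quick arithmetic check, dividing the KiB total by 1024^3 to get TiB:

```shell
#!/bin/sh
# mmdf sizes are in KiB; dividing by 1024^3 converts to TiB.
# 391762673664 is the (pool total) value from the mmdf output above.
pool_total_kb=391762673664
awk -v kb="$pool_total_kb" 'BEGIN { printf "%.0f TiB\n", kb / 1024 / 1024 / 1024 }'
# prints: 365 TiB
```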
Common Operations
Check storage status
[root@mgt ~]# psh all "df -hT | grep gpfs"
io02: share      gpfs   365T  331G  365T   1% /share
io01: share      gpfs   365T  331G  365T   1% /share
n02:  share      gpfs   365T  331G  365T   1% /share
n01:  share      gpfs   365T  331G  365T   1% /share
n03:  share      gpfs   365T  331G  365T   1% /share
n04:  share      gpfs   365T  331G  365T   1% /share

If the file system is not mounted on a node, run the following on that node:

mmstartup
mmmount share
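The "start if not mounted" step can be scripted. A minimal sketch: the mounts table is taken as a parameter so the check can run against sample data instead of the live /proc/mounts, and the actual mmstartup/mmmount calls are left as a comment rather than executed.

```shell
#!/bin/sh
# Return success if the given mount point appears in a mounts table
# (/proc/mounts format: device mountpoint fstype options dump pass).
is_mounted() {
  awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' "${2:-/proc/mounts}"
}

# Demonstrate against a sample mounts table rather than the live system:
printf 'share /share gpfs rw,relatime 0 0\n' > /tmp/sample_mounts
if is_mounted /share /tmp/sample_mounts; then
  echo "/share already mounted"   # this branch runs for the sample data
else
  echo "starting GPFS"            # here you would run: mmstartup && mmmount share
fi
```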
Another way to mount the storage (start GPFS and mount on all nodes from a single node):
mmstartup -a
mmmount share -a
Storage Management
Start the distributed storage:
mmstartup -a

Note: to start GPFS on the local node only, run mmstartup without -a.
Check the distributed storage state:
mmgetstate -a
View the distributed file system configuration:
mmlsconfig
View file system parameters:
mmlsfs share
Mount the distributed file system:
mmmount all        # mount on the local node
mmmount all -a     # mount on all nodes

Check the mounts:

mmlsmount all -L
Unmount the distributed file system:
mmumount all
Show the distributed storage cluster configuration:
mmlscluster
View distributed storage NSD information:
mmlsnsd -m