
The mdadm command (managing RAID arrays) explained

Posted: 2021-02-06 09:55:19


The name mdadm is short for "multiple devices admin", and the command is used to manage RAID disk arrays. As the standard tool for software RAID on Linux, mdadm covers the full management lifecycle: creating, adjusting, monitoring, and deleting arrays.

Syntax: mdadm [options] device

Common options

-D  Display detailed information about a RAID device

-A  Assemble a previously defined RAID array

-l  Specify the RAID level

-n  Specify the number of active devices in the array

-f  Mark a member as faulty so it can be removed

-r  Remove a member from the RAID device

-a  Add a member to the RAID device

-S  Stop the RAID device and release all its resources

-x  Specify the number of spare members when creating the array
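These options combine into short one-liners. As a hedged sketch of the -S and -A pair (the device names /dev/md7 and /dev/sde1..3 are illustrative, matching the examples below), stopping an array and later reassembling it looks like:

```shell
# Stop the array and release its member devices (requires root).
mdadm -S /dev/md7

# Reassemble the previously defined array from its members;
# mdadm reads each member's superblock to restore the configuration.
mdadm -A /dev/md7 /dev/sde1 /dev/sde2 /dev/sde3
```

Because the array geometry is stored in the on-disk superblocks, -A needs no level or device-count options.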

First, create four partitions, then build the array from them:
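The partitioning step itself is not shown in the transcript; a hedged sketch with parted (assuming an empty 10 GB disk /dev/sde, with sizes matching the lsblk listing below: three 2 GB members and one 4 GB spare):

```shell
# Label the disk and carve out four partitions (destructive; requires root).
parted -s /dev/sde mklabel gpt
parted -s /dev/sde mkpart primary 1MiB 2049MiB      # sde1, 2G
parted -s /dev/sde mkpart primary 2049MiB 4097MiB   # sde2, 2G
parted -s /dev/sde mkpart primary 4097MiB 6145MiB   # sde3, 2G
parted -s /dev/sde mkpart primary 6145MiB 10241MiB  # sde4, 4G
```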

[root@compute ~]# mdadm -C /dev/md7 -n 3 -l 5 -x 1 /dev/sde{1,2,3,4}
mdadm: /dev/sde1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Tue Mar 7 17:24:53
mdadm: /dev/sde2 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Tue Mar 7 17:24:53
mdadm: /dev/sde3 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Tue Mar 7 17:24:53
mdadm: /dev/sde4 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Tue Mar 7 17:24:53
mdadm: largest drive (/dev/sde4) exceeds size (2094080K) by more than 1%
Continue creating array? y
mdadm: Fail create md7 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md7 started.

sde        8:64   0  10G  0 disk
├─sde1     8:65   0   2G  0 part
│ └─md7    9:7    0   4G  0 raid5
├─sde2     8:66   0   2G  0 part
│ └─md7    9:7    0   4G  0 raid5
├─sde3     8:67   0   2G  0 part
│ └─md7    9:7    0   4G  0 raid5
└─sde4     8:68   0   4G  0 part
  └─md7    9:7    0   4G  0 raid5

[root@compute ~]# mdadm -D /dev/md7
/dev/md7:
           Version : 1.2
     Creation Time : Tue Mar 7 17:29:25
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Tue Mar 7 17:29:36
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : compute:7  (local to host compute)
              UUID : dcdfacfb:9b2f17cd:ce176b58:9e7c56a4
            Events : 18

    Number   Major   Minor   RaidDevice   State
       0       8       65        0        active sync   /dev/sde1
       1       8       66        1        active sync   /dev/sde2
       4       8       67        2        active sync   /dev/sde3
       3       8       68        -        spare         /dev/sde4
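Besides mdadm -D, the kernel exposes array state in /proc/mdstat. As a sketch, the status line can be pulled out with awk; the sample text below is illustrative (modeled on this array), not captured from the system:

```shell
# Sample /proc/mdstat content for a 3-member RAID 5 with one spare (S).
sample='md7 : active raid5 sde3[4] sde2[1] sde1[0] sde4[3](S)
      4188160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]'

# Print the device name and RAID level; on a live system, replace
# the echo with: cat /proc/mdstat
echo "$sample" | awk '/^md/ {print $1, $4}'   # prints: md7 raid5
```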

[root@compute ~]# mdadm -S /dev/md7    # stop the whole array in one step
[root@compute ~]# mdadm -C /dev/md7 -x 1 -n 3 -l 5 /dev/sdd{1..4}
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Tue Mar 7 17:25:51
mdadm: /dev/sdd2 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Tue Mar 7 17:25:51
mdadm: largest drive (/dev/sdd4) exceeds size (2094080K) by more than 1%
Continue creating array? yes
mdadm: Fail create md7 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md7 started.

sdd        8:48   0  10G  0 disk
├─sdd1     8:49   0   2G  0 part
│ └─md7    9:7    0   4G  0 raid5
├─sdd2     8:50   0   2G  0 part
│ └─md7    9:7    0   4G  0 raid5
├─sdd3     8:51   0   2G  0 part
│ └─md7    9:7    0   4G  0 raid5
└─sdd4     8:52   0   4G  0 part
  └─md7    9:7    0   4G  0 raid5

[root@compute ~]# mdadm -D /dev/md7
/dev/md7:
           Version : 1.2
     Creation Time : Wed Mar 8 14:52:00
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Wed Mar 8 14:52:11
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : compute:7  (local to host compute)
              UUID : 700d9a77:0babc26d:3b5a6768:223d47a8
            Events : 18

    Number   Major   Minor   RaidDevice   State
       0       8       49        0        active sync   /dev/sdd1
       1       8       50        1        active sync   /dev/sdd2
       4       8       51        2        active sync   /dev/sdd3
       3       8       52        -        spare         /dev/sdd4

[root@compute ~]# mdadm /dev/md7 -f /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md7
[root@compute ~]# mdadm -D /dev/md7
/dev/md7:
           Version : 1.2
     Creation Time : Wed Mar 8 14:52:00
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Wed Mar 8 14:59:00
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
    Rebuild Status : 66% complete
              Name : compute:7  (local to host compute)
              UUID : 700d9a77:0babc26d:3b5a6768:223d47a8
            Events : 30

    Number   Major   Minor   RaidDevice   State
       3       8       52        0        spare rebuilding   /dev/sdd4
       1       8       50        1        active sync        /dev/sdd2
       4       8       51        2        active sync        /dev/sdd3
       0       8       49        -        faulty             /dev/sdd1
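As soon as a member is marked faulty, the spare takes its slot and a rebuild starts, as the "spare rebuilding" state above shows. A hedged sketch for following the rebuild (standard mdadm and procfs interfaces; the device name is illustrative):

```shell
# Refresh the rebuild progress bar every two seconds.
watch -n 2 cat /proc/mdstat

# Or simply block until the resync/rebuild completes.
mdadm --wait /dev/md7
```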

[root@compute ~]# mdadm /dev/md7 -r /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md7
[root@compute ~]# mdadm -D /dev/md7
/dev/md7:
           Version : 1.2
     Creation Time : Wed Mar 8 14:52:00
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Wed Mar 8 15:00:22
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : compute:7  (local to host compute)
              UUID : 700d9a77:0babc26d:3b5a6768:223d47a8
            Events : 38

    Number   Major   Minor   RaidDevice   State
       3       8       52        0        active sync   /dev/sdd4
       1       8       50        1        active sync   /dev/sdd2
       4       8       51        2        active sync   /dev/sdd3

[root@compute ~]# mdadm /dev/md7 -a /dev/sdd1
mdadm: added /dev/sdd1
[root@compute ~]# mdadm -D /dev/md7
/dev/md7:
           Version : 1.2
     Creation Time : Wed Mar 8 14:52:00
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Wed Mar 8 15:01:29
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : compute:7  (local to host compute)
              UUID : 700d9a77:0babc26d:3b5a6768:223d47a8
            Events : 39

    Number   Major   Minor   RaidDevice   State
       3       8       52        0        active sync   /dev/sdd4
       1       8       50        1        active sync   /dev/sdd2
       4       8       51        2        active sync   /dev/sdd3
       5       8       49        -        spare         /dev/sdd1
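With the re-added device back in place as a spare, the array is complete again and can be persisted and put to use. A hedged sketch (the config path is the conventional one; on Debian-family systems it is /etc/mdadm/mdadm.conf instead, and the mount point is illustrative):

```shell
# Record the array definition so it is assembled automatically at boot.
mdadm --detail --scan >> /etc/mdadm.conf

# Create a filesystem on the array and mount it.
mkfs.ext4 /dev/md7
mkdir -p /mnt/raid5
mount /dev/md7 /mnt/raid5
```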
