
isw_raid_member (Intel Rapid Storage) on Debian

Asked by Andrew Mclaughlin

I have three HDDs configured as an Intel Rapid Storage RAID5, set up through my desktop motherboard's (Asus P8Z77-V LX) integrated firmware via the BIOS. The RAID is formatted with a single NTFS partition.

In autumn I moved the drives to a headless Debian 8 home server, running mdadm 3.2. It was able to automatically recognize and create a device for the RAID without issue.

However, I can't get any more recent version of mdadm (3.3+) to recognize the RAID. I eventually tried erasing the superblocks and recreating a Linux RAID5, but that led to a completely broken filesystem, to the point that even raw data recovery only found broken files (one possible issue is that Intel Rapid Storage computes stripe chunk sizes in kibibytes, whereas mdadm only creates stripes with chunks in kilobytes).
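For what it's worth, the unit difference in that parenthetical would shift every chunk boundary. A quick back-of-envelope check (my own arithmetic, illustrating the hypothesis rather than proving it is the actual cause):

```shell
# Hypothetical comparison of a 128 KiB chunk vs a 128 kB chunk.
kib_chunk=$((128 * 1024))   # 131072 bytes (kibibytes, powers of two)
kb_chunk=$((128 * 1000))    # 128000 bytes (kilobytes, powers of ten)
echo "offset drift per chunk: $((kib_chunk - kb_chunk)) bytes"
# If the two implementations really did disagree, chunk N would start
# 3072*N bytes apart between them, scrambling any filesystem on top.
```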

I have since recovered the data by moving the disks back to my desktop and recreating an Intel RAID on top of them, as it was originally.

I would however like to use an updated version of mdadm rather than pinning it to 3.2, not least because a newer version is required to upgrade to Debian 9. Does anybody know how to go about this?

The following is from my desktop system (Arch, running mdadm 4.0) with BIOS RAID support disabled, so Linux just sees the physical drives. The RAID member devices are sdc, sdd and sde, as you can see.

[root@desktop-linux fabrizio]# lsblk -o +FSTYPE
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT FSTYPE
sda 8:0 0 111,8G 0 disk
└─sda1 8:1 0 111,8G 0 part / ext4
sdb 8:16 0 119,2G 0 disk
├─sdb1 8:17 0 500M 0 part ntfs
└─sdb2 8:18 0 118,8G 0 part /mnt/win10_os ntfs
sdc 8:32 0 1,8T 0 disk isw_raid_member
sdd 8:48 0 1,8T 0 disk isw_raid_member
sde 8:64 0 1,8T 0 disk isw_raid_member
sdf 8:80 0 465,8G 0 disk
└─sdf1 8:81 0 465,8G 0 part /mnt/win10_utilities ntfs


[root@desktop-linux fabrizio]# mdadm --examine /dev/sd[cde]
/dev/sdc:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : 0028b89b
         Family : 0028b89b
     Generation : 00000062
     Attributes : All supported
           UUID : 137d7329:b874d09c:ecb307ad:bfd6b70a
       Checksum : 1468caa4 correct
    MPB Sectors : 2
          Disks : 3
   RAID Devices : 1

  Disk00 Serial : WD-WCC4M4EYDENC
          State : active
             Id : 00000002
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 92c1c9bd:d1701a29:94bf5aa6:be8fd1d8
     RAID Level : 5 <-- 5
        Members : 3 <-- 3
          Slots : [UUU] <-- [UUU]
    Failed disk : none
      This Slot : 0
     Array Size : 7814047744 (3726.03 GiB 4000.79 GB)
   Per Dev Size : 3907024136 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261812
     Chunk Size : 128 KiB <-- 128 KiB
       Reserved : 0
  Migrate State : initialize
      Map State : normal <-- uninitialized
     Checkpoint : 567605 (768)
    Dirty State : clean

  Disk01 Serial : WD-WCC4M5AK581C
          State : active
             Id : 00000003
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)

  Disk02 Serial : WD-WCC4M5AK5JKY
          State : active
             Id : 00000004
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)
/dev/sdd:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : 0028b89b
         Family : 0028b89b
     Generation : 00000062
     Attributes : All supported
           UUID : 137d7329:b874d09c:ecb307ad:bfd6b70a
       Checksum : 1468caa4 correct
    MPB Sectors : 2
          Disks : 3
   RAID Devices : 1

  Disk01 Serial : WD-WCC4M5AK581C
          State : active
             Id : 00000003
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 92c1c9bd:d1701a29:94bf5aa6:be8fd1d8
     RAID Level : 5 <-- 5
        Members : 3 <-- 3
          Slots : [UUU] <-- [UUU]
    Failed disk : none
      This Slot : 1
     Array Size : 7814047744 (3726.03 GiB 4000.79 GB)
   Per Dev Size : 3907024136 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261812
     Chunk Size : 128 KiB <-- 128 KiB
       Reserved : 0
  Migrate State : initialize
      Map State : normal <-- uninitialized
     Checkpoint : 567605 (768)
    Dirty State : clean

  Disk00 Serial : WD-WCC4M4EYDENC
          State : active
             Id : 00000002
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)

  Disk02 Serial : WD-WCC4M5AK5JKY
          State : active
             Id : 00000004
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)
/dev/sde:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : 0028b89b
         Family : 0028b89b
     Generation : 00000062
     Attributes : All supported
           UUID : 137d7329:b874d09c:ecb307ad:bfd6b70a
       Checksum : 1468caa4 correct
    MPB Sectors : 2
          Disks : 3
   RAID Devices : 1

  Disk02 Serial : WD-WCC4M5AK5JKY
          State : active
             Id : 00000004
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 92c1c9bd:d1701a29:94bf5aa6:be8fd1d8
     RAID Level : 5 <-- 5
        Members : 3 <-- 3
          Slots : [UUU] <-- [UUU]
    Failed disk : none
      This Slot : 2
     Array Size : 7814047744 (3726.03 GiB 4000.79 GB)
   Per Dev Size : 3907024136 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261812
     Chunk Size : 128 KiB <-- 128 KiB
       Reserved : 0
  Migrate State : initialize
      Map State : normal <-- uninitialized
     Checkpoint : 567605 (768)
    Dirty State : clean

  Disk00 Serial : WD-WCC4M4EYDENC
          State : active
             Id : 00000002
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)

  Disk01 Serial : WD-WCC4M5AK581C
          State : active
             Id : 00000003
    Usable Size : 3907024136 (1863.01 GiB 2000.40 GB)

mdadm --detail --scan returns an empty string.
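One avenue that might be worth testing before pinning the old version (an untested suggestion on my part): newer mdadm builds validate IMSM metadata against the platform's Intel option ROM and can refuse to assemble the array on a machine where RST support is disabled or absent, which would match the symptoms here. The `IMSM_NO_PLATFORM=1` environment variable is meant to bypass that check.

```shell
# Untested sketch: ask mdadm to skip the Intel platform check and assemble
# the IMSM container plus its member volume from the on-disk metadata.
sudo IMSM_NO_PLATFORM=1 mdadm --assemble --scan --verbose

# If it works, an IMSM container (e.g. /dev/md/imsm0) and the RAID5 volume
# should show up here:
cat /proc/mdstat
```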


1 Answer

In my case, I'm not sure how it worked on Debian 8, but on Debian 10 Intel RST is handled by dmraid, which automatically assembled a virtual block device at /dev/mapper/isw_<10lettershere>_Volume_0000.
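If the device node doesn't appear on its own, dmraid can be driven by hand (this assumes the `dmraid` package is installed; I haven't re-run these exact invocations against this array):

```shell
# List the RAID sets dmraid discovers from the on-disk ISW metadata
sudo dmraid -s

# Activate all discovered sets, creating the /dev/mapper/isw_* nodes
sudo dmraid -ay
```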

dmesg showed these relevant lines:

[21120.363597] md/raid:mdX: device sda operational as raid disk 0
[21120.363600] md/raid:mdX: device sdb operational as raid disk 1
[21120.363601] md/raid:mdX: device sdc operational as raid disk 2
[21120.363602] md/raid:mdX: device sdd operational as raid disk 3
[21120.364249] md/raid:mdX: raid level 5 active with 4 out of 4 devices, algorithm 0

dmsetup listed the device:

rsaxvc@localghost:/$ sudo dmsetup ls
isw_bcegjbdfjj_Volume_0000 (254:0)

Probing the block device for partitions with partprobe found them:

rsaxvc@localghost:/$ sudo partprobe /dev/mapper/isw_<10letters>_Volume_0000
rsaxvc@localghost:/$ ls /dev/mapper/isw_*
/dev/mapper/isw_<10letters>_Volume_0000
/dev/mapper/isw_<10letters>_Volume_0000p1
/dev/mapper/isw_<10letters>_Volume_0000p2

At this point I was able to mount one of the partitions that another OS had created on the Intel RST volume under Debian 10. These are the relevant versions that came with Debian 10 on my machine:
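For completeness, the final mount is just an ordinary NTFS mount of one of those partition nodes (the mount point is illustrative, the `<10letters>` placeholder stands for the real volume name, and `ntfs-3g` is assumed to be installed):

```shell
sudo mkdir -p /mnt/rst
sudo mount -t ntfs-3g /dev/mapper/isw_<10letters>_Volume_0000p1 /mnt/rst
```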

rsaxvc@localghost:~$ dmraid --version
dmraid version: 1.0.0.rc16 (2009.09.16) shared
dmraid library version: 1.0.0.rc16 (2009.09.16)
device-mapper version: 4.39.0
rsaxvc@localghost:~$ uname -a
Linux localghost 4.19.0-13-amd64 #1 SMP Debian 4.19.160-2 (2020-11-28) x86_64 GNU/Linux
rsaxvc@localghost:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
