
Softraid error

This article will deal with the following case: a Linux software RAID6 (mdadm) that loses three data harddisks in a row. It starts out as a perfect RAID6 (state 1):

    md0 : active raid6 sdn1(S) sdm1 sdk1 sdj1 sdh1 sdg1
          305664 blocks super 1.2 level 6, 512k chunk, algorithm 2

For some unknown reason /dev/sdk1 fails and rebuild starts on the spare /dev/sdn1 (state 2):

    md0 : active raid6 sdn1 sdm1 sdk1(F) sdj1 sdh1 sdg1
          recovery = 11.5% (12284/101888) finish=0.

Before the rebuild finishes a second data harddisk (/dev/sdg1) fails. Now all redundancy is lost, and losing another data disk will fail the RAID. The rebuild on /dev/sdn1 continues (state 3):

    md0 : active raid6 sdn1 sdm1 sdk1(F) sdj1 sdh1 sdg1(F)
          recovery = 59.0% (60900/101888) finish=0.6min speed=1018K/sec

Before the rebuild finishes, yet another data harddisk (/dev/sdh1) fails, thus failing the RAID. The rebuild on /dev/sdn1 cannot continue, so /dev/sdn1 reverts to its status as spare (state 4):

    md0 : active raid6 sdn1(S) sdm1 sdk1(F) sdj1 sdh1(F) sdg1(F)

This is the situation we are going to recover from. The goal is to get back to state 3 with minimal data loss.

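The snapshots above are what /proc/mdstat reports at each state; the rebuild can be followed live with standard tooling (the watch invocation is an illustration, not from the original article):

    $ watch cat /proc/mdstat
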
GNU Parallel is used for several of the commands below. If it is not packaged for your distribution, install it by hand.
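
A minimal sketch of a source install (the download URL and build steps are assumptions, not taken from the original article):

    # Fetch and build GNU Parallel from the GNU archive (assumed URL)
    wget https://ftp.gnu.org/gnu/parallel/parallel-latest.tar.bz2
    tar xjf parallel-latest.tar.bz2
    cd parallel-*/
    ./configure && make && sudo make install
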
We will need the UUID of the array to identify the harddisks. This is especially important if you have multiple RAIDs connected to the system. Take the UUID from one of the non-failed harddisks (here /dev/sdj1).
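
A sketch of extracting it into $UUID (the parsing command is an assumption; mdadm -E prints a line of the form "Array UUID : ..."):

    # Pull the Array UUID out of the md superblock on /dev/sdj1
    UUID=$(mdadm -E /dev/sdj1 | sed -n 's/^ *Array UUID : *//p')
    echo $UUID
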
The failed harddisks are right now kicked off by the kernel and not visible anymore, so you need to make the kernel re-discover the devices. That can be done by re-seating the harddisks (if they are hotswap) or by rebooting. After the re-seating/rebooting the failed harddisks will often have been given different device names. We use the $UUID to identify the new device names.
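
One way to collect the new names into $DEVICES (the loop and the /dev/sd* glob are assumptions; any partition whose md superblock carries our UUID belongs to the array):

    # Scan all sd* partitions and keep those whose superblock matches $UUID
    DEVICES=$(for d in /dev/sd*[0-9]; do
        mdadm -E "$d" 2>/dev/null | grep -q "$UUID" && echo "$d"
    done)
    echo $DEVICES
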
Making the harddisks read-only using an overlay file

To make sure nothing is written to the failed harddisks during the recovery, every device is accessed through a copy-on-write overlay: reads come from the original harddisk, while all writes go to an overlay file. Each overlay has to cover the whole device:

    size_bkl=$(blockdev --getsz $d)   # in 512-byte blocks/sectors

and an overlay is removed again by tearing down its device-mapper snapshot and loop device:

    dmsetup remove $b
    losetup -d $(losetup -j $b.ovr | cut -d : -f1)

The overlay devices show up as /dev/mapper/$b; their names are collected in $OVERLAYS, as sketched below.
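
A fuller sketch of the overlay create/remove steps (the function names, the 4 GB overlay file size and the dmsetup snapshot table parameters are assumptions; only the fragments quoted above are from the original):

    overlay_create() {
        for d in $DEVICES; do
            b=$(basename $d)
            size_bkl=$(blockdev --getsz $d)   # in 512-byte blocks/sectors
            truncate -s 4G $b.ovr             # sparse file that receives all writes
            loop=$(losetup -f --show -- $b.ovr)
            # device-mapper snapshot: read from $d, write to the loop-backed overlay
            dmsetup create $b --table "0 $size_bkl snapshot $d $loop P 8"
            echo /dev/mapper/$b
        done
    }

    overlay_remove() {
        for d in $DEVICES; do
            b=$(basename $d)
            dmsetup remove $b
            losetup -d $(losetup -j $b.ovr | cut -d : -f1)
            rm -f $b.ovr
        done
    }

    OVERLAYS=$(overlay_create)

All mdadm commands below are run against $OVERLAYS, so the original harddisks stay untouched.
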
The Update time tells us which drive failed when:

    $ parallel --tag -k mdadm -E ::: $OVERLAYS | grep -E 'Update'
    /dev/mapper/sdq1    Update Time : Sat May  4 15:32:43 2013   # 3rd to fail
    /dev/mapper/sds1    Update Time : Sat May  4 15:32:03 2013   # 2nd to fail
    /dev/mapper/sdt1    Update Time : Sat May  4 15:29:47 2013   # 1st to fail

Looking at each harddisk's Device Role it is clear that the 3 devices that failed were indeed data devices:

    $ parallel --tag -k mdadm -E ::: $OVERLAYS | grep -E 'Role'
    /dev/mapper/sdq1    Device Role : Active device 1   # 3rd to fail
    /dev/mapper/sds1    Device Role : Active device 0   # 2nd to fail
    /dev/mapper/sdt1    Device Role : Active device 3   # 1st to fail
    /dev/mapper/sdu1    Device Role : Active device 2
    /dev/mapper/sdw1    Device Role : Active device 4

So we are interested in assembling a RAID with the devices that were active last (sdu1, sdw1) and the last to fail (sdq1). By forcing the assembly you can make mdadm clear the faulty state:

    $ mdadm --assemble --force /dev/md1 $OVERLAYS
    mdadm: forcing event count in /dev/mapper/sdq1(1) from 143 upto 148
    mdadm: clearing FAULTY flag for device 4 in /dev/md1 for /dev/mapper/sdv1
    mdadm: /dev/md1 has been started with 3 drives (out of 5) and 1 spare.

The RAID is now assembled and running on the overlay devices. Because every write still goes to the overlay files, the original harddisks remain untouched while the result is checked.
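
The assembled array can be inspected before going any further (standard mdadm usage, not a command from the original article):

    $ mdadm --detail /dev/md1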