Jeff Bailey on 19 Aug 2011 19:37:43 -0700


[PLUG] Degraded RAID


Hey all...

mdadm is showing the state of my 3-drive raid5 array as "clean, degraded". One of the devices is listed as "removed". (mdadm output pasted at end of email)

I can't find a nice, pedantic summary of what this means. I assume it means I'm running without redundancy, and that if another drive dies, I'm toast?
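
For what it's worth, my reading is that /proc/mdstat should show the same degraded state, with an underscore in place of the missing member; I'd expect something like "[3/2] [U_U]" for a 3-disk array running on two:

    cat /proc/mdstat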

Are there any options other than replacing the drive?
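
One thing I've run across (not sure if it applies here) is that if the drive was only kicked out after a transient error and is actually still healthy, it can sometimes be re-added rather than replaced, something like:

    mdadm /dev/md0 --re-add /dev/sdc1

(where /dev/sdc1 is just my guess at the missing member's name). I'd only try that if the drive's SMART data looks clean, though.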

And if I do have to replace the drive, I can replace it with anything that's at least as large as the current one, right? The current drives are 320GB, so I could throw a 1TB drive in as a replacement and grow the array at some later point?
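
If that's right, I assume the procedure is roughly: partition the new drive with a partition at least as large as the old one, add it, and let md rebuild onto it:

    # /dev/sdc1 is a guess at what the new partition would be called
    mdadm --manage /dev/md0 --add /dev/sdc1
    # then watch the rebuild progress
    cat /proc/mdstat

And then, once all three members have been upgraded (the array size is limited by the smallest member, so one big drive alone won't buy me anything), grow it with something like "mdadm --grow /dev/md0 --size=max" plus a filesystem resize?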

Thanks for any input....

mdadm --detail /dev/md0 output:

/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Dec  6 10:55:13 2007
     Raid Level : raid5
     Array Size : 625137152 (596.18 GiB 640.14 GB)
  Used Dev Size : 312568576 (298.09 GiB 320.07 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Aug 19 22:29:53 2011
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 88797d30:4bbe3cca:c7780c0e:bc15422d (local to host nas)
         Events : 0.51288

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       8       49        2      active sync   /dev/sdd1