
Fun with SVM RAID 5


KenCunningham

Good afternoon folks. We've hit a strange issue trying to replace a disk in a RAID 5 metadevice. When we attempt the replacement, we get the following message:

Code:
# metareplace: <servername>: d7: c1t8d0s7: can't find component in unit

with c1t8d0s7 being the disk in maintenance according to metastat.

Has anyone seen this and come up with a resolution? It's as if the metadevice is holding on to the disk despite the fact it has failed (it still shows as needing maintenance in metastat). I've searched for the message on Google, but the only really relevant hits I've found refer to mirrors rather than RAID 5 devices. Any help/advice gratefully received.
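For reference, a quick way to double-check exactly which component names d7 thinks it contains (a sketch only - the -p option may depend on the DiskSuite/SVM release in use):

Code:
# full status for the RAID 5 metadevice, including the "Invoke:" hint
metastat d7
# compact, md.tab-style listing of the components d7 was built from
metastat -p d7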

My own thoughts are that we're probably looking at a rebuild but I wanted to check all avenues before departing down that lonely path.

The internet - allowing those who don't know what they're talking about to have their say.
 
Thanks John, I saw that (it seems to be replicated in quite a few places), but it refers to a mirrored configuration whereas we're looking at RAID 5. I think the principles are pretty much the same, and I intend to follow this procedure to (I hope) correct the situation:

As this is a V880:

Code:
# force-remove the failed disk from the loop and clean up the device links
luxadm remove_device -F /dev/rdsk/c1t8d0s2
devfsadm -C -c disk

# physically remove the failed disk and insert the new one

# rebuild the device links, relabel the new disk and put it back into d7
devfsadm
fmthard -s saved.vtoc /dev/rdsk/c1t8d0s2
metareplace -d d7 c1t8d0s7
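One point worth flagging on the above: the fmthard -s saved.vtoc step assumes the old label was captured to saved.vtoc before the disk was pulled. A minimal sketch of doing that (file name as above; take the label from an identically-partitioned member if the failed disk won't read):

Code:
# save the whole-disk label before removing the failed drive
prtvtoc /dev/rdsk/c1t8d0s2 > saved.vtoc
# or, if c1t8d0 is unreadable, copy the label of a healthy member
# prtvtoc /dev/rdsk/c1t4d0s2 > saved.vtoc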

If anyone can see any glaring (or even not-so-glaring) problems with this procedure, please feel free to post - I have a window to do this work on Friday morning (10/6/2011, UK time).

A secondary question about RAID 5: this diskset consists of five disks, one of which has failed. If a second disk failed, would the RAID 5 configuration still cope, given that the filesystem residing on it is only 9% used? Thanks again.

 
Ken,

I believe in all cases a RAID 5 array can only handle a single drive failure. I've lost two drives in a RAID 5 array before and it was toast.
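To see why how full the filesystem is makes no difference, here is a toy sketch of the parity arithmetic (plain shell, nothing SVM-specific, purely illustrative):

Code:
# RAID 5 parity is a block-level XOR across the members, so a rebuild
# needs every block of a stripe bar one - filesystem utilisation never
# enters into it.
d1=0x5a; d2=0x3c; d3=0xf0; d4=0x0f
p=$(( d1 ^ d2 ^ d3 ^ d4 ))                             # parity for this stripe
printf 'recovered d3 = 0x%x\n' $(( d1 ^ d2 ^ d4 ^ p )) # one loss: fine
# lose two members and you have one equation with two unknowns - no recovery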

Thanks,

John
 
Thanks John, I thought perhaps I was living more in hope than expectation! Drive to be replaced this morning, so here's hoping it lasts until then.

 
Hmm. The procedure above made no difference, though it should have. The only option left to maintain data redundancy seems to be to rebuild the RAID 5 and restore to it. Bah!

 
Ken,

Can you provide metastat output and metadb -i output, please?
 
Hi Chris (I think?), apologies for not getting back to you before now - we've had problems with an M4000 disk this morning too! Anyway, the metastat output is:

Code:
# metastat
d0: Mirror
    Submirror 0: d10
      State: Okay
    Submirror 1: d20
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 2055552 blocks

d10: Submirror of d0
    State: Okay
    Size: 2055552 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t0d0s0          0     No    Okay


d20: Submirror of d0
    State: Okay
    Size: 2055552 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t1d0s0          0     No    Okay


d1: Mirror
    Submirror 0: d11
      State: Okay
    Submirror 1: d21
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 3073152 blocks

d11: Submirror of d1
    State: Okay
    Size: 3073152 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t0d0s1          0     No    Okay


d21: Submirror of d1
    State: Okay
    Size: 3073152 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t1d0s1          0     No    Okay


d2: Mirror
    Submirror 0: d12
      State: Okay
    Submirror 1: d22
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 10247232 blocks

d12: Submirror of d2
    State: Okay
    Size: 10247232 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t0d0s4          0     No    Okay


d22: Submirror of d2
    State: Okay
    Size: 10247232 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t1d0s4          0     No    Okay


d3: Mirror
    Submirror 0: d13
      State: Okay
    Submirror 1: d23
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 2055552 blocks

d13: Submirror of d3
    State: Okay
    Size: 2055552 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t0d0s3          0     No    Okay


d23: Submirror of d3
    State: Okay
    Size: 2055552 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t1d0s3          0     No    Okay


d6: Mirror
    Submirror 0: d15
      State: Okay
    Submirror 1: d25
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 143318784 blocks

d15: Submirror of d6
    State: Okay
    Size: 143318784 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t2d0s7          0     No    Okay


d25: Submirror of d6
    State: Okay
    Size: 143318784 blocks
    Stripe 0:
        Device     Start Block  Dbase State        Hot Spare
        c1t3d0s7          0     No    Okay


d7: RAID
    State: Needs Maintenance
    Invoke: metareplace d7 c1t8d0s7 <new device>
    Hot spare pool: hsp001
    Interlace: 32 blocks
    Size: 573264960 blocks
Original device:
    Size: 573273728 blocks
        Device      Start Block  Dbase State        Hot Spare
        c1t4d0s7         330     No    Okay
        c1t5d0s7         330     No    Okay
        c1t8d0s7         660     No    Maintenance
        c1t9d0s7         330     No    Okay
        c1t10d0s7        330     No    Okay

d8: RAID
    State: Okay
    Hot spare pool: hsp002
    Interlace: 130 blocks
    Size: 286627392 blocks
Original device:
    Size: 286634920 blocks
        Device      Start Block  Dbase State        Hot Spare
        c1t11d0s7       1310     No    Okay
        c1t12d0s7       1310     No    Okay
        c1t13d0s7       1310     No    Okay

hsp001: is empty

hsp002: is empty

and the metadb -i output is:

Code:
metadb -i
      flags           first blk       block count
   a m  p  luo        16              1034            /dev/dsk/c1t1d0s7
   a    p  luo        16              1034            /dev/dsk/c1t2d0s0
   a        u         16              1034            /dev/dsk/c1t3d0s0
   a        u         1050            1034            /dev/dsk/c1t3d0s0
   a    p  luo        16              1034            /dev/dsk/c1t5d0s0
   a    p  luo        1050            1034            /dev/dsk/c1t5d0s0
   a        u         2084            1034            /dev/dsk/c1t3d0s0
   a    p  luo        16              1034            /dev/dsk/c1t9d0s0
   a    p  luo        1050            1034            /dev/dsk/c1t9d0s0
   a    p  luo        16              1034            /dev/dsk/c1t10d0s0
   a    p  luo        1050            1034            /dev/dsk/c1t10d0s0
   a    p  luo        1050            1034            /dev/dsk/c1t2d0s0
   a    p  luo        2084            1034            /dev/dsk/c1t2d0s0
   a    p  luo        16              1034            /dev/dsk/c1t12d0s0
   a    p  luo        1050            1034            /dev/dsk/c1t12d0s0
   a    p  luo        16              1034            /dev/dsk/c1t13d0s0
   a    p  luo        1050            1034            /dev/dsk/c1t13d0s0
 o - replica active prior to last mddb configuration change
 u - replica is up to date
 l - locator for this replica was read successfully
 c - replica's location was in /etc/lvm/mddb.cf
 p - replica's location was patched in kernel
 m - replica is master, this is replica selected as input
 W - replica has device write errors
 a - replica is active, commits are occurring to this replica
 M - replica had problem with master blocks
 D - replica had problem with data blocks
 F - replica had format problems
 S - replica is too small to hold current data base
 R - replica had device read errors

Thanks

 
Hey Ken, and yes, it is Chris.

I noticed from one of your posts that you ran metareplace -d d7 c1t8d0s7 (is this a typo?). It should be metareplace -e d7 c1t8d0s7.

A couple of other things I would check:

1) Does format show the drive? Check the last five digits of the WWN in the format output for c1t8d0 - it should match the WWN on the front of the drive.

2) Run ls -al /dev/dsk/c1t8d0s* and make sure the WWN in the path matches the WWN on the drive. If it doesn't, check that it isn't the WWN of the bad drive, as that would mean the old entries were not properly removed from /dev/dsk and /dev/rdsk.

3) Run prtvtoc /dev/dsk/c1t4d0s2 and then prtvtoc /dev/dsk/c1t8d0s2, and make sure the labels are the same.

If you want, you could post the format output, the ls -al /dev/dsk/c1t8d0s* output and the prtvtoc output, and I will look at them - the commands are gathered below for convenience.
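The checks above gathered into one block (a sketch, assuming the same controller/target numbers as in the metastat output earlier in the thread):

Code:
# 1) list the disks and eyeball the c1t8d0 entry - the WWN appears in
#    the device path printed underneath it
format < /dev/null
# 2) do the /dev/dsk links point at the new drive's WWN?
ls -al /dev/dsk/c1t8d0s*
# 3) does the new disk's label match a healthy member of d7?
prtvtoc /dev/dsk/c1t4d0s2
prtvtoc /dev/dsk/c1t8d0s2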
 
Hi Chris, thanks for that. I'll get back to you, but thought I'd better confirm that yes it was metareplace -e I used, not -d. Fat fingers strike again!

 
I've checked out all of the points covered, and everything looks in order as far as format etc. is concerned. I'm almost convinced that the root cause is the dodgy Start Block for the bad disk, though why that should be I don't know (incidentally, we don't have metadevadm available as specified in the Stromberg link above, so perhaps this is why).

It seems that if a disk has been replaced and then needs to be replaced again, metareplace cannot be used and the device has to be rebuilt from scratch:

Code:
Using metareplace(1M) on a RAID5 Metadevice May Fail to Run After Disk Failure [ID 1000510.1] 

--------------------------------------------------------------------------------
 
  Modified 05-APR-2011     Type ALERT     Migrated ID 200651     Status PUBLISHED   
Product
Solaris 8 Operating System

Bug Id
SUNBUG: 4633012

Date of Workaround Release
15-NOV-2002

Date of Resolved Release
20-MAR-2003

Impact

Using the metareplace(1M) command may result in the protection offered by RAID5 devices being compromised and further disk failure may result in the loss of redundancy and possible data loss. 


Contributing Factors

This issue can occur in the following releases: 

SPARC Platform 

•Solstice DiskSuite 4.2.1 (for Solaris 8) without patch 108693-15 
This issue can only occur if the following are both true: 

•RAID5 metadevices are configured 
•The system has Fibre Channel Arbitrated Loop (FCAL) connected disk drives, for example the V880 
Notes: Earlier releases of Solstice DiskSuite are not impacted by this issue. 

With the release of Solaris 9, Solstice DiskSuite became known as Solaris Volume Manager (SVM) and is bundled with Solaris 9. 


Symptoms

When a disk fails and is replaced, the start block of the RAID5 stripe on that disk will have been changed: 

	d6: RAID
		State: Okay
		Interlace: 64 blocks
		Size: 716706144 blocks
	Original device:
		Size: 716709120 blocks
	Device              Start Block  Dbase State        Hot Spare
	c0t0d0s2                5438     No    Okay
	c0t1d0s2                5438     No    Okay
	c0t2d0s2                5438     No    Okay
	c0t3d0s2                5438     No    Okay
	c0t4d0s2                5438     No    Okay
	c0t5d0s2                5438     No    Okay
	c1t16d0s2               6088     No    Okay
	c1t17d0s2               5438     No    Okay
	c1t18d0s2               5438     No    Okay
	c1t19d0s2               5438     No    Okay
	c1t20d0s2               5438     No    OkayThe above output shows that c1t16d0s2 has been metareplace'd in the past (and its start block has moved). 

If this disk fails again then it cannot be replaced with a new drive. The "metareplace" command will return: 

	metareplace -e d6 c1t16d0s2
	metareplace: beast: c1t16d0s2: No space left on device

and the functionality of data protection offered by the RAID5 device will have been compromised and a further disk failure will result in the loss of accessibility to data via "metareplace".


Workaround

Backup the data and metaclear(1M) the RAID5 device, replace the disk and metainit(1M) the RAID5 device and restore the data. 


Resolution

This issue is addressed in the following releases: 

SPARC Platform 

•Solstice DiskSuite 4.2.1 (for Solaris 8) with patch 108693-15 or later

I have installed the latest patch as suggested above, but I don't think this helps retrospectively.
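For reference, a rough outline of the rebuild the alert's workaround describes - a sketch only, assuming the same five slices, 32-block interlace and hsp001 association shown in the metastat above, and a hypothetical UFS filesystem mounted at /data:

Code:
# back up the filesystem living on d7 first (paths are hypothetical)
ufsdump 0f /backup/d7.dump /data
umount /data

# tear the RAID 5 down and rebuild it with the replacement disk in place
metaclear d7
metainit d7 -r c1t4d0s7 c1t5d0s7 c1t8d0s7 c1t9d0s7 c1t10d0s7 -i 32b
metaparam -h hsp001 d7

# recreate the filesystem and restore the data
newfs /dev/md/rdsk/d7
mount /dev/md/dsk/d7 /data
cd /data && ufsrestore rf /backup/d7.dump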




 
Wow, good find - I've never run into this one before. Well, I figured you had covered most of your bases, but it never hurts to have someone double-check things.

Sounds like you are going to need to blow it away and rebuild...boooo.



Chris
 
Thanks Chris - have a star for the advice and your persistence! I'll post back when I've done the rebuild, but I'm seriously considering replacing the RAID 5 with two mirrors and a concatenated volume, doubling the redundancy we have and letting me add a disk as a hot spare.
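One way that layout might look in SVM terms - purely a sketch, with hypothetical submirror names, assuming the same five slices currently in d7 and that d7 has already been cleared:

Code:
# two submirrors, each a concatenation of two slices
metainit d71 2 1 c1t4d0s7 1 c1t5d0s7
metainit d72 2 1 c1t8d0s7 1 c1t9d0s7

# one-way mirror on the first concat, then attach the second to resync
metainit d7 -m d71
metattach d7 d72

# the fifth disk goes into the existing (empty) hot spare pool
metahs -a hsp001 c1t10d0s7
metaparam -h hsp001 d71
metaparam -h hsp001 d72

The trade-off, of course, is usable capacity: two disks' worth instead of the RAID 5's four.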

 