
A5100 failing disk w/ disksuite and raid 5


32333233 (Technical User) · Nov 19, 2004
Hello All,

Just wondering if anyone can help me with my issue.
I have an A5100 with DiskSuite and RAID 5. One of its disks is failing and I need to replace it hot. Does anyone know the step-by-step procedure for doing this?

My assumption is that I should use the luxadm commands, something like:

luxadm remove_device enclosure_name,[f|r]<slot#>
luxadm insert_device enclosure_name,[f|r]<slot#>

But I am not 100% sure about this; any feedback is greatly appreciated!


Cperez
 
Before beginning, I would check for metadevices (metastat) and metadb replicas (metadb) on the device, and remove any metadb replicas on that drive from the config.

Your procedure looks right so far; luxadm tells you when to remove or insert drives. "luxadm remove_device" often fails if the drive is in status "Bypassed AB" (check with "luxadm display <enclosurename>"). In that case you could try to shut down the drive via the front panel, pull it, plug it back in and spin it up. Sometimes it reconnects and you can then successfully run "luxadm remove_device ..." (which not only spins down the drive but also takes care of your device tree). Another option is "luxadm remove_device -F ..."; check the man page. A last resort is to insert the replacement drive and run "luxadm insert_device ...", then "luxadm remove_device ..." and again "luxadm insert_device ..." (service technicians here have sometimes used that procedure).

In a cluster environment you have to repeat the standard procedure on every connected node. Then partition the new drive and run "metareplace -e <metadevice> <device>" for each affected metadevice to resync your SDS RAIDs.
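A rough sketch of that whole sequence as commands. The enclosure name "macs1", slot "f1", the example devices and the s7 replica slice are only placeholders; substitute the real values that luxadm, metadb and metastat report on your system:

services# metadb -i                           # check whether any replicas sit on the failing drive
services# metadb -d c8t65d0s7                 # only if a replica lives on that drive
services# luxadm display macs1                # confirm the slot and watch for "Bypassed AB"
services# luxadm remove_device macs1,f1       # spins the drive down and prompts you to pull it
services# luxadm insert_device macs1,f1       # prompts for the new drive, then rebuilds the device tree
services# prtvtoc /dev/rdsk/c8t64d0s2 | fmthard -s - /dev/rdsk/c8t65d0s2   # copy the label from a healthy column
services# metareplace -e d30 c8t65d0s6        # resync the replaced RAID column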
 
... and double-check the slot number ("luxadm display <enclosure_name>") before you begin ... I think the numbering starts at 0, not 1.
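If you are not sure of the enclosure name either, a quick check (the name "macs1" below is just a made-up example):

services# luxadm probe             # lists the FC enclosures and their names
services# luxadm display macs1     # front slots show up as f0, f1, ... and rear slots as r0, r1, ...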
 
This is the problem: we cannot get the hot spare back into the spare pool.
So what's the next step? I am stuck in the mud!

services# metastat
d30: RAID
    State: Okay
    Hot spare pool: hsp001
    Interlace: 128 blocks
    Size: 711112905 blocks
    Original device:
        Size: 711114240 blocks
        Device         Start Block   Dbase   State   Hot Spare
        c8t64d0s6      1290          No      Okay
        c8t65d0s6      1290          No      Okay    c8t86d0s6
        c8t66d0s6      1290          No      Okay
        c8t67d0s6      1290          No      Okay
        c8t68d0s6      1290          No      Okay
        c8t69d0s6      1290          No      Okay
        c8t70d0s6      1290          No      Okay
        c8t80d0s6      1290          No      Okay
        c8t81d0s6      1290          No      Okay
        c8t82d0s6      1290          No      Okay
        c8t83d0s6      1290          No      Okay

d10: Concat/Stripe
    Size: 426673521 blocks
    Stripe 0: (interlace: 128 blocks)
        Device         Start Block   Dbase
        c0t3d0s6       0             No
        c2t0d0s6       0             No
        c2t1d0s6       0             No
        c2t2d0s6       0             No
        c2t3d0s6       0             No
        c7t0d0s6       0             No

hsp001: 3 hot spares
        c8t86d0s6      In use        71112735 blocks
        c8t85d0s6      Available     71112735 blocks
        c8t84d0s6      Available     71112735 blocks
 
Did you already run [tt]metareplace -e d30 c8t65d0s6[/tt] ?
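For reference, a short sketch of how that should play out, assuming the replacement disk came back as c8t65d0 and has been partitioned like the other columns:

services# metareplace -e d30 c8t65d0s6    # re-enable the replaced column; SDS resyncs it from parity
services# metastat d30                    # the column shows "Resyncing" until the rebuild finishes
services# metahs -i hsp001                # once resynced, c8t86d0s6 should show "Available" again

The hot spare should be released back to the pool automatically when the resync completes; it does not have to be moved by hand.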
 