This is from memory, but I did this stuff with both DiskSuite and Veritas all the time.
First, take the output of:
metastat > metastat.out
metastat -p > metastat.p.out
metadb > metadb.out
echo | format > format.out
Suppose c0t0d0 is still good and c0t1d0 is the bad one.
d0 is the root mirror, with d10 the good submirror on c0t0d0 and d20 the bad submirror on c0t1d0; d1 is the swap mirror, with d11 the good submirror and d21 the bad one; and so on.
1. metadb to remove the metadbs (state database replicas) from the failing disk, c0t1d0
(use metadb -d). Hopefully you have multiple copies elsewhere.
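A minimal example, assuming the replicas on the failing disk were on slice 6 (the same slice used in the metadb -a example below):
metadb -d c0t1d0s6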
2. metadetach each of the submirrors on the failing disk.
(metadetach d0 d20; metadetach d1 d21;...)
Now nothing should think it needs to write to the disk. Assuming the disk is hot-swappable, you can yank it out
and insert the new drive.
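Depending on the controller you may also want to unconfigure the device before pulling it and configure the replacement afterwards; the attachment point below is hypothetical, check cfgadm -al for the real one:
cfgadm -c unconfigure c0::dsk/c0t1d0
(physically swap the drive)
cfgadm -c configure c0::dsk/c0t1d0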
3. format: replicate the partition table from the good disk to the new disk. This will only work if the disks have the same geometry; otherwise you may have to tweak it by hand.
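If the geometry does match, one quick way to copy the label (a sketch, not something from my original notes) is prtvtoc piped into fmthard:
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2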
4. metadb to add the metadbs to the new disk. You can use the same slices you see in metadb.out; I always use the -c option to create multiple copies:
(if s6 was where your metadbs were created, it would be
metadb -a -c 3 /dev/rdsk/c0t1d0s6)
5. metattach each of the submirrors on the new disk:
metattach d0 d20
metattach d1 d21
etc.
I believe at this point you can run metastat and watch
the submirrors resync. A reboot is unnecessary, but if you do reboot, wait until the resyncs have all finished.
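A quick way to check on the resync, assuming the usual metastat output (it prints a "Resync in progress" line with a percentage):
metastat d0 | grep -i resync
metastat d1 | grep -i resync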
Some people metaclear after step 2 and re-metainit the devices after step 4. While this doesn't hurt (and you have the metainit parameters saved in metastat.p.out if you need them), the partition mapping hasn't changed, so as I remember it is unnecessary.
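If you do go that route, it would look roughly like this; I'm assuming simple one-slice concat submirrors with root on s0 and swap on s1, so take the real lines from your metastat.p.out instead:
metaclear d20
metaclear d21
(swap and partition the disk, re-add the metadbs)
metainit d20 1 1 c0t1d0s0
metainit d21 1 1 c0t1d0s1
metattach d0 d20
metattach d1 d21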
Good post Eugene, however I think the metadetach/metattach steps are unnecessary since the disk has failed and presumably is receiving no I/Os. Once the new disk is installed and partitioned, metareplace -e on each of the failed submirrors should do the trick, shouldn't it?
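Something like this is what I had in mind, assuming the root mirror's failed component is c0t1d0s0 and the swap mirror's is c0t1d0s1:
metareplace -e d0 c0t1d0s0
metareplace -e d1 c0t1d0s1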
I worked at Sun for about a year, doing hardware and software fixes, and I don't think I ever typed metareplace. The reason was simple.
In a 24x7, worldwide environment where machines might not be locally handy for a disk swap, once the bad disk is flagged by monitoring software or called out as failed, it is better to detach the mirror and run "clean" without it until the FE (or you) can physically swap the drive.
So yes, you are right: if you aren't under those constraints, I think metareplace will also work just fine.