Drop file systems


dklloyd (MIS) - Mar 9, 2001 - GB
More on disks....
Having established that I'm going to have to back up all the filesystems in a volume group before replacing the disks (because there are no spare slots for the new, larger disks), is there a quick way of dropping the filesystems/logical volumes/volume group & RAID 0 set without having to go through every one in turn at each level and remove it? I want to re-create the volume group and logical volumes differently, so I don't want to preserve their current settings.
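
The rough sequence I had in mind (untested, and the volume group / filesystem names below are just placeholders) is to back up the group, unmount everything, and then export the whole group in one go rather than removing each filesystem and LV individually:

    # Back up the whole volume group first (tape device assumed here)
    savevg -f /dev/rmt0 datavg

    # Unmount every filesystem that lives in the group
    umount /data01
    umount /data02

    # Take the group offline and remove its definition from the system.
    # exportvg drops the LV and /etc/filesystems definitions in one shot;
    # it doesn't wipe the disks, but since they're being swapped out
    # that shouldn't matter.
    varyoffvg datavg
    exportvg datavg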

I will of course have to restore this lot onto the new disks once they're installed.

Also, under the RAID 0 configuration the stripe size can be set to a maximum of 64K, whereas in Logical Volume Manager the maximum stripe size is 128K. Should I set both to the same value (i.e. 64K), or ignore the setting in the volume manager?
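
For reference, the LVM-side value is the strip size given when the LV is created; if I go that route it would look something like this (assuming the -S flag on mklv, with placeholder names):

    # Striped LV across two disks with a 64K strip size,
    # matching the 64K stripe configured on the RAID adapter
    mklv -y stripedlv -S 64K datavg 50 hdisk2 hdisk3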

I'm going to use LVM to mirror the LVs onto a second RAID 0 set in the same volume group. Should I specify both the number of copies (2) and mirror write consistency (yes), or are these mutually exclusive? What are the best options here?

Many thanks for any help on this.
 
Number of copies and mirror write consistency are not mutually exclusive.

Number of copies is self-explanatory.

Mirror write consistency is a special log used to help ensure that all copies of a write to a mirrored block (within an LV) stay consistent. FYI, this log is located on the outer edge of the platter and is not relocatable. Keep that in mind when setting the location policy for your mirrored LVs: you can minimize thrashing between the data and that log by locating the LVs on the outer edge as well.
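
As a rough sketch (check the mklv man page first; the names here are just placeholders), both options plus the edge allocation policy can be set when the LV is created:

    # -c 2 : two copies (mirrored)
    # -w y : mirror write consistency on
    # -a e : intra-disk allocation policy = outer edge
    mklv -y datalv -c 2 -w y -a e datavg 100

    # For an existing LV, add the second copy and then sync it
    mklvcopy datalv 2
    syncvg -l datalv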
 
Many thanks Chapter11...
If I'm using RAID 0, does specifying the outer edge for an LV actually place the LV on the outer edge of all the physical disks in the RAID set? Also, I had been told that the outer edge is considered the slowest access area - is this true?

dklloyd
 
If you're using all IBM-certified hardware, then it should all obey the allocation policy.

The school of thought on what's the fastest area of a disk that *I* was taught is that the middle of the platter is usually the fastest, followed by the outer-middle. The issue is not so much rotational latency (at over 100 revolutions per second it shouldn't be) as the actuator's track-to-track access time. The actuator typically idles at the middle of the drive, so it has the best average response time to any track, which is why the "middle" is usually the fastest.

Because the MWC log is forced to the outer edge, writing to mirrored LVs means the actuator spends a lot of time on the outer edge, moving between the log and wherever the data is located. By moving the mirrored data to the outer edge as well, you minimize the amount of actuator movement.

Hard drives are usually the bottleneck in a machine because they are mechanical. Look inside a hard drive, and the actuator is the bottleneck of *that* machine, because it's the part that actually moves.
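
If you want to see where the partitions actually end up, the usual checks (from memory, with placeholder names) are:

    # Show how each region of the disk (outer edge, outer middle,
    # center, inner middle, inner edge) is allocated
    lspv -p hdisk2

    # Show an LV's distribution across its disks and how well it
    # matches its intra-disk allocation policy
    lslv -l datalv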
 