
Optimal stripe configuration.


shmick (MIS), Nov 14, 2007
I've got 22 hdisks that I'm going to create a striped volume from, but I'd like to make sure that my striping config is going to be optimal.

The situation I'm dealing with is that the 22 disks really only come from 11 unique pools of disks on the backend storage system. So the question I have is this:

Since there are only 11 unique disk pools, would I see better performance by creating a striped volume using the first 11 disks and then extending the volume using the next 11 disks? My theory behind this strategy is that the data on the first 11-disk stripe will be different from the data on the second 11-disk stripe.

Am I on the right track here, or am I just overcomplicating my situation?

Thanks.

-- Steve
 
Could you please elaborate on your case? I'm not sure I understood what you mean. Are these SAN disks?

Regards,
Khalid
 
Yes, these are SAN disks. The SAN disk groups are made up of 8 146GB disks per group, and these groups then have LUNs created from them. So what I have is 11 groups of 8 disks each, and from each of those 11 groups I have 2 disks.

My idea was that I would create a striped volume with the first disk from each of the 11 groups, allowing me to isolate that set of striped data to those disks. I would then extend the volume with the next 11 disks, creating another stripe column.

What I'm trying to do is create a stripe using all 22 disks, but have the stripe be only 11 disks wide, so that it would look like:

1A 2A 3A 4A 5A 6A 7A 8A 9A 10A 11A
1B 2B 3B 4B 5B 6B 7B 8B 9B 10B 11B

This would equate to:

HDISK2 HDISK3 HDISK4 HDISK5 HDISK6 HDISK7 HDISK8 HDISK9 HDISK10 HDISK11 HDISK12
HDISK13 HDISK14 HDISK15 HDISK16 HDISK17 HDISK18 HDISK19 HDISK20 HDISK21 HDISK22
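
If I understand the AIX side right, that first row on its own would come from something like this (the VG name, LP count, and 64K stripe size are all made up for illustration):

mklv -y datalv -S 64K -u 11 datavg 110 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11 hdisk12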
 
OK, so what I understood from the above is that you have 8 disks in 11 groups (a total of 88 disks) and you want to create stripes out of them.

The way it should work is that you first create an array out of your unconfigured capacity. While creating the array, you will be asked whether you want to stripe your disks (RAID level) and then for the size of the LUNs. After finishing the array, you go to the mappings view and create your groups of LUNs.

I suggest that when you create your array you specify RAID 10, which is striping plus mirroring of your disks, or you can use RAID 0 for striping only, though of course that won't give you any redundancy. From there you go ahead and create your 11 LUNs, which will already be striped.

Regards,
Khalid
 
It depends on the load on the disk groups from other systems and the effectiveness of the SAN server's cache and caching algorithm, but I doubt you'd see a whole lot of difference in performance between striped, non-striped, striped across the 8-disk groups, ...


HTH,

p5wizard
 
Of course you'll see better performance if you're spreading the I/O out across multiple spindles; a single spindle is only capable of X amount of I/O compared to the sum of multiple spindles. But that's not what's being asked here.

What I have is 22 hdisks _total_ presented from the SAN. Disks hdisk2-12 all come from different spindle groups but hdisk13-23 share the same backend spindle groups as hdisk2-12.

So this means I have 22 disks sharing 11 spindle groups.

Now that I've cleared up the disks-to-spindle-groups math, here is what I'm trying to do: control the I/O to the spindles via my stripe layout. Rather than having the stripe write across all 22 disks, which would effectively hit the same spindle groups twice on each pass, I'd like the stripe to only hit 11 disks at a time.

So, back to the original question.

If I do a 'mklv -u 11' and then an 'extendlv -u 11', will this effectively create 2 stripe columns?
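
Continuing the made-up mklv example from my layout post above (datalv in datavg, stripe width 11), I'm picturing the extend step like this, assuming AIX 5.3's striped-column support and that the upperbound has to grow to 22 before the next 11 disks can form their own column:

extendlv -u 22 datalv 110 hdisk13 hdisk14 hdisk15 hdisk16 hdisk17 hdisk18 hdisk19 hdisk20 hdisk21 hdisk22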

 
That's just it, they're not spindles, they're hdisks backed by LUNs which are already RAID-5'd across multiple disks inside the SAN server...

I've never had to worry about striped LVs and their setup in a SAN environment (and probably will never have to).

On writing, you'll write to the SAN server's write cache, end of story for your application and for AIX. It then becomes the SAN server's problem to write the data out to the disk or disk groups. On reading, the SAN server first needs to read from the disk or disk groups into the read cache before the data is sent back to your server. So it is mostly the size of the cache in the SAN server and effectiveness of its caching algorithm and the cache-friendliness of your application which will determine the overall SAN performance...


Back to your question though: From what I've read, I'd specify the physical volumes in the mklv command in the order you would want the stripes spread across them.



HTH,

p5wizard
 
What p5 is saying is correct, and BTW, 999 times out of 1000 he's spot on with his answers. In the SAN you will have RAID-5 groups of 36, 72, or 146GB disks which are (for want of a better description) lumped together into a LUN and then carved up and handed out. I take it you only have one SAN and are not mirroring across SANs, which could improve write times if you load balance.

What SAN is it? Maybe worth asking the supplier if they have any best practice guidelines for you to reference.

Mike

"Whenever I dwell for any length of time on my own shortcomings, they gradually begin to seem mild, harmless, rather engaging little things, not at all like the staring defects in other people's characters."
 
[ot]Thx for the vote of confidence Mike ;-)[/ot]
 
Yes, I'm well aware of how SANs work. SAN is not new to me. I've been managing SAN environments for nearly 10 years. LVM within AIX is new to me. I'm used to doing things with Solaris + Veritas VxVM.

The HP XP12000 (a rebranded Hitachi TagmaStore) we have is doing RAID 5 (7+1) with sets of 8 146GB disks in each parity group.

So in VxVM I could say "here are 22 disks, but only make the stripe 11 disks wide because I don't want to double-hammer the back-end devices." This is what I was trying to accomplish with AIX.
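
In VxVM terms I'd have done something roughly like this (disk group, volume name, size, and stripe unit all made up):

vxassist -g datadg make datavol 200g layout=stripe ncol=11 stripeunit=64k

and let vxassist build the 11 columns out of the 22 disks.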
 
So, if you had built 11 LUNs of size 20GB across the 11 disk groups instead of 22 LUNs of size 10GB across the same number of disk groups, what would be the difference?
When you have filled the LV (or the filesystem on it) with data and there is a random I/O pattern, I don't think it would matter: you (or more correctly, the SAN server) WILL be hammering the backend disks for 20GB. (Example sizes.)

Striping only makes a difference IMHO if you have the disks attached locally to the server, when you really have to worry about throughput/bandwidth to the disks.

But you are free to choose which way to go - I rest my case.

myself said:
Back to your question though: From what I've read, I'd specify the physical volumes in the mklv command in the order you would want the stripes spread across them.

HTH,

p5wizard
 
Normally I would also give p5wizard a vote of confidence, because I am always amazed at the accuracy of his replies.
However, when you use a striped volume on a SAN you are double striping, which normally degrades write performance; results have usually shown that a spread filesystem will provide better performance than stripes.
Normally I would say go back, get 1 LUN for each array, and then spread across those.
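
By spread I mean the maximum inter-disk allocation policy rather than a stripe, along these lines (names and sizes made up):

mklv -y spreadlv -e x datavg 220 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11 hdisk12

The -e x tells LVM to allocate the partitions across as many of the listed disks as possible, so you still distribute the I/O but without a second layer of striping on top of the SAN's.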

P.S. Some generalized info:
8 disks sustained write at 10MB/s = 80MB/s
16 disks sustained write at 10MB/s = 160MB/s
1 x 1Gb HBA = 160MB/s
You need 2 x 2Gb HBAs to try to get 620MB/s, but depending on disk and overhead you may never get that.

Tony ... aka chgwhat

When in doubt,,, Power out...
 
As an aside, did you manage to get MPIO set up OK, or are you using HDLM?

This may help

faq52-6388




Mike

"Whenever I dwell for any length of time on my own shortcomings, they gradually begin to seem mild, harmless, rather engaging little things, not at all like the staring defects in other people's characters."
 
Thanks Mike. We use the HP-supplied ODM file (currently XPARRAY_MPIO_ODM_5400I), which allows the devices to be correctly detected when cfgmgr is run.

hdisk20 Available 04-00-02 XP MPIO Disk XP12000 (Fibre)

The only thing that needs to be done after that is to set round robin with:

chdev -a reserve_policy=no_reserve -a algorithm=round_robin -l hdiskX

I really wish there were a way to have reserve_policy=no_reserve and algorithm=round_robin set by default.
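
For now I just loop over the disks after cfgmgr, something like this (untested, keying off the XP MPIO description string):

lsdev -Cc disk | grep 'XP MPIO' | awk '{print $1}' | while read d
do
chdev -a reserve_policy=no_reserve -a algorithm=round_robin -l $d
done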
 