LUN Sizes


eskolnik
Technical User
Sep 4, 2002
We are getting conflicting recommendations from HP on selecting LUN sizes for our HP-UX 11 boxes. The StorageWorks EVA formal documentation suggests using a single large virtual LUN, while the EVA HP installation folks say we should break a request for storage into multiple 30–40 GB virtual disks.

I was wondering if anyone out there has some practical experience one way or the other. I am also very interested in how you came up with the recommendation.

I have looked at the scsictl command to see if I should change the queue depth from the default of 8 to something higher for large LUNs, and at turning off queue depth flow control, if that is possible on HP-UX 11+ (I know I can do it on AIX with an ESS solution).
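
For reference, this is roughly what I have been trying (the device file below is just a placeholder for one of our virtual disks, and as far as I can tell the scsictl change does not survive a reboot):

  # show the current queue depth for the raw device
  scsictl -m queue_depth /dev/rdsk/c4t0d1

  # raise it from the default of 8 for a large LUN
  scsictl -m queue_depth=16 /dev/rdsk/c4t0d1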


Ed Skolnik
 
I work with the EVA side of this on an almost daily basis, but I have very limited knowledge of HP-UX. Those things considered, from what I know the traditional way to gain performance was to build a LUN on the host by requesting several storage LUNs and concatenating them together using the host OS volume manager, in this case LSM (?). Doing this spreads the I/O load over many spindles, and thus greater performance can be achieved.
However, with the EVA this is already done inside the controllers. One of the main, if not _the_ main, points of the EVA is the virtualization, whereby all LUNs are spread out over all physical disks (unless you divide it up into several disk groups, which is normally not recommended).

With this approach, it is no longer necessary to request several smaller LUNs from the storage and concatenate them in the OS. This of course leads to easier administration, and probably better performance still, since the OS doesn't have to do any volume management.
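
As I said, my HP-UX knowledge is limited, but as I understand it the host side would then reduce to plain LVM setup on one big virtual disk, something like this (device file names are placeholders, so check the syntax against your man pages):

  pvcreate /dev/rdsk/c4t0d1
  mkdir /dev/vg01
  mknod /dev/vg01/group c 64 0x010000
  vgcreate /dev/vg01 /dev/dsk/c4t0d1
  lvcreate -L 204800 -n lvol1 /dev/vg01    # example: 200 GB (-L takes MB)

No striping or concatenation in the OS, since the EVA already spreads the LUN over all the spindles in the disk group.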
/charles
 
Charles,
I do agree with your statement to a point, but an OS can only queue so many I/Os to a single device. If the device (virtual LUN) is fast enough then this really isn't an issue, but at this point I don't know if the OS and HBA are fast enough, or what the breaking point would be.

Ed

Ed Skolnik
 
I don't know the EVA internals, but you say you have HP-UX 11 boxes. If you create only one LUN, how will you share that LUN? You would need at least 11 LUNs, and the size of those LUNs will depend on the capacity needed on a per-server basis.

Hope this helps.
 
No, I have HP-UX Release 11... not 11 machines.



Ed Skolnik
 
Let's try an example.

Let's say the "user" needs 200 GB of space.

One thought process is to create a single 200 GB LUN and present it to the HP-UX box.

Another thought process is to create ten (10) 20 GB LUNs and present them to the HP-UX box.

Now we all know that an OS can only queue so many I/Os to a single device. If the device (virtual LUN) is fast enough then this really isn't an issue, but at this point I don't know if the OS and HBA are fast enough, or what the breaking point would be.
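
To make the ten-LUN case concrete, they would get striped together in LVM, something like this (device file names are placeholders):

  # ten 20 GB LUNs, presented here as c4t0d1 through c4t0d10
  PVS=""
  for i in 1 2 3 4 5 6 7 8 9 10
  do
      pvcreate /dev/rdsk/c4t0d$i
      PVS="$PVS /dev/dsk/c4t0d$i"
  done
  mkdir /dev/vg02
  mknod /dev/vg02/group c 64 0x020000
  vgcreate /dev/vg02 $PVS
  # stripe across all ten LUNs, 64 KB stripe size, 200 GB total
  lvcreate -i 10 -I 64 -L 204800 -n lvol1 /dev/vg02

The point being that each of the ten devices gets its own queue, so at the default depth of 8 the host can keep 80 I/Os outstanding in the second case versus 8 in the first.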

ed


Ed Skolnik
 
So my post was stupid... forget it, then.

Anyway, I know the CLARiiON, and with it we usually create different RAID groups and different LUNs according to the type of data that will be stored. For example, with Oracle we separate DB LUNs, index LUNs, redo log LUNs, and so on.

Hope this helps.
 
Charles,
Thanks, but I am looking at "performance" type issues, not logical grouping of data. But on your remark: do you create more than one DB LUN, or just one larger one, and if so, why?

Ed

Ed Skolnik
 
I'm not Charles, but I guess the question is for me...

When I say DB LUN I mean the "data" (the tables) and *only* the data, usually on one larger LUN spread over several disk drives (spindles). Then we put the "index" (database indexes) on another large LUN, and so on with the redo logs and archive logs.

We make this separation because the access patterns for data, index, redo, and archive are very different: data and index are typically around 80% reads and 20% writes, while redo and archiving are 100% writes. That kind of mix can kill the cache algorithm if all of it (data and redo, for example) lives on the same LUN, because the 100%-write redo stream will leave no cache for your OLTP access to data and index.

Usually, cache memory can be assigned on a per-LUN basis, or each LUN can be configured to use the cache or not. So the redo and archive LUNs can have the cache disabled (read and write) and rely on raw disk performance, while the data and index LUNs can have the cache enabled (R/W). You see? You have more control, so you can get better performance.
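
On the CLARiiON we do this from the Navisphere CLI. I am writing from memory, so please treat the exact flags as approximate and check the navicli documentation:

  # disable read and write cache on the redo/archive LUN
  # (LUN numbers and SP hostname are just examples)
  navicli -h spa_host chglun -l 22 -rc 0 -wc 0

  # leave caching enabled (R/W) on the data and index LUNs
  navicli -h spa_host chglun -l 20 -rc 1 -wc 1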

Hope this helps, and sorry for my English (I'm not a native English speaker).

Cheers.
 