Tek-Tips is the largest IT community on the Internet today!


lspv -u

Status
Not open for further replies.

w5000

Technical User
Nov 24, 2010
223
PL
How can I determine, from the client LPAR side, what type of storage (the real storage box/type) is mapped to it from the VIOS? I have no access to the VIOS.

Should a client LPAR using such VSCSI disks have a host attachment kit installed, so that there are valid ODM entries and lsdev shows the proper disk type? Right now I just see "Virtual SCSI Disk Drive", and the disks are not tuned well (the default queue_depth there is 3) because the ODM rules for those IBM disks are missing. I do guess these are IBM disks, as lspv -u shows unique_id strings with "FAStT03IBMfcp05VDASD03AIXvscsi
 
The vscsi disks are native to AIX, no host attachment kit required.
The queue depth of 3 is the default.
DON'T mess with the attributes unless you can make matching changes on the VIO server, or you may lose disk access...
 
hmmm,

are you sure that queue_depth=3 on VSCSI disks will not be a bottleneck if the real storage mapped on the VIOS has queue_depth=20?
 
Depends on how many VTDs are mapped to the adapter.
A queue depth of 3 on a vscsi client disk gives 85 LUNs per adapter with dual VIO, and a couple left over for the vscsi framework - the vscsi limit is 512 - dual VIO, 85 LUNs, three queues per, you do the maths...
Fewer LUNs will let you have more queue depth; too many LUNs / too much queue depth will prevent them from configuring properly, your choice ;)
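The "85 LUNs" arithmetic above can be sketched as follows. This assumes the commonly cited rule of thumb for AIX vscsi client adapters: 512 command elements per adapter, 2 reserved for the adapter itself, and each LUN consuming queue_depth + 3 elements (the extra 3 for error recovery). Treat it as an illustration of the poster's maths, not an official formula.

```python
def max_luns_per_vscsi_adapter(queue_depth: int) -> int:
    """Rough rule of thumb: 512 command elements per vscsi adapter,
    2 reserved for the adapter, queue_depth + 3 consumed per LUN."""
    command_elements = 512
    reserved = 2
    per_lun = queue_depth + 3
    return (command_elements - reserved) // per_lun

print(max_luns_per_vscsi_adapter(3))   # default queue_depth -> 85
print(max_luns_per_vscsi_adapter(20))  # -> 22
```

With the default queue_depth of 3 you get the 85 LUNs quoted above; raising queue_depth to 20 drops the per-adapter limit to 22 LUNs.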
 
but isn't queue_depth related to the storage vendor? With the default of 3 I observe huge sqfull counts in iostat.

besides, on a page there is a statement:

"With vSCSI and NPIV, it’s important to ensure that all disks are set with reserve_policy=no_reserve. With vSCSI, you should also check queue_depth on the VIO server for each hdisk and on the client as well. The client will most likely default to the SCSI value of 3 and you may need to increase this. Don’t make it higher than whatever it is on the VIO server."

so queue_depth should be tuned on both the VIOS and the client LPAR to get better I/O and the performance benefits of modern storage solutions

update: in the IBM redbook "IBM PowerVM Virtualization Introduction and Configuration", SG24-7940-05, there is a statement:

"The queue depth value for each disk using MPIO on the client partition, which determines how many requests the disk head driver will queue to the virtual SCSI client driver at any one time, must be configured to match the queue depth value used for the physical disk on the Virtual I/O Server. It must be changed using the chdev command as shown in the following example:

chdev -l hdisk0 -a queue_depth=20 -P"
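A minimal sketch of that workflow on the client LPAR, using AIX's standard lsattr/chdev commands (hdisk0 and the value 20 are just placeholders; match whatever the backing disk uses on the VIO server):

```shell
# Show the current queue_depth on the client disk
lsattr -El hdisk0 -a queue_depth

# Show the range of values the driver will accept
lsattr -Rl hdisk0 -a queue_depth

# Set it to match the physical disk on the VIOS; -P defers the
# change to the ODM only, applied at the next reboot (needed if
# the disk is in use, e.g. part of rootvg)
chdev -l hdisk0 -a queue_depth=20 -P
```

Without -P, chdev changes the device immediately, which fails if the disk is busy.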
 
Yeah, and with all that, what do you propose?
Bump it to a queue depth of 20 for each LUN, with 50 LUNs on a vscsi adapter - and then sit and watch as they don't configure...

Sorry, not sure I get your point.

If you want max transfer, bump the queue depth; just make sure you don't max out the adapter - configure more adapters / fewer LUNs per adapter as you increase the queue depth.

You'll have to try it to be sure how it works out...
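To make the trade-off concrete, here is a small check using the same rule of thumb as before (512 command elements per vscsi adapter, 2 reserved, queue_depth + 3 per LUN) - again an illustration, not an official formula:

```python
def fits_on_adapter(num_luns: int, queue_depth: int) -> bool:
    """True if num_luns at the given queue_depth fit within one vscsi
    adapter's 512 command elements (2 reserved for the adapter,
    queue_depth + 3 consumed per LUN)."""
    return num_luns * (queue_depth + 3) <= 512 - 2

print(fits_on_adapter(50, 20))  # 50 * 23 = 1150 > 510 -> False
print(fits_on_adapter(5, 20))   # 5 * 23 = 115 <= 510  -> True
```

So the 50-LUN / queue_depth=20 scenario mentioned above overruns a single adapter by more than a factor of two, while a handful of LUNs per adapter fits comfortably.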
 
I have fewer than 5 LUNs per NPIV client
 
IBM suggests increasing queue_depth when iostat shows high sqfull values - which is exactly what I observe here.

There is never a problem with queue_depth on physical servers (or the VIOS), because the corresponding drivers/software change the AIX defaults in the ODM (or add customization for the specific disk types).
 