
Almost 100% kernel CPU use with 3 dd commands running


Chacalinc

Hi,

We are doing some performance tests on an AIX 5.3 system (a P570 server with 8 CPUs, 27 GB RAM and 4 x 4 Gbps FC HBAs). The problem is that when we issue 3 concurrent dd commands, the kernel CPU usage (seen with nmon) goes to almost 100%. The storage is a Hitachi USP-V, accessed through HDLM 5.9.1.
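For the record, the test itself is nothing fancy; it is along these lines, with the hdisk names and block size below being illustrative rather than the exact devices we used:

[tt]
# three concurrent sequential reads straight from the raw disks
dd if=/dev/rhdisk10 of=/dev/null bs=256k &
dd if=/dev/rhdisk11 of=/dev/null bs=256k &
dd if=/dev/rhdisk12 of=/dev/null bs=256k &
wait
[/tt]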

Any ideas what could be happening?

Thanks in advance!

Chacal, Inc.[wavey]
 

What version of nmon are you using, and can you show the output, including memory, disk, proc, etc.?
 
How can I get the output of nmon?

Thanks!

Chacal, Inc.[wavey]
 
Hmm... I read the following:

The default queue depth for an AIX system that does not have the ODM update is 1. This allows only one I/O to be "outstanding" and places a bottleneck at the host. A queue depth of 1 should only be used on a port where there are more than 128 LUNs mapped (e.g. a host that backs up 128 -> 256 shadow image volumes).

For most customers a queue depth of 2 is optimal, as it allows them to grow the number of LUNs per port to 128 (at which point the bandwidth of a 1 Gbps / 2 Gbps port would likely be insufficient). A queue depth of 2 proves optimal for most customer workloads.

Increasing the queue depth may in fact decrease performance because of increased CPU time spent managing the I/O queue. When the queue depth is set higher the host must keep track of each outstanding I/O.

A queue depth of 4 may often be appropriate in a database environment, especially if there are multiple database instances sharing a common table space. In this case they are limited to 64 LUNs per port. They will likely see a small increase in CPU load attributable to the queue depth increase. However, you must not set the queue depth to a higher number on only one host (or one LUN); you need to make sure that the queue depth for all LUNs mapped to a port is the same.

A queue depth of 8 may also be appropriate for some workloads. Queue depths greater than 8 will not improve performance and will actually degrade it.

The queue_depth was set to 8; it is now set to 2, and the CPU usage went down, but performance dropped as well (from 510 to 430 MB/s). So I think I am going to leave the queue depth at 4.
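For reference, on AIX the hdisk queue depth is checked and changed roughly like this (hdisk146 is just the example disk used later in this thread; with HDLM in the picture the attribute may have to be set on each underlying hdisk, and -P defers the change until the device is reconfigured):

[tt]
# show the current queue depth for one disk
lsattr -El hdisk146 -a queue_depth

# set it to 4; -P applies the change at the next reboot/cfgmgr of the device
chdev -l hdisk146 -a queue_depth=4 -P
[/tt]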

Thanks anyway!

Chacal, Inc.[wavey]
 
khalidaa said:
Changing the queue depth is the last thing I would recommend you do! Have you checked the LTG for the disks? You might not be running with the optimal LTG!

Hmm... that's a good question. I didn't know about this, since I'm not an AIX guy; I'm the storage one.

The tests are done with the "dd" command, using block sizes of 128K and 256K; both take almost the same time, but the CPU went to almost 100% (with a queue depth of 8). Do you think that by adjusting the LTG to 128K or 256K (matching the block size used for the test) the CPU usage should go down?

Just to keep in mind: using the "dd" command is just for testing. Eventually, this server is going to host an Oracle database for OLTP (during the day) and data warehousing at night.

What would be your recommendation for both environments (the dd test and then Oracle)?

Thank you very much for your information, it's very good!

Chacal, Inc.[wavey]
 
Chacal,

There is plenty of tuning you can do for the file systems and the LVs (like increasing the number of queues used for writes/reads, Direct I/O, sequential read-ahead and sequential/random write-behind, maybe load balancing the data using striping, and others). All of this depends on your application and how you would like it to respond for certain processes.

I recommend that you first look into the LTG size, to be able to get the maximum read/write per transaction to the disks, and then look at the LV and filesystem queues!
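For reference, a few of those knobs can be inspected from the command line; the tunable names below assume AIX 5.3 with JFS2, and vgname and /yourfs are placeholders:

[tt]
# JFS2 sequential read-ahead / write-behind tunables
ioo -a | grep -E "j2_maxPageReadAhead|j2_minPageReadAhead|j2_nPagesPerWriteBehindCluster"

# per-volume-group LVM pbuf settings (AIX 5.3 and later)
lvmo -v vgname -a

# example: mounting a JFS2 filesystem with Direct I/O, bypassing the file cache
mount -o dio /yourfs
[/tt]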

As indicated in the link above, have you tried this command?

/usr/sbin/lquerypv -M hdiskX

Get the output of this command and compare it with the LTG size of your volume group:

lsvg vgname

You will get an attribute called "LTG size". If this is not the same as the output of the above command (lquerypv), then you have to change it to match.

You can change this using chvg -L (if I remember the command correctly), and the volume group has to be varied off for the change to take effect.
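Putting that together, the whole check-and-change sequence would look something like this (hdiskX and vgname are placeholders, the LTG value is in kilobytes, and on AIX 5.3 the LTG is dynamic, so varyonvg may recalculate it anyway):

[tt]
# what the disk supports
/usr/sbin/lquerypv -M hdiskX

# what the volume group is currently using (look for "LTG size")
lsvg vgname | grep -i "LTG size"

# change it to match, e.g. 256 KB, with the volume group varied off
varyoffvg vgname
chvg -L 256 vgname
varyonvg vgname
[/tt]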

Have a look into this for database tuning:


Please let me know the results once you do that.

Regards,
Khalid
 
[tt]root [server]% /usr/sbin/lquerypv -M hdisk146
256

root [server]% lsvg vgtest02
VOLUME GROUP:       vgtest02                 VG IDENTIFIER:  00c6045c00004c00000001175a580f4d
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      4122 (527616 megabytes)
MAX LVs:            256                      FREE PPs:       0 (0 megabytes)
LVs:                2                        USED PPs:       4122 (527616 megabytes)
OPEN LVs:           2                        QUORUM:         2
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     30480
MAX PPs per PV:     5080                     MAX PVs:        6
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
[/tt]

According to those commands, they are aligned.

BTW, very good reading! Thanks for that document.

I guess there actually isn't a problem; maybe the disk array is fast enough to satisfy the I/O faster than the CPU expects! I mean, with iostat, the service time for 128K or 256K reads or writes is as high as 0.9 ms, and the wait time is 0 ms.
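For completeness, those service times come from the extended per-disk statistics; something like the following, assuming AIX 5.3's iostat -D option and hdisk146 as one of the test disks:

[tt]
# extended per-disk statistics: 5-second intervals, 3 samples
iostat -D hdisk146 5 3
[/tt]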

cheers.

Chacal, Inc.[wavey]
 
Try vmstat -v before and after your tests and monitor the changes in:

pending disk I/Os blocked with no pbuf
paging space I/Os blocked with no psbuf
filesystem I/Os blocked with no fsbuf
client filesystem I/Os blocked with no fsbuf
external pager filesystem I/Os blocked with no fsbuf

This will give you an idea of whether you need to change the LV or FS queue buffers!
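A minimal way to capture that before/after comparison (the file names are arbitrary):

[tt]
vmstat -v > /tmp/vmstat.before
# ... run the dd tests ...
vmstat -v > /tmp/vmstat.after
diff /tmp/vmstat.before /tmp/vmstat.after
[/tt]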

Regards,
Khalid
 
Thanks, Khalid, for your post. I am going to do it and I'll post the results.

Thanks again for your support!

Regards,

Chacal, Inc.[wavey]
 
I am not sure this is a problem. AIX will apportion the CPU based on the workload. If you only have a couple of jobs, those jobs will get the CPU time.
 