High IO Wait!!! 2

khalidaaa

Technical User

Hi Gurus,

Lately I've been having a problem with one of our servers: it shows constant high IO wait, as shown below:

Code:
# vmstat 1
System Configuration: lcpu=1 mem=2048MB
kthr     memory             page              faults        cpu     
----- ----------- ------------------------ ------------ -----------
 r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa 
 1  2 409176 123834   0   0   0 341 1157   0 1203 85628 3738 21 20 52  7
 0  1 409180 123830   0   0   0 2613 45774   0 615 4536 3885 19  8  0 73
 0  6 409180 123831   0   0   0 2692 53878   0 602 4175 3941 12  5  0 83
 1  0 408936 124073   0   0   0 1505 36609   0 557 3655 2345 13  7  0 80
 1  0 408936 124078   0   0   0 238 5456   0 421 14732 2843 48  3  0 49
 1  0 408936 124071   0   0   0 131 1344   0 451 6569 3320 36  3  0 61
 0  1 409180 123834   0   0   2  88 2376   0 500 8147 5644 32  9  0 59
 0  1 409180 123834   0   0   0   0    0   0 426 4212 3315 25  7  0 68
 0  1 409180 123834   0   0   0   0    0   0 450 5914 6163 15  7  0 78
 3  0 409191 123819   0   0   0  72 1633   0 500 6695 2504 27  5  0 68
 1  1 409191 123824   0   0   0   9  286   0 382 16289 4880 62  7  0 31
 0  1 409191 123824   0   0   0   0    0   0 398 5731 5404 46  3  0 51
 2  1 409191 123824   0   0   0   0    0   0 440 4135 3389 15  4  0 81
 0  1 409191 123823   0   0   0   0    0   0 480 6328 6123 27  4  0 69
 0  2 409216 123799   0   0   0 185 2276   0 771 3938 2184 11  2  0 87
 1  1 409216 123799   0   0   0   0    0   0 668 18927 5324 78 14  1  7
 0  1 409219 123796   0   0   0   0    0   0 521 4202 3293 19  3  0 78
 0  1 408975 124041   0   0   0   9   17   0 435 4956 4699 21  4  0 75
 0  1 408975 124035   0   0   0  64  535   0 641 5219 4591 20  9  0 71
 0  5 408975 124035   0   0   0   9   32   0 533 6426 5376 19  6  0 75
kthr     memory             page              faults        cpu     
----- ----------- ------------------------ ------------ -----------
 r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa 
 1  0 408975 124039   0   0   0  10   40   0 319 18123 3911 69 10  0 21
 0  2 408975 124041   0   0   0   9  334   0 497 5316 4539 28  4  0 68
 0  1 408975 124039   0   0   0   0    0   0 496 6276 5050 38  4  0 58
 0  1 408996 124014   0   0   0  60 1700   0 635 5517 4588 14  5  0 81
 0  1 409021 123995   0   0   0  55 1199   0 620 6931 4886 27  7  0 66
 1  1 409043 123967   0   0   0  53  527   0 579 19974 4732 76  7  0 17
 3  0 409066 123948   0   0   0  90 2472   0 621 6163 4192 40  6  0 54
 0  1 409106 123904   0   0   0 204 5459   0 716 5472 3855 23  4  0 73
 0  1 408866 124145   0   0   0 144 2158   0 723 7554 6014 33  9  0 58
 0  1 408871 124141   0   0   0  71  873   0 1533 7859 4898  9 11  0 80
 0  1 408876 124135   0   0   0 138 3002   0 1529 22091 6251 54  8  0 38
 0  2 408876 124134   0   0   0 603 13920   0 745 12588 4285 54  6  0 40
 0  2 408881 124127   0   0   0  18  141   0 627 7805 5727 37  8  0 56
 1  0 408881 124133   0   0   0   9   46   0 744 5075 3707 14  8  0 78
 2  1 408902 124106   0   0   0  28   65   0 915 7763 6668 29  6  0 65
 1  1 408912 124096   0   0   0   0    0   0 408 9884 3023 37  6  0 57
 0  1 408912 124096   0   0   0   0    0   0 365 12311 4133 61  5  0 34

Code:
# iostat hdisk0 1

System configuration: lcpu=1 disk=12

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0         17.1              21.0     19.6       52.3       7.1     
                " Disk history since boot not available. "


tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        247.5              28.7      5.9        0.0      65.3     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          99.0     1239.6     145.5          0      1252

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        1194.9              20.2     12.1        0.0      67.7     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0         100.0     1454.5     136.4          0      1440

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        292.0              33.0      6.0        0.0      61.0  

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          90.0     1673.0     143.0          0      1673

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        1014.0              13.0      3.0        0.0      84.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          99.0     1416.0     139.0          0      1416

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        292.0               9.0      4.0        0.0      87.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          98.0     1041.0     151.0          0      1041

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        892.0              15.0      3.0        0.0      82.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          97.0     1444.0     137.0          0      1444

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        288.1              11.9      7.9        1.0      79.2     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          98.0     1378.2     145.5          0      1392

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        733.0               8.0      3.0        0.0      89.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0         100.0     1377.0     134.0          4      1373

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        291.0              16.0      4.0        0.0      80.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          99.0     1493.0     135.0          0      1493

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        792.0              12.0     28.0        0.0      60.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          98.0     1369.0     130.0          0      1369

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        291.0              13.0      4.0        0.0      83.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0         100.0     1556.0     130.0          0      1556

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        809.0               6.0      3.0        0.0      91.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          99.0     1460.0     126.0          0      1460

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        291.0              43.0      5.0        0.0      52.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          87.0     1201.0     144.0         12      1189

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        865.0              39.0      5.0        0.0      56.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          96.0     982.0     117.0          4       978

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        290.0              10.0      4.0        0.0      86.0     

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          99.0     1481.0     132.0          0      1481

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.0        768.0              20.0      2.0        0.0      78.0

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0          94.0     1282.0     130.0          0      1282

I believe that one of the LVs on that disk was causing the problem! I shifted this LV (with migratepv) to another disk, and now I'm having the same problem with that disk! The LV contains a database data file.

One more thing that scares me is this process:

Code:
# ps aux | more
USER       PID %CPU %MEM   SZ  RSS    TTY STAT    STIME  TIME COMMAND
root       516 60.1  0.0   16   16      - A      Oct 27 27449:34 wait

Any help is appreciated

Thanks

Regards,
Khalid
 
Try using filemon to dig a little deeper.
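
Assuming the perf tools fileset is installed, a minimal filemon run along these lines will show the busiest files, LVs and PVs (the output path is just an example):

Code:
# filemon -o /tmp/fmon.out -O lv,pv   # start tracing logical- and physical-volume activity
# sleep 60                            # let it sample while the box is busy
# trcstop                             # stop the trace; the report is written to /tmp/fmon.out
# more /tmp/fmon.out                  # look for the "Most Active Logical/Physical Volumes" sections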

Mike

"Whenever I dwell for any length of time on my own shortcomings, they gradually begin to seem mild, harmless, rather engaging little things, not at all like the staring defects in other people's characters."
 
khalidaaa,

Don't worry about the 'wait' process. They (you probably have more than one) just eat processor cycles that aren't needed by any running processes. On a database server, this is going to be a pretty good chunk of cpu time, since disk is the limiting factor for performance.

As for your IO problem: if it's a single database file on the system, you should try to get it spread across more physical drives. You can either spread or stripe the LV, or there may be a method internal to the database software for breaking the file into parts, which you can then place on different drives.
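
If more spindles ever become available, one way to do the spreading at the LVM level is a striped LV. This is only an illustrative sketch (the LV name, size, mount point and disks are hypothetical, and the flags are worth double-checking against man mklv):

Code:
# mklv -y oradatalv -t jfs2 -S 64K rootvg 60 hdisk0 hdisk1   # 60 PPs striped across both disks, 64 KB strip size
# crfs -v jfs2 -d oradatalv -m /ora3/oradata -A yes          # JFS2 filesystem on top
# mount /ora3/oradata                                        # then relocate the datafile with the database down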

- Rod


IBM Certified Advanced Technical Expert pSeries and AIX 5L
CompTIA Linux+
CompTIA Security+

Wish you could view posts with a fixed font? Got Firefox & Greasemonkey? Give yourself the option.
 
Looks like it is because of hdisk0. If you can't use filemon because the perf fileset isn't installed, you can use lvmstat to identify the LV that is getting hit constantly and then figure out from there what you want to do.

What is the server being used for? What does 'vmstat -I' show?
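
If you go the lvmstat route, note that statistics have to be enabled for the volume group first; a typical sequence (interval and count values are just examples) looks like this:

Code:
# lvmstat -v rootvg -e      # enable LVM statistics collection for rootvg
# lvmstat -v rootvg 5 6     # busiest LVs in the VG, every 5 seconds, 6 reports
# lvmstat -l <lv_name> 5    # drill into one LV, partition by partition
# lvmstat -v rootvg -d      # turn collection off again when finished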
 
Thanks guys for your replies.

Unfortunately I'm on vacation today, so I won't be able to check the above suggestions!

I used lvmstat previously, but to be honest I couldn't interpret its output! I do know which of the LVs is creating the problem, though!

I had the Oracle application and the database on the same disk, which was hdisk1 (plus there was one more database listed with them), so I concluded that I would have to move those database files to a separate disk! Unfortunately, everything I'm describing is located in one VG (rootvg), because this LPAR doesn't have access to our SAN and is using the two local disks of the expansion unit on our p5570 machine!

So now hdisk0 holds the two (heavily used) LVs (/ora1/data/ and /ora2/data/).

I know that the "wait" process appears in the ps aux output to show that when the processor is idle it goes into wait, but I still don't see why it should be there while the processor is not idle at the time!

Regards,
Khalid
 
Hi

Are your Oracle LVs providing filesystems, or are they raw LVs? If they are providing filesystems, it might be worth your while to look at "fileplace" - if the files are heavily fragmented, that would cause more disk activity than would be necessary if the files were contiguous. If they are raw LVs, then I pass. With Informix and raw LVs it is possible to defragment tablespaces by unloading and reloading tables with new extent settings.

HTH



Kind Regards,
Matthew Bourne
"Find a job you love and never do a day's work in your life.
 
khalidaaa,

"iowait" in iostat and "wa" in vmstat really mean that the processor is idle, but wouldn't be if the disk wasn't so slow. The "wait" process accumulates all processor cycles in both idle/id and iowait/wa.

Also, the "wait" process runs continuously from boot, and the processor time reflects this.
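
If you want to see that split directly, sar (assuming the accounting tools are installed) reports %wio separately from pure idle:

Code:
# sar -u 5 3    # columns are %usr %sys %wio %idle; %wio is idle time with disk I/O still outstanding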

- Rod
 
With Oracle in rootvg there are too many writes, which is why hdisk0 shows ~100% activity. The filesystem is being hit too hard with log writes and probably database writes at the same time. You also have blocked processes, which is probably because IO backs up when the disk is being hit that heavily. The scan/free rate is too high at times, too.

In reality, with only rootvg and internal disks, the best you can do is use migratelp to rearrange your disks. That way, when you look at the LVs, if one is spread across, say, the inner edge, center, and inner middle, you should consolidate it into one region, say the center. Your paging space should be in the center, and the log file should be in the center region too, since this is the fastest area of the disk. If the Oracle IO is sequential, you might want to put it on the outer edge region of the disk.
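
A quick way to see how each LV currently sits across those regions (the DISTRIBUTION column lists PP counts for outer edge : outer middle : center : inner middle : inner edge):

Code:
# lsvg -l rootvg        # every LV in rootvg with its type and mount point
# lslv -l <lv_name>     # per-PV copies, IN BAND percentage and region DISTRIBUTION
# lspv -p hdisk0        # region-by-region map of the whole disk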

Hope that helps.
 
Thank you all for your valuable replies

mbourne2006,

I'm not using raw LVs; they are LVM filesystems. I moved the LV from hdisk1 to hdisk0, so wouldn't that rearrange the LVs in the new place? I thought they would be defragmented by doing this!

RodKnowlton,

Thanks for the continuous support. Now I understand it better! So it is not a bottleneck on the CPU but on the disk!

kHz,

What you said is very interesting, my friend! Just to mention that hdisk0 wasn't the one at 100% activity originally! It was hdisk1; then I moved the LV (using migratepv) to hdisk0 and the same poor performance carried over with it, unfortunately! This is the distribution of my LVs:

Code:
# lslv -l lv1
engp-lv1:/ora1/oradata/fs1
PV                COPIES        IN BAND       DISTRIBUTION  
hdisk0            030:000:000   100%          030:000:000:000:000

Code:
# lslv -l lv2
engp-lv2:/ora2/oradata/fs2
PV                COPIES        IN BAND       DISTRIBUTION  
hdisk0            022:000:000   100%          022:000:000:000:000


Both are positioned as middle and ranged as minimum.
 
Run 'lspv -p hdisk0' and 'lspv -p hdisk1' and this will show the layout of the LVs on the physical volumes. If you see that engp-lv1 is spread across, say, the outer edge for some partitions, the inner middle for some, and the center for others, then it will cause somewhat of a performance hit compared to having engp-lv1 all in the same region, say the center.

Outer-middle and inner-middle have reasonably good seek times. Outer-edge and inner-edge have the slowest average seek times. Center has the fastest average seek times but has the fewest partitions available. Paging space should be at the center if you have lots of paging activity. Dump LVs are used infrequently and should be at the beginning or end of physical volumes.

If you run 'lvmstat -v rootvg' you can see which LVs are at the top, then look at the placement from 'lspv -p hdisk1' and try to rearrange using migratelp.

You can see which specific partitions are being used by an LV on a PV by running 'lslv -p hdisk1 engp-lv1'; the partitions used by that LV show up numbered (0001 0002 0003 ...), those marked FREE are open for use, and those marked USED are occupied by other LVs. You need this information when using migratelp so you can place the partitions where you want them and make sure there is contiguous space on the PV.

Because you have only the two internal disks, this is probably the only thing you can do to try and eke out some performance gain. It won't cure all your problems, but it might help you get better read and write performance if you can defragment the LVs and put them on one region of the disk.

Hope that helps or is clear enough.
 
khalidaaa,

Considering the time you've already spent on this, the time you'll need to spend, and the modest gains it'll get you, you can probably make the business case to buy some more disks to put in that expansion unit. They wouldn't cost that much and would help things out a lot more than defragmenting logical volumes.

Sometimes management can forget that an employee's time has monetary value, and with storage getting cheaper by the day it's often easy to buy your way out of a disk problem.

That's my two cents, at least. :)

- Rod



IBM Certified Advanced Technical Expert pSeries and AIX 5L
CompTIA Linux+
CompTIA Security+

Wish you could view posts with a fixed font? Got Firefox & Greasemonkey? Give yourself the option.
 
Hello again

My understanding of the "migratepv" command you've used suggests to me that your LV will have been copied across on a partition-by-partition basis. This work is carried out at the LVM layer, and does not involve JFS or JFS2. To find out the state of a file on a disk, run the command:

fileplace -p $filename

and you will get a nice report to show you how contiguous your file is. Pick the biggest 5 files in the directory, and report against them. I've seen overnight backup speeds increase from ~30MB/s to ~100MB/s by sorting out heavily fragmented Oracle datafiles ...
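
To pick out those five files quickly, a ksh one-liner along these lines works (the directory is just an example, and it assumes no spaces in the filenames):

Code:
# cd /ora1/oradata/<SID>                                  # the datafile directory
# ls -l | sort -rn -k5,5 | head -5 | awk '{print $NF}'    # the five largest files by size
# fileplace -p <one_of_the_files_listed>                  # then report on each of them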

HTH

HTH

Kind Regards,
Matthew Bourne
"Find a job you love and never do a day's work in your life.
 
Rod is right...

Basically it comes down to separating the Oracle DB from rootvg. You always want Oracle to be on its own VG so the OS isn't fighting with the application for resources, and in this case the application is the 2-ton monster Oracle. We went ahead and bought a new SCSI disk array system this year solely for our Oracle DBs. There are still SSA and fiber disk array systems out there as well. Leave the internal disks for the OS.
Also be sure to allocate enough memory for Oracle.
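
For what it's worth, once extra drives are available, carving out a dedicated VG is only a few commands; a rough sketch with hypothetical disk, LV and mount-point names:

Code:
# mkvg -y oravg hdisk2 hdisk3                       # new volume group on the new drives
# mklv -y oradatlv -t jfs2 oravg 200 hdisk2 hdisk3  # data LV spread over both spindles
# crfs -v jfs2 -d oradatlv -m /oradata -A yes       # filesystem on top, mounted at boot
# mount /oradata                                    # then move the datafiles over with the database down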
 
backup speeds increase from ~30MB/s to ~100MB/s by sorting out heavily fragmented Oracle datafiles ...

Matthew, I'd be interested in how you 'sorted out' the files - is it simply a matter of copying them elsewhere, deleting the originals and copying back? With the database closed of course!!



Alan Bennett said:
I don't mind people who aren't what they seem. I just wish they'd make their mind up.
 
Hi Ken

Yup, spot on. Our customer was fortunate enough to have sufficient SAN space available to him to close the database, copy the files to another filesystem, and copy back again into an empty FS.
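
A bare-bones sketch of that copy-out/copy-back for a single file, assuming the database is cleanly shut down and the staging filesystem has room (all paths are placeholders):

Code:
# cp -p /ora1/oradata/<SID>/<datafile>.dbf /stage/    # copy out, preserving ownership and permissions
# rm /ora1/oradata/<SID>/<datafile>.dbf
# cp -p /stage/<datafile>.dbf /ora1/oradata/<SID>/    # copy back into (hopefully contiguous) free space
# fileplace -p /ora1/oradata/<SID>/<datafile>.dbf     # confirm the new layout before restarting Oracle

Copying everything out and back into a freshly emptied filesystem, as described above, is cleaner if you have the space for it.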

The problem had been "built in" to the solution by running multiple parallel file creation processes, rather than lots of serial ones. Some of the worst files had in excess of 2M disjointed fragments.

We checked the speed of the underlying disk using dd:

dd if=/dev/hdiskx of=/dev/null bs=128k count=? (can't remember)

and then compared with a file

dd if=/oracle/<SID>/$restofpath&filename of=/dev/null bs=128k etc

The huge difference that we saw led us to use fileplace to check for fragmentation.

Anyway, I hope that helps :)

Kind Regards,
Matthew Bourne
"Find a job you love and never do a day's work in your life.
 
Many thanks, Matthew. I'll see what we can do with this. You may see another thread relating to defragfs which I opened, but that seems a route to problems without the IBM fix.

Alan Bennett said:
I don't mind people who aren't what they seem. I just wish they'd make their mind up.
 
Wow,

Thank you very much, guys, for helping with this matter. I thought of giving out stars, but I can see somebody already gave one to mbourne2006, which I think he really deserves :)

Now back to the problem:

Here is the fileplace output:

Code:
# fileplace -m engp-lv1
Device: /dev/engp-lv1   Partition Size: 256 MB      Block Size = 4096
Number of Partitions: 30   Number of Copies: 1

  Physical Addresses (mirror copy 1)                                   Logical Fragment
  ----------------------------------                                   ----------------
  3932704-5898783  hdisk0      1966080 blocks 8053063680 Bytes,  100.0%    0000000-1966079

I didn't know which file inside the database data directory to choose, so I picked this one out of these:

Code:
-rw-r-----   1 oracle8i dba       178262016 Dec 01 21:13 ENGPTEMP.dbf
-rw-r--r--   1 oracle8i dba         7032832 Dec 02 16:56 ENGP_ctl1.ctl
-rw-r--r--   1 oracle8i dba         1049088 Aug 30 07:36 ENGP_log1.ora
-rw-r--r--   1 oracle8i dba        62918656 Dec 02 12:00 ENGP_rbs2.dbf
-rw-r--r--   1 oracle8i dba      2008027136 Dec 02 16:00 ENGP_sys1.dbf
-rw-r-----   1 oracle8i dba       524292096 Nov 12 13:09 ENGP_sys2.dbf
-rw-r--r--   1 oracle8i dba         4714496 Aug 30 07:37 ENGT_ctl1.ctl
-rw-r--r--   1 oracle8i dba       524292096 Dec 01 21:13 IBAPCO2.DB.dbf
-rw-r--r--   1 oracle8i dba       125833216 Aug 30 07:37 IIMP2.DB
-rw-r--r--   1 oracle8i dba      2128613376 Dec 01 21:13 PBAPCO1.DB
-rw-r--r--   1 oracle8i dba      1080037376 Dec 01 21:13 PBAPCO2.dbf
-rw-r--r--   1 oracle8i dba       471863296 Aug 30 07:38 PIMP2.DB
-rw-r-----   1 oracle8i dba        57675776 Nov 27 11:25 STATSPACK.dbf
-rw-r-----   1 oracle8i dba        20975616 Nov 27 11:14 STATSPACK_TEMP.dbf
-rw-r-----   1 oracle8i dba         5246976 Nov 27 11:35 TEST.dbf
-rw-r--r--   1 oracle8i dba        15732736 Dec 01 21:13 histor.db
-rw-r--r--   1 oracle8i dba        20975616 Dec 01 21:13 in_main.db
-rw-r--r--   1 oracle8i dba        10486272 Dec 01 21:13 logENGP10.ora
-rw-r--r--   1 oracle8i dba        10486272 Dec 02 16:00 logENGP7.ora
-rw-r--r--   1 oracle8i dba        10486272 Dec 02 05:01 logENGP8.ora
-rw-r--r--   1 oracle8i dba        10486272 Dec 01 21:13 logENGP9.ora
Code:
# fileplace -p ENGP_sys1.dbf     

File: ENGP_sys1.dbf  Size: 2008027136 bytes  Vol: /dev/engp-lv1
Blk Size: 4096  Frag Size: 4096  Nfrags: 490241 

  Physical Addresses (mirror copy 1)                                           Logical Extent
  ----------------------------------                                           ----------------
  03950240-03959519  hdisk0          9280 frags     38010880 Bytes,   1.9%    00017536-00026815
  03959552-04014399  hdisk0         54848 frags    224657408 Bytes,  11.2%    00026848-00081695
  04014432-04080128  hdisk0         65697 frags    269094912 Bytes,  13.4%    00081728-00147424
  04080160-04088350  hdisk0          8191 frags     33550336 Bytes,   1.7%    00147456-00155646
  04088352-04129280  hdisk0         40929 frags    167645184 Bytes,   8.3%    00155648-00196576
  04129312-04137502  hdisk0          8191 frags     33550336 Bytes,   1.7%    00196608-00204798
  04137504-04162048  hdisk0         24545 frags    100536320 Bytes,   5.0%    00204800-00229344
  04162080-04178462  hdisk0         16383 frags     67104768 Bytes,   3.3%    00229376-00245758
  04178464-04186624  hdisk0          8161 frags     33427456 Bytes,   1.7%    00245760-00253920
  04186656-04219422  hdisk0         32767 frags    134213632 Bytes,   6.7%    00253952-00286718
  04219424-04235776  hdisk0         16353 frags     66981888 Bytes,   3.3%    00286720-00303072
  04235808-04284958  hdisk0         49151 frags    201322496 Bytes,  10.0%    00303104-00352254
  04284960-04293120  hdisk0          8161 frags     33427456 Bytes,   1.7%    00352256-00360416
  04293152-04350494  hdisk0         57343 frags    234876928 Bytes,  11.7%    00360448-00417790
  04350496-04352383  hdisk0          1888 frags      7733248 Bytes,   0.4%    00417792-00419679
  05248608-05249887  hdisk0          1280 frags      5242880 Bytes,   0.3%    01315904-01317183
  05252448-05257567  hdisk0          5120 frags     20971520 Bytes,   1.0%    01319744-01324863
  05260128-05263967  hdisk0          3840 frags     15728640 Bytes,   0.8%    01327424-01331263
  05266528-05275487  hdisk0          8960 frags     36700160 Bytes,   1.8%    01333824-01342783
  05278048-05287007  hdisk0          8960 frags     36700160 Bytes,   1.8%    01345344-01354303
  05299808-05301087  hdisk0          1280 frags      5242880 Bytes,   0.3%    01367104-01368383
  05306208-05306239  hdisk0            32 frags       131072 Bytes,   0.0%    01373504-01373535
  05306272           hdisk0             1 frags         4096 Bytes,   0.0%    01373568
  05306592-05307487  hdisk0           896 frags      3670016 Bytes,   0.2%    01373888-01374783
  05307552-05310495  hdisk0          2944 frags     12058624 Bytes,   0.6%    01374848-01377791
  05318176-05322015  hdisk0          3840 frags     15728640 Bytes,   0.8%    01385472-01389311
  05340000-05373279  hdisk0         33280 frags    136314880 Bytes,   6.8%    01407296-01440575
  05783072-05800991  hdisk0         17920 frags     73400320 Bytes,   3.7%    01850368-01868287

Now What?

Regards,
Khalid
 
kHz, thanks for your input :)

Here is what I got:

I can see that my LV is on the outer edge! So do you think I should move it to the center?

Code:
# lspv -p hdisk0
hdisk0:
PP RANGE  STATE   REGION        LV NAME             TYPE       MOUNT POINT
  1-1     used    outer edge    hd5                 boot       N/A
  2-38    free    outer edge                                   
 39-60    used    outer edge    engp-lv2            jfs2       /ora2/oradata/ENGP
 61-90    used    outer edge    engp-lv1            jfs2       /ora1/oradata/ENGP
 91-110   used    outer edge    engt-lv1            jfs2       /ora1/oradata/ENGT
111-118   used    outer middle  hd6                 paging     N/A
119-119   used    outer middle  loglv00             jfslog     N/A
120-138   used    outer middle  edmsbkuplv          jfs2       /edms_bkup
139-145   used    outer middle  pmdbbkuplv          jfs2       /pmdb_bkup
146-215   used    outer middle  engpbkuplv          jfs2       /engp_bkup
216-219   used    outer middle  dm5test             jfs2       /usr/cimaget
220-220   used    center        hd8                 jfs2log    N/A
221-221   used    center        hd4                 jfs2       /
222-228   used    center        hd2                 jfs2       /usr
229-230   used    center        hd9var              jfs2       /var
231-232   used    center        hd3                 jfs2       /tmp
233-233   used    center        hd1                 jfs2       /home
234-235   used    center        hd10opt             jfs2       /opt
236-236   used    center        dm5test             jfs2       /usr/cimaget
237-238   used    center        edmsbkuplv          jfs2       /edms_bkup
239-244   used    center        engt-lv1            jfs2       /ora1/oradata/ENGT
245-328   free    center                                       
329-437   free    inner middle                                 
438-445   used    inner edge    hd7                 sysdump    N/A
446-546   free    inner edge

Code:
# lslv -p hdisk0 engp-lv1
hdisk0:engp-lv1:/ora1/oradata/ENGP
USED   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE       1-10
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE      11-20
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE      21-30
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   USED   USED      31-40
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED      41-50
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED      51-60
0001   0002   0003   0004   0005   0006   0007   0008   0009   0010      61-70
0011   0012   0013   0014   0015   0016   0017   0018   0019   0020      71-80
0021   0022   0023   0024   0025   0026   0027   0028   0029   0030      81-90
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED      91-100
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     101-110

USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     111-120
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     121-130
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     131-140
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     141-150
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     151-160
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     161-170
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     171-180
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     181-190
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     191-200
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     201-210
USED   USED   USED   USED   USED   USED   USED   USED   USED            211-219

USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     220-229
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     230-239
USED   USED   USED   USED   USED   FREE   FREE   FREE   FREE   FREE     240-249
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     250-259
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     260-269
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     270-279
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     280-289
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     290-299
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     300-309
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     310-319
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE            320-328

FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     329-338
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     339-348
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     349-358
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     359-368
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     369-378
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     379-388
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     389-398
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     399-408
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     409-418
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     419-428
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE            429-437

USED   USED   USED   USED   USED   USED   USED   USED   FREE   FREE     438-447
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     448-457
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     458-467
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     468-477
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     478-487
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     488-497
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     498-507
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     508-517
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     518-527
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     528-537
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE            538-546

Code:
# lslv -p hdisk0 engp-lv2
hdisk0:engp-lv2:/ora2/oradata/ENGP
USED   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE       1-10
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE      11-20
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE      21-30
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   0001   0002      31-40
0003   0004   0005   0006   0007   0008   0009   0010   0011   0012      41-50
0013   0014   0015   0016   0017   0018   0019   0020   0021   0022      51-60
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED      61-70
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED      71-80
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED      81-90
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED      91-100
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     101-110

USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     111-120
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     121-130
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     131-140
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     141-150
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     151-160
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     161-170
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     171-180
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     181-190
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     191-200
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     201-210
USED   USED   USED   USED   USED   USED   USED   USED   USED            211-219

USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     220-229
USED   USED   USED   USED   USED   USED   USED   USED   USED   USED     230-239
USED   USED   USED   USED   USED   FREE   FREE   FREE   FREE   FREE     240-249
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     250-259
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     260-269
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     270-279
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     280-289
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     290-299
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     300-309
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     310-319
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE            320-328

FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     329-338
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     339-348
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     349-358
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     359-368
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     369-378
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     379-388
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     389-398
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     399-408
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     409-418
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     419-428
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE            429-437

USED   USED   USED   USED   USED   USED   USED   USED   FREE   FREE     438-447
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     448-457
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     458-467
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     468-477
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     478-487
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     488-497
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     498-507
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     508-517
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     518-527
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE     528-537
FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE   FREE            538-546

Regards,
Khalid
 
Right then - the fileplace output for "ENGP_sys1.dbf" is showing that the file is spread over ~ 500000 non-contiguous lumps within the filesystem.

There isn't a postage stamp small enough for me to fill it with what I know about Oracle (spelling it correctly is considered a major accomplishment!), so I'm not going to pretend I know how to deal with this from the Oracle perspective.

From the point of view of AIX filesystems, however, it's quite simple. You either need a new filesystem into which you are going to move the original file, or you need a temporary storage area into which you can copy the file while you "tidy up" the original location, then copy the file back again.

The key point is this: if you choose to store different types of files (data files, log files, temporary files, control files, lock files, etc) in the same filesystem, then it is likely that well-ordered files will become fragmented over time due to the dynamic nature of the files.

In the land of IBM Tivoli Storage Manager, which is where I'm more comfortable, we control disk usage very tightly. Typically we will store database volumes on a different filesystem to log volumes, and we will fix the size and number of these volumes at build time. We will not allow the database to grow itself or its log dynamically, because we want to maintain control over the state of the files. If we find that the database or log needs more space, then we will create new volumes in the appropriate filesystem, growing the filesystem if necessary, but maintaining the contiguous nature of the files.

HTH

Kind Regards,
Matthew Bourne
"Find a job you love and never do a day's work in your life.
 
I haven't done the math, but if there are enough free partitions, and it looks like there are from a quick glance, I would move loglv00 and your engt-lv1, engp-lv1, engp-lv2, and paging LVs to the center region of the disk.

This won't solve all of your problems, but given that the center region of the disk is the fastest, the arm won't have to move back and forth across the disk as much when reading and writing.

The partitions can be moved using migratelp. When you begin the move, you will have to run it once for each partition in that specific LV. For example, if there are 4 partitions in an LV, then you will run migratelp four times for that LV. Also, if you run 'lspv -p hdisk0' after you move one partition, you will see the LV listed twice - in the old location (say outer edge) and in the new region (say center). After all four partitions have been moved, the LV will only show up in the new center region. Do a man on migratelp for the syntax, but it is straightforward.
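
As a rough ksh sketch (do check it against man migratelp first, and make sure the destination PPs are free), the 30 partitions of engp-lv1 could be walked into the free center partitions shown in the lspv output above (245-328 are free, so 245-274 would do):

Code:
lp=1
pp=245                                  # first free center PP on hdisk0
while [ $lp -le 30 ]
do
    migratelp engp-lv1/$lp hdisk0/$pp   # move one logical partition at a time into the center
    lp=$(( lp + 1 ))
    pp=$(( pp + 1 ))
done
lspv -p hdisk0                          # engp-lv1 should now show up only in the center region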
 