How can one mirror member be more utilized than the other?


fd (MIS) - Apr 24, 2001

Hi folks,

I am seeing the following effect: two SCSI disks of the same type, hdisk0 and hdisk1, mirror each other at the LV level, and the Performance Diagnostic Facility 1.0 output shows the following:
...
- Phys. vol. hdisk0 is significantly busier than others
volume cd0, mean util. = 0.00 %
volume hdisk0, mean util. = 4.26 %
volume hdisk1, mean util. = 2.85 %
volume hdisk2, mean util. = 0.31 %
volume hdisk3, mean util. = 0.70 %
volume hdisk4, mean util. = 0.00 %
[based on 30 measurements, each consisting of 20 2-second samples]
Yes, this is based on only a few samples, but there should be no difference at all, since no other LVs are in use:

root@fnet3:/EBRbackup> lsvg -l rootvg
rootvg:
LV NAME             TYPE      LPs   PPs   PVs  LV STATE       MOUNT POINT
hd5                 boot      1     2     2    closed/syncd   N/A
hd6                 paging    64    128   2    open/syncd     N/A
hd8                 jfslog    1     2     2    open/syncd     N/A
hd4                 jfs       1     2     2    open/syncd     /
hd2                 jfs       128   256   2    open/syncd     /usr
hd9var              jfs       16    32    2    open/syncd     /var
hd3                 jfs       32    64    2    open/syncd     /tmp
hd1                 jfs       4     8     2    open/syncd     /home
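
Note the LPs/PPs columns above: every LV shows PPs = 2 x LPs, i.e. two physical copies of each logical partition. One way to see the copy map for a single LV is lslv; a rough sketch, assuming standard AIX lslv (the exact column layout can vary by AIX level):

lslv hd2              # shows COPIES: 2 and the scheduling policy, among other attributes
lslv -m hd2 | head    # LP  PP1/PV1  PP2/PV2 - one physical-partition column pair per copy

If both PV columns are filled for every LP, that logical partition has a copy on each of hdisk0 and hdisk1.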

root@fnet3:/EBRbackup> lsvg rootvg
VOLUME GROUP:   rootvg                   VG IDENTIFIER:  0053d10a60228116
VG STATE:       active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:  read/write               TOTAL PPs:      1084 (17344 megabytes)
MAX LVs:        256                      FREE PPs:       590 (9440 megabytes)
LVs:            8                        USED PPs:       494 (7904 megabytes)
OPEN LVs:       7                        QUORUM:         1
TOTAL PVs:      2                        VG DESCRIPTORS: 3
STALE PVs:      0                        STALE PPs:      0
ACTIVE PVs:     2                        AUTO ON:        yes
MAX PPs per PV: 1016                     MAX PVs:        32
root@fnet3:/EBRbackup> lspv hdisk0
PHYSICAL VOLUME:   hdisk0                    VOLUME GROUP:     rootvg
PV IDENTIFIER:     0053d10a60227e750000000000000000  VG IDENTIFIER  0053d10a60228116
PV STATE:          active
STALE PARTITIONS:  0                         ALLOCATABLE:      yes
PP SIZE:           16 megabyte(s)            LOGICAL VOLUMES:  8
TOTAL PPs:         542 (8672 megabytes)      VG DESCRIPTORS:   2
FREE PPs:          295 (4720 megabytes)
USED PPs:          247 (3952 megabytes)
FREE DISTRIBUTION: 108..00..00..78..109
USED DISTRIBUTION: 01..108..108..30..00
root@fnet3:/EBRbackup> lspv hdisk1
PHYSICAL VOLUME:   hdisk1                    VOLUME GROUP:     rootvg
PV IDENTIFIER:     0053d10a652bea900000000000000000  VG IDENTIFIER  0053d10a60228116
PV STATE:          active
STALE PARTITIONS:  0                         ALLOCATABLE:      yes
PP SIZE:           16 megabyte(s)            LOGICAL VOLUMES:  8
TOTAL PPs:         542 (8672 megabytes)      VG DESCRIPTORS:   1
FREE PPs:          295 (4720 megabytes)
USED PPs:          247 (3952 megabytes)
FREE DISTRIBUTION: 108..00..00..78..109
USED DISTRIBUTION: 01..108..108..30..00

Yes, the absolute difference is small, but relative to each other it is large: hdisk1's mean utilization is only about 67 % of hdisk0's (2.85 / 4.26)! Any idea? Even the LP-to-PP mapping is identical:
root@fnet3:/fnsw/local/logs/perf> lspv -M hdisk0|tail
hdisk0:347 hd2:120:1
hdisk0:348 hd2:121:1
hdisk0:349 hd2:122:1
hdisk0:350 hd2:123:1
hdisk0:351 hd2:124:1
hdisk0:352 hd2:125:1
hdisk0:353 hd2:126:1
hdisk0:354 hd2:127:1
hdisk0:355 hd2:128:1
hdisk0:356-542
root@fnet3:/fnsw/local/logs/perf> lspv -M hdisk1|tail
hdisk1:347 hd2:120:2
hdisk1:348 hd2:121:2
hdisk1:349 hd2:122:2
hdisk1:350 hd2:123:2
hdisk1:351 hd2:124:2
hdisk1:352 hd2:125:2
hdisk1:353 hd2:126:2
hdisk1:354 hd2:127:2
hdisk1:355 hd2:128:2
hdisk1:356-542
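
One way to check that the two copy maps really are identical apart from the copy number is to strip the disk name and trailing copy index and diff what is left. A rough sketch, assuming the lspv -M output format shown above (hdiskN:PP LV:LP:copy):

lspv -M hdisk0 | sed -e 's/^hdisk0://' -e 's/:1$//' > /tmp/map0
lspv -M hdisk1 | sed -e 's/^hdisk1://' -e 's/:2$//' > /tmp/map1
diff /tmp/map0 /tmp/map1    # no output means the PP-to-LP layout matches exactly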

kingjx:

If you run iostat and compare the Kb_read figures for hdisk0 and hdisk1 you will see a difference. When a read request is made, it is made to both disks, and the one that responds first gets the 'hit' or 'count'.

This will generally be hdisk0, which may have something to do with its position in the SCSI chain.

If you look at the Kb_wrtn values they will both be very close, but the reads can, and will, be different. This leads to different busy/utilization figures being calculated.

Hope this helps.
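
As an aside, which copy of a mirrored LV services reads is governed by the LV's mirror scheduling policy (parallel vs. sequential). A hedged sketch of how to inspect and change it, assuming the standard AIX lslv/chlv commands (the available policy values depend on the AIX level):

lslv hd2 | grep -i 'SCHED POLICY'   # show the current mirror scheduling policy
chlv -d p hd2                       # p = parallel
chlv -d s hd2                       # s = sequential (reads are satisfied from the primary copy)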


 
Hi kingjx,

Thanks a lot - very interesting. The iostat tip is good and logical; of course, only the written data has to be kept identical on both copies. So in that configuration, if the disks had different cache sizes, you should always put the faster one at a lower SCSI address than its mirror disk to ensure higher read speed...
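
To see where each disk actually sits on the SCSI chain, the location codes reported by the device commands include the adapter slot and SCSI ID. A minimal sketch, assuming the usual AIX device commands:

lsdev -Cc disk        # lists hdisks with their location codes (adapter-slot-connector-SCSI ID,LUN)
lsattr -El hdisk0     # per-disk attributes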

iostat

tty:          tin      tout    avg-cpu:  % user   % sys   % idle   % iowait
              0.0      11.0               0.9      0.9     94.9     3.4

Disks:        % tm_act     Kbps     tps     Kb_read    Kb_wrtn
hdisk1           0.0        0.3     0.1       50803    1631425
hdisk0           0.0        0.4     0.1      163556    1631425
hdisk4           0.0        3.0     0.1    10178441    4872730
hdisk3           0.3       10.5     2.5    51983786     687069
hdisk2           0.0        1.4     0.2     3311257    3555228
cd0              0.0        0.0     0.0           0          0
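
The numbers above bear that out: Kb_wrtn is identical on both mirror members (1631425), while hdisk0 has read roughly three times as much as hdisk1 (163556 vs. 50803). A rough sketch of computing the read ratio directly from iostat, assuming the column layout shown above:

iostat -d hdisk0 hdisk1 | awk '/^hdisk/ { kbr[$1] = $5 }
    END { printf "Kb_read ratio hdisk0/hdisk1 = %.2f\n", kbr["hdisk0"]/kbr["hdisk1"] }'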

 