Hi folks,
I'm seeing the following effect: two SCSI disks of the same type, hdisk0 and hdisk1, mirror each other at the LV level, yet the Performance Diagnostic Facility 1.0 output reports:
...
- Phys. vol. hdisk0 is significantly busier than others
volume cd0, mean util. = 0.00 %
volume hdisk0, mean util. = 4.26 %
volume hdisk1, mean util. = 2.85 %
volume hdisk2, mean util. = 0.31 %
volume hdisk3, mean util. = 0.70 %
volume hdisk4, mean util. = 0.00 %
[based on 30 measurements, each consisting of 20 2-second samples]
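In case it matters, the mirroring itself can be double-checked per LV with lslv (hd2 taken as an example from the listing below; field names vary a bit between AIX levels), roughly like this:

# number of copies and the mirror read scheduling policy
lslv hd2          # look at the COPIES and SCHED POLICY fields
# LP-to-PP map with one column per mirror copy
lslv -m hd2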
Yes, this is based on only a few samples, but there should be no difference at all, since no other LVs are in use:
root@fnet3:/EBRbackup> lsvg -l rootvg
rootvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
hd5                 boot       1     2     2    closed/syncd  N/A
hd6                 paging     64    128   2    open/syncd    N/A
hd8                 jfslog     1     2     2    open/syncd    N/A
hd4                 jfs        1     2     2    open/syncd    /
hd2                 jfs        128   256   2    open/syncd    /usr
hd9var              jfs        16    32    2    open/syncd    /var
hd3                 jfs        32    64    2    open/syncd    /tmp
hd1                 jfs        4     8     2    open/syncd    /home
root@fnet3:/EBRbackup> lsvg rootvg
VOLUME GROUP:   rootvg                   VG IDENTIFIER:  0053d10a60228116
VG STATE:       active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:  read/write               TOTAL PPs:      1084 (17344 megabytes)
MAX LVs:        256                      FREE PPs:       590 (9440 megabytes)
LVs:            8                        USED PPs:       494 (7904 megabytes)
OPEN LVs:       7                        QUORUM:         1
TOTAL PVs:      2                        VG DESCRIPTORS: 3
STALE PVs:      0                        STALE PPs:      0
ACTIVE PVs:     2                        AUTO ON:        yes
MAX PPs per PV: 1016                     MAX PVs:        32
root@fnet3:/EBRbackup> lspv hdisk0
PHYSICAL VOLUME:    hdisk0                            VOLUME GROUP:     rootvg
PV IDENTIFIER:      0053d10a60227e750000000000000000  VG IDENTIFIER     0053d10a60228116
PV STATE:           active
STALE PARTITIONS:   0                                 ALLOCATABLE:      yes
PP SIZE:            16 megabyte(s)                    LOGICAL VOLUMES:  8
TOTAL PPs:          542 (8672 megabytes)              VG DESCRIPTORS:   2
FREE PPs:           295 (4720 megabytes)
USED PPs:           247 (3952 megabytes)
FREE DISTRIBUTION:  108..00..00..78..109
USED DISTRIBUTION:  01..108..108..30..00
root@fnet3:/EBRbackup> lspv hdisk1
PHYSICAL VOLUME:    hdisk1                            VOLUME GROUP:     rootvg
PV IDENTIFIER:      0053d10a652bea900000000000000000  VG IDENTIFIER     0053d10a60228116
PV STATE:           active
STALE PARTITIONS:   0                                 ALLOCATABLE:      yes
PP SIZE:            16 megabyte(s)                    LOGICAL VOLUMES:  8
TOTAL PPs:          542 (8672 megabytes)              VG DESCRIPTORS:   1
FREE PPs:           295 (4720 megabytes)
USED PPs:           247 (3952 megabytes)
FREE DISTRIBUTION:  108..00..00..78..109
USED DISTRIBUTION:  01..108..108..30..00
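A simple way to narrow down where the extra traffic on hdisk0 comes from would be to watch both disks side by side for a while; the iostat disk report splits per-disk traffic into read and write KB (exact column names vary a little between AIX levels):

# sample only the two mirrored disks, every 2 seconds, 30 times;
# compare the read vs. write columns for hdisk0 and hdisk1
iostat -d hdisk0 hdisk1 2 30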
Yes, the absolute difference is not much, but relative to each other it is big: hdisk1 runs at only about 67 % of hdisk0's utilization (2.85 % vs. 4.26 %). Any idea? Even the LP-to-PP mapping is identical on both disks (a quick way to compare the full maps is sketched after these listings):
root@fnet3:/fnsw/local/logs/perf> lspv -M hdisk0|tail
hdisk0:347 hd2:120:1
hdisk0:348 hd2:121:1
hdisk0:349 hd2:122:1
hdisk0:350 hd2:123:1
hdisk0:351 hd2:124:1
hdisk0:352 hd2:125:1
hdisk0:353 hd2:126:1
hdisk0:354 hd2:127:1
hdisk0:355 hd2:128:1
hdisk0:356-542
root@fnet3:/fnsw/local/logs/perf> lspv -M hdisk1|tail
hdisk1:347 hd2:120:2
hdisk1:348 hd2:121:2
hdisk1:349 hd2:122:2
hdisk1:350 hd2:123:2
hdisk1:351 hd2:124:2
hdisk1:352 hd2:125:2
hdisk1:353 hd2:126:2
hdisk1:354 hd2:127:2
hdisk1:355 hd2:128:2
hdisk1:356-542
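For the record, this is the kind of one-off check I mean by "the mapping is identical": strip the PV name and the mirror-copy number from the lspv -M output and diff the two maps (plain ksh; the /tmp file names are just for illustration):

# normalize both maps: drop the PV name and the trailing copy number (1 or 2),
# then diff them; no output means the LP-to-PP layout is the same on both disks
lspv -M hdisk0 | sed -e 's/^hdisk0/PV/' -e 's/:[12]$//' > /tmp/map.hdisk0
lspv -M hdisk1 | sed -e 's/^hdisk1/PV/' -e 's/:[12]$//' > /tmp/map.hdisk1
diff /tmp/map.hdisk0 /tmp/map.hdisk1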