
RS6000/F40 Six Pack PN

Status
Not open for further replies.

wonky1 (MIS), Oct 12, 2011
Inherited an RS6000/F40 running a corporate-wide manufacturing app on AIX 4.3.3. I've been tasked with adding disks to the system. It has two six packs that are full, and I need to add another. Does anyone know the PNs or FRUs for the bits I need (cage, backplane, cables)? I can find all kinds of option numbers, but not the PNs. BTW: I'm adding drives because mgmt won't let me swap the installed drives for larger ones. Any help would be appreciated.
 
This old stuff is all in books, here:

Check the "RS/6000 7025 F40 Series" link for the system, part (FRU) numbers are in the back of the service guide.

Also check out the contents links on the left for installable options and general service options.

Shout if you get stuck.

Duke.
 
Duke,

Appreciate the pointer. I didn't have the Service Guide.

Do you know if I'd need to buy a separate SCSI card for the new 6pack? Or, even better, if I could use an external SCSI box (we have a spare Dell 210S array) without cracking into the insides of this box?

Franz
 
I don't think you can have more than two six packs in an F40.
Have you considered other options, such as a couple of SSA cards and an external tower? Really cheap to pick up on the second-user market these days.
 
haexpert,

I'm moving in that direction. If I can convince my boss I'll buy the bits I need for an external and then do a 'refresh' on the internal drives. Don't know how long this box is supposed to run.

Thanks for the suggestion.

franz
 
The service guide clearly shows 3 six-pack bays, but the system board only shows 1 internal and 1 external SCSI connector, so there may be only one SCSI controller unless other PCI adapters are already installed.

lsdev -Cs scsi
will show the scsi stuff installed.

External may be easier and cheaper because original parts are likely to be expensive or no longer available.

The PCI adapter placement guide will tell you which cards are supported and which slot to put them in if you need to add another SCSI or SSA adapter:
 
DukeSSD,

There are two other SCSI cards in the box that have external connectors which was why I thought of trying to use the spare Dell 210S. I'm not in the office today and don't have remote access, but I'll get an equipment listing on Monday. Do you know if it is even possible to use the Dell drive box on the IBM?

Odd thing I discovered last night was that even though both six packs are fully loaded, only 7 drives are actually in use -- and there is NO mirroring. I had assumed that all the drives were in use as they were running, but the system only knows about 7. It looks like somebody was 'storing' the drives or ... who knows. This puts another wrinkle in things. Now I've got to figure out what the distribution is on the drives and how to get this mess straightened out.


franz
 
To connect the external storage you'll need to check if either of the external cards are the same flavour of scsi as the Dell - and find a suitable cable.

Pretty much any scsi drive should work OK with AIX if you can find a suitable connection or adapter to connect them to.

Could you have a raid adapter?
If so then it may be presenting some drives as an array and AIX will only see one hdisk for each array.
 
I was going to have access to the machine over the weekend, but the Northeast 'snow event' and the loss of power to all our systems made life a bit miserable yesterday.

This AM I pulled the 5 drives that weren't doing anything out of the machine and it doesn't look like the 7 drives are part of a raid. This is the output of lspv:

ibm:/pfs0/PSC.0# lspv
hdisk0 0050526523ee2d57 rootvg
hdisk1 005052659dfdbc98 pvg2
hdisk2 005052659dff49c7 pvg3
hdisk3 005052659e0143d4 pvg4
hdisk4 005052659e0265df pvg5
hdisk5 005052659e043c75 pvg6
hdisk6 005052659e06314a pvg7

(there is also a pvg0 and pvg1 on hdisk0)

and lscfg lists two SCSI cards controlling the hdisks above:

+ scsi1 04-B0 Wide SCSI I/O Controller
+ hdisk0 04-B0-00-8,0 16 Bit LVD SCSI Disk Drive (4500 MB)
+ hdisk1 04-B0-00-9,0 16 Bit LVD SCSI Disk Drive (4500 MB)
+ hdisk2 04-B0-00-10,0 16 Bit LVD SCSI Disk Drive (4500 MB)
+ hdisk3 04-B0-00-11,0 16 Bit LVD SCSI Disk Drive (4500 MB)

and + scsi5 04-02 Wide/Fast-20 SCSI I/O Controller
+ hdisk4 04-02-00-8,0 16 Bit LVD SCSI Disk Drive (4500 MB)
+ hdisk5 04-02-00-9,0 16 Bit LVD SCSI Disk Drive (4500 MB)
+ hdisk6 04-02-00-10,0 16 Bit LVD SCSI Disk Drive (4500 MB)

An lspv gives strange (to me) results:

ibm:/pfs0/PSC.0# lspv hdisk0
PHYSICAL VOLUME: hdisk0 VOLUME GROUP: rootvg
PV IDENTIFIER: 0050526523ee2d57 VG IDENTIFIER 0050526523ee3629
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 8 megabyte(s) LOGICAL VOLUMES: 13
TOTAL PPs: 537 (4296 megabytes) VG DESCRIPTORS: 2
FREE PPs: 159 (1272 megabytes)
USED PPs: 378 (3024 megabytes)
FREE DISTRIBUTION: 15..00..00..36..108
USED DISTRIBUTION: 93..107..107..71..00

+++++++++++++++++++++++++++++++++++++++

ibm:/pfs0/PSC.0# lspv hdisk1
PHYSICAL VOLUME: hdisk1 VOLUME GROUP: pvg2
PV IDENTIFIER: 005052659dfdbc98 VG IDENTIFIER 005052659dfdc1cf
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 8 megabyte(s) LOGICAL VOLUMES: 2
TOTAL PPs: 537 (4296 megabytes) VG DESCRIPTORS: 2
FREE PPs: 490 (3920 megabytes)
USED PPs: 47 (376 megabytes)
FREE DISTRIBUTION: 108..60..107..107..108
USED DISTRIBUTION: 00..47..00..00..00

.
.
.

The rest of the hdisks give pretty much the same reading as hdisk1. With the exception of hdisk0, it looks like we're only using approx 300MB of each drive?????
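As a sanity check on that estimate, the PP figures in the lspv output convert directly (a quick arithmetic sketch using hdisk1's numbers from above):

```shell
# PP arithmetic from the lspv output above: PP SIZE is 8 MB.
pp_mb=8
total_pps=537        # TOTAL PPs on one 4.5 GB drive
used_pps=47          # USED PPs on hdisk1
echo "capacity: $((total_pps * pp_mb)) MB"   # 4296 MB, matching lspv's TOTAL
echo "in use:   $((used_pps * pp_mb)) MB"    # 376 MB actually allocated
```

So "approx 300MB" is close: each data disk carries just under 400 MB.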


The more I look at this the stranger the setup looks (at least to me). I'm learning AIX and RS6000 as I go along and some of this is not making any sense. I'm still being told to add drives, but from what I see, the only reason to add them would be for mirroring. Am I reading this right?

franz
 
Yup, looks like you're reading it right so far.

AIX LVM, unlike some other operating systems, will not assign the whole disk to logical volumes or even file systems unless you tell it to, which gives you control over how the space is allocated.
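One consequence: free PPs can be handed to the existing filesystems without adding hardware at all. A minimal sketch, where /pfs2 and the 100-PP growth figure are purely illustrative (chfs on AIX takes size deltas in 512-byte blocks, hence the conversion; the command is printed rather than run):

```shell
# Hypothetical example: grow /pfs2 by 100 free PPs of 8 MB each.
add_pps=100
pp_mb=8
blocks=$((add_pps * pp_mb * 2048))     # 1 MB = 2048 512-byte blocks
echo "chfs -a size=+$blocks /pfs2"     # command to run on the F40 itself
```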

Seems like the last couple of disks in one bay and the last three drives in the other bay - check which bay as I cannot tell from the location codes - are simply not used.

I'd expect the other disks to show up with hdisk numbers though, even if they are not part of a volume group, did you omit them from the lscfg output?

I'd immediately mirror at least rootvg onto a disk in the other bay / on the other adapter, and then consider your other options.

You can only have a physical disk assigned to one volume group at a time, so I guess your "(there is also a pvg0 and pvg1 on hdisk0)" pvg0 and pvg1 are logical volumes in rootvg rather than separate volume groups, maybe a hangover from an earlier incarnation.
 
Correct: the physical layout is 4 HD in one bay and 3 in the other. The other drives I removed when I found that they weren't part of a RAID or mirror array. The lspv was showing "hdiskNN blank blank" for the 5 'spare' drives.

As far as the pvg0 and pvg1: sorry, my confusion (physical/logical is still getting a bit muddled in my head); I should have said plv0 and plv1. They appear to be logical volumes that are used by portions of the UniVerse app. Based on what lsvg -il shows, the app appears to be spread over all 7 disks, as the UV files point to things like /pfs0, /pfs7, etc. The lsvg is:

ibm:/pfs0/PSC.0# lsvg | lsvg -il

rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 1 1 closed/syncd N/A
hd6 paging 128 128 1 open/syncd N/A
hd8 jfslog 1 1 1 open/syncd N/A
hd4 jfs 47 47 1 open/syncd /
hd2 jfs 62 62 1 open/syncd /usr
hd9var jfs 17 17 1 open/syncd /var
hd3 jfs 11 11 1 open/syncd /tmp
hd1 jfs 1 1 1 open/syncd /home
lv00 jfs 6 6 1 open/syncd /usr/welcome_arcade
lv01 jfs 3 3 1 open/syncd /usr/welcome
plv0 jfs 62 62 1 open/syncd /pfs0
plv1 jfs 10 10 1 open/syncd /pfs1
u1lv jfs 29 29 1 open/syncd /u1

pvg2:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
plv2 jfs 46 46 1 open/syncd /pfs2
loglv00 jfslog 1 1 1 open/syncd N/A

pvg3:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
plv3 jfs 46 46 1 open/syncd /pfs3
loglv01 jfslog 1 1 1 open/syncd N/A

pvg4:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
plv4 jfs 46 46 1 open/syncd /pfs4
loglv02 jfslog 1 1 1 open/syncd N/A

pvg5:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
plv5 jfs 46 46 1 open/syncd /pfs5
loglv03 jfslog 1 1 1 open/syncd N/A

pvg6:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
plv6 jfs 77 77 1 open/syncd /pfs6
loglv04 jfslog 1 1 1 open/syncd N/A

pvg7:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
plv7 jfs 29 29 1 open/syncd /pfs7
loglv05 jfslog 1 1 1 open/syncd N/A

My understanding of AIX mirroring is that I install the new hdisk, run cfgmgr to let the OS recognize it and assign an hdisk number, and then run extendvg rootvg hdiskN. Do I have to create a file system first, or will the OS handle that in the context of the extendvg command? And can I do this on a 'live' system, or do I need to kick the users off?

My reading seems to indicate that the version of AIX we have (4.3.3) can turn off 'quorum'. I don't have another AIX machine to test any of this against, and I really don't want to screw this up.

As to my options: Now I'm not certain that I need to add an external tower at all. Based on what you confirmed, most of the currently available disk space is unused and I could simply allocate more space to the individual logical volumes. I'll have to look at the actual disk usage of the individual hdisks and maybe I can 'condense' the 7 disks down to 6 and then mirror all six within the two bays that I have. Does that make any sense? The machine was rebooted this weekend because of the power failures, but the iostat since Sunday is:

ibm:/pfs0/PSC.0# iostat -d

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 2.6 20.0 3.5 822337 1166929
hdisk1 0.3 5.0 0.6 450059 47740
hdisk2 0.1 2.1 0.2 186291 20056
hdisk3 0.6 8.3 1.0 604355 227664
hdisk4 0.2 3.2 0.4 282707 37580
hdisk5 0.0 1.7 0.1 157691 9808
hdisk6 0.1 1.6 0.1 118651 37284
cd0 0.0 0.0 0.0 0 0

Drives 5 and 6 really aren't getting that much exercise as far as I can tell.


 
After the extendvg to add the new disk to the volume group, you just need to run mirrorvg... and AIX will create the LVs and filesystems for you, then copy all the data across and sync them up.

It is normal to turn quorum off in mirrored volume groups otherwise a disk failure can take the volume group offline.
As long as strictness is enforced, meaning each mirror copy must reside on a different physical volume (yes, you can have two mirror copies on the same physical volume if strictness is not enforced), you should disable quorum for this reason.
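The quorum change itself is a one-liner; a sketch, with the AIX-only commands printed rather than executed:

```shell
# Disable quorum on a mirrored VG (takes effect at the next varyonvg).
vg=rootvg
echo "chvg -Qn $vg"    # turn quorum off for the volume group
echo "lsvg $vg"        # the QUORUM field confirms the change
echo "lslv -m hd4"     # per-LV map: each mirror copy should sit on a different hdisk
```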

The disks all look fairly quiet, and if you can squeeze it all onto six disks then mirroring across the two SCSI adapters and two six packs does look like the way to go.
 
I think I've got this clear now-

Mirror the rootvg:
1. Install hdiskN in the sixpack that doesn't contain rootvg
2. Run cfgmgr to get system to recognize hdiskN
3. extendvg rootvg hdiskN
4. mirrorvg -S rootvg hdiskN
5. bosboot -ad /dev/hdisk0
6. bootlist -m normal hdisk0 hdisk1 cd0

Questions:
Can all this be done with users on the system or is the mirrorvg going to slow things down significantly?
Should I do bootlist on both normal and service mode?

If this sequence is correct I'll try it tonight and see if any errors kick out. BTW: thanks for your patience - and the education
 
Looks good to me.

I don't bother with anything but normal mode. If it gets to that stage you'll be at the console, probably with an AIX install CD in your hand in case you need to do a maintenance or debug boot, and you can always use SMS to boot from either disk.

There will obviously be some performance hit as the system copies the whole of hdisk0 to hdisk1 while also keeping up with any changes going on on hdisk0, so try to do this when most people are off the system and outside of any busy period like batch processing or backups.

Talking of backups... take a couple of mksysbs before you start, ideally onto bootable media like an internal or at least native AIX tape drive, just in case it all goes wrong and you need to put it back together in a hurry.

Want tape help for the RS6000:

Couple of backups? - tape and tape drives are not the most reliable media, so two is safer than one. It wouldn't hurt to clean the drive with a new cleaning tape and use new tapes for the backups too.

All goes wrong? - even if you reboot frequently, and so fsck runs, it is not a complete guarantee that all of the data in rootvg is good and readable, so you may run into problems when doing the mirror if AIX/LVM reads a bad part of a disk or a bad filesystem that has so far gone undetected because it is rarely read and never checked.

Be safe, be backed up - twice.
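The two-tape pass suggested above might look like this sketch (/dev/rmt0 is an assumed tape device name; commands are printed, not run):

```shell
# Two mksysb passes onto fresh tapes, per the advice above.
for tape in 1 2; do
  echo "load fresh tape $tape, then run:"
  echo "  mksysb -i -X /dev/rmt0"   # -i rebuilds image.data; -X expands /tmp if needed
done
```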
 
Thanks! I know what I'll be doing this weekend. I'll give all this a shot and we'll see what happens.
 
Well, that didn't work out as planned. Mksysb kept failing.

We use cpio and native AIX backup for the daily tapes and I figured mksysb would probably have no great issues. Shows what I know. Mksysb -X -i kept throwing errors that eventually led to the conclusion that it didn't have enough room in /tmp to complete. Got rid of a bunch of useless files in /tmp and reduced used space to:

ibm:/pfs0/PSC.0# df /dev/hd3
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd3 180224 120312 34% 169 1% /tmp
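(For scale, a quick conversion of those 512-byte block figures:)

```shell
# df on AIX reports 512-byte blocks: 2048 blocks = 1 MB.
total=180224
free=120312
echo "/tmp total: $((total / 2048)) MB"   # 88 MB
echo "/tmp free:  $((free / 2048)) MB"    # 58 MB free after the cleanup
```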

That took most of the weekend as I had to check what created each file and whether or not it could be treated as a 'throw-away'. Found some that weren't and ended up redirecting them elsewhere. After trying again I still got a failure and gave up.

Is there a way to determine how much room mksysb will need in /tmp?

franz
 
Thanks! I'll give that a try tonight. I'm still seeing files created in /tmp that don't look like they belong there, but the number is significantly smaller than it was before.
 
That took longer than expected. We now have a mirrored rootvg, and I'll now try to consolidate the file systems on hdisk5 and hdisk6 in order to get the count down to 6 drives so I can mirror within the two six packs we have. Many thanks for your help.

One further Q and I think I'll be set to finish this over Thanksgiving: since you can only have one VG per physical drive, can I move the stuff currently in pvg7 (plv7 and loglv05) from hdisk6 to hdisk5 using cplv? The way I read the docs, it sounds like this will do the trick.

 
Yep, you can specify another VG with the copy, so as long as there is space this looks like the way to go.
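A hedged sketch of that move (plv7new is a made-up LV name; loglv04 is pvg6's existing jfslog per the earlier lsvg listing; the AIX commands are printed, not run):

```shell
# Move /pfs7 from pvg7 (hdisk6) into pvg6 (hdisk5) via cplv.
lv_new=plv7new
echo "umount /pfs7"
echo "cplv -v pvg6 -y $lv_new plv7"   # block-for-block copy into the other VG
echo "edit /etc/filesystems: set /pfs7's dev= to /dev/$lv_new and log= to /dev/loglv04"
echo "fsck -p /dev/$lv_new"           # verify the copy before mounting
echo "mount /pfs7"
echo "rmlv plv7"                      # then retire pvg7 (exportvg) at leisure
```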
 
