Deny varyonvg


TSch

Technical User
Jul 12, 2001
557
DE
Hi folks,

I'm in need of some creativity input ...

Is there any way, via a script or otherwise, to prevent any volume group other than rootvg from being varied on at system boot (even if it's set to autovaryon = yes) AND afterwards?

Regards
Thomas
 
Why do you want to do that?

Usually setting autovaryon to false is used for that!
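For example (assuming the data volume group is called datavg - the name is just a placeholder):

    chvg -a n datavg                  # turn off automatic varyon at boot
    lsvg datavg | grep "AUTO ON"      # verify - should now read "AUTO ON:  no"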

Regards,
Khalid
 
If the disks are SAN LUNs you may 'unpresent' the LUNs for that server or present them read-only...

How about unconfiguring the disk devices for all non-rootvg disks? Of course it won't stop any self-respecting admin from re-running cfgmgr and discovering the disks again in order to activate the VGs...
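Something along these lines (rough, untested sketch - "datavg" and the hdisk names are placeholders):

    varyoffvg datavg 2>/dev/null                   # deactivate first if it happens to be online
    for pv in $(lspv | awk '$3 == "datavg" {print $1}'); do
        rmdev -l "$pv"                             # puts the disk into Defined state, keeps its definition
    done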

Of course it is possible to set up a cfgmgr wrapper script that runs the real cfgmgr program and then unconfigures any devices you have listed in a file. But again, any real admin will/should be able to get around that...
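A very rough sketch of such a wrapper (the exclude file /etc/cfgmgr.exclude is an invented name, one device per line):

    #!/bin/ksh
    # run the real cfgmgr, then unconfigure any blacklisted devices
    /usr/sbin/cfgmgr "$@"
    if [ -f /etc/cfgmgr.exclude ]; then
        while read dev; do
            [ -n "$dev" ] && rmdev -l "$dev"
        done < /etc/cfgmgr.exclude
    fi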



HTH,

p5wizard
 
Hi folks,

time for the exact problem description :)

For several systems we're using SAN disks (Fibre Channel) connected to 2 VIO Servers (in case 1 VIO Server fails). So both VIO servers can see the same disks. For this scenario to work we have to set the "reserve_lock" parameter to "no". The disks themselves are made available to a production system as virtual SCSI devices. So far so good ...
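(On the VIO servers that is typically something like the following - the attribute name depends on the disk driver, and hdisk2 is just a placeholder:)

    chdev -l hdisk2 -a reserve_lock=no              # older, non-MPIO disk driver
    chdev -l hdisk2 -a reserve_policy=no_reserve    # MPIO-capable driver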

The problem occurs as soon as we set up another pair of VIO Servers who can see the same disks and make those disks available (again as virtual scsi devices) to the BACKUP system of the above mentioned Production System.

At this point the Volume Group the disks belong to can be set to varyon on the Production as well as on the Backup site AT THE SAME TIME.

What we need now is to make sure that this won't happen accidentally at system boot (e.g. because some settings were made in the background during configuration without the admin noticing) or while administering the Backup System while the VG is active on the Production system.

Yeah, I know that HACMP would be the choice here ;-) but in the past it has proven to be extremely complicated and unstable, so we built up the configuration mentioned above, which in fact was working perfectly as long as we connected the FC disks directly to the Prod. and Backup Systems WITHOUT the VIO Servers in between ...

Regards
Thomas
 
Why not set autovaryon to false for the VIO clients' VGs?
 
That would solve the problem at system boot ...

But whenever you enter "varyonvg datavg" it will work.

And another thing: if we have to e.g. build up a new backup system, the setting will be back to "yes", and if we simply forget to set it back to "no", the VG will be varied on as soon as we reboot the system the next time ...
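(Maybe a small safety-net script run early from /etc/inittab could catch that kind of slip - purely a sketch, the path is invented, and varyoffvg will only succeed if none of the VG's filesystems have been mounted yet:)

    #!/bin/ksh
    # /usr/local/sbin/vgguard - force any non-rootvg VG back off at boot
    for vg in $(lsvg -o); do                 # lsvg -o lists the active VGs
        [ "$vg" = "rootvg" ] && continue
        chvg -a n "$vg"                      # make sure auto-varyon is off again
        varyoffvg "$vg"                      # and deactivate it
    done

registered once with something like: mkitab "vgguard:2:once:/usr/local/sbin/vgguard >/dev/console 2>&1"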
 
By the word "forget" I understand that this is a human mistake, right? So all you need is a proper procedure for managing changes! In addition, if one forgets or makes a mistake on one VIO client, what's the possibility this would happen to the other VIO clients? I believe it's remote!

Regards,
Khalid
 
Here's what I would do:

On the second (backup) server, set up separate virtual SCSI channels for the backup systems' rootvg and datavgs.
In all, a VIO client will have 4 vSCSI channels, 2 to each VIO server: per VIO server, one channel for the rootvg virtualized disks and one channel for the datavg virtualized disks. (It's my preferred setup anyway.)

Start the backup LPARs without the vSCSI client adapters for the datavg disks present in their default profile. Don't delete the vtscsi devices on the VIO servers - they just dead-end on their vhost adapter because that vhost is not yet coupled to its designated vscsi adapter on the VIO client.

The LPAR will come up, won't find the paths to the datavg disks, will leave those disks in the "Defined" state, and the VGs will not vary on.

When you need a backup LPAR to take over, DLPAR-add the vSCSI adapters and run cfgmgr on the backup LPAR (or restart it with a "takeover" profile with the vSCSI adapters present). The paths to the datavg disks will be discovered, the disks will be available and the datavgs will vary on.
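(Roughly, on the backup LPAR after the DLPAR add - datavg and /data are placeholders:)

    cfgmgr                     # discover the newly added vSCSI adapters and their disks
    lspv                       # the datavg disks should now show as Available
    varyonvg datavg            # activate the volume group
    mount /data                # mount its filesystems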



HTH,

p5wizard
 
While valid, your method seems to be overkill (hardware- and setup-time-wise).

Have you thought about using a snapshot and a NIM server instead?




Mike

"Whenever I dwell for any length of time on my own shortcomings, they gradually begin to seem mild, harmless, rather engaging little things, not at all like the staring defects in other people's characters."
 
It's not that big an overkill to have standby LPARs with their own virtualized SAN boot device just idling at a remote location on a server that runs other LPARs.

At takeover time you only have to blow up those LPARs' CPU and memory entitlements, possibly victimizing other LPARs (or by using on/off capacity on demand) and activating volume groups to fire up the apps...

In fact you can even have those standby LPARs turned off, without using any CPU cycles or memory. Then you just have to activate them at takeover time.


HTH,

p5wizard
 
