
How to Add/Move an existing file system to a resource group


James008 (Technical User) - Dec 10, 2010 - US
I need help adding/moving ordinary file systems into two different resource groups in a two-node cluster.

There are two nodes, clusdb1 and clusdb2, with resource groups consdb1 and consdb2, respectively.

I have to move the file system /archive, which is on node clusdb1, into the consdb1 resource group.

There is another file system, /archive2, on node clusdb2, which I need to move into the consdb2 resource group.

Both /archive and /archive2 are ordinary (non-shared) file systems on their respective nodes.

What I mean to ask is: after I create a new shared volume group and shared logical volume, can I add these existing file systems to the shared volume group and then to the existing resource groups consdb1 and consdb2?

Is there a trick to doing this?

Please advise. Thanks.

Here are the details of the file systems (/archive and /archive2):
Clusdb1 Node

# lsfs | grep archive
/dev/archlv -- /archive jfs 115343360 rw yes no

/etc/filesystems info

/archive:
        dev       = /dev/archlv
        vfs       = jfs
        log       = /dev/loglv02
        mount     = true
        check     = false
        options   = rw
        account   = false

# lsvg
rootvg
apexbinvg
forumbinvg
frsbinvg
apexvg
ercmbinvg
ercmvg
archivevg
oraclevg
busvg
tmsvg
frsvg
forumvg
web2vg
apxappsvg

# lsvg archivevg
VOLUME GROUP: archivevg VG IDENTIFIER: 000cebfe0000d6000000010ff3180317
VG STATE: active PP SIZE: 64 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 1816 (116224 megabytes)
MAX LVs: 256 FREE PPs: 935 (59840 megabytes)
LVs: 2 USED PPs: 881 (56384 megabytes)
OPEN LVs: 2 QUORUM: 2 (Enabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 2032 MAX PVs: 16
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable

# lsvg -l archivevg
archivevg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
archlv jfs 880 880 1 open/syncd /archive
loglv02 jfslog 1 1 1 open/syncd N/A

# lslv archlv
LOGICAL VOLUME: archlv VOLUME GROUP: archivevg
LV IDENTIFIER: 000cebfe0000d6000000010ff3180317.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 4096 PP SIZE: 64 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 880 PPs: 880
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: /archive LABEL: /archive
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO

# lslv -l archlv
archlv:/archive
PV COPIES IN BAND DISTRIBUTION
hdisk114 880:000:000 41% 363:363:154:000:000

# lspv -l hdisk114
hdisk114:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
loglv02 1 1 01..00..00..00..00 N/A
archlv 880 880 363..363..154..00..00 /archive

----------------------------------------------------------------------
Clusdb2 Node
# lsfs | grep archive2
/dev/archivelv2 -- /archive2 jfs 56623104 rw yes no

/etc/filesystems info

/archive2:
        dev       = /dev/archivelv2
        vfs       = jfs
        log       = /dev/hd8
        mount     = true
        options   = rw
        account   = false

# lslv archivelv2
LOGICAL VOLUME: archivelv2 VOLUME GROUP: rootvg
LV IDENTIFIER: 000cae240000d60000000121700bce66.14 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 128 megabyte(s)
COPIES: 2 SCHED POLICY: parallel
LPs: 216 PPs: 432
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: /archive2 LABEL: /archive2
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO

# lsvg rootvg
VOLUME GROUP: rootvg VG IDENTIFIER: 000cae240000d60000000121700bce66
VG STATE: active PP SIZE: 128 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 1092 (139776 megabytes)
MAX LVs: 256 FREE PPs: 362 (46336 megabytes)
LVs: 16 USED PPs: 730 (93440 megabytes)
OPEN LVs: 13 QUORUM: 1 (Disabled)
TOTAL PVs: 2 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 2 AUTO ON: no
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable

# lslv -l archivelv2
archivelv2:/archive2
PV COPIES IN BAND DISTRIBUTION
hdisk1 216:000:000 30% 109:066:000:000:041
hdisk0 216:000:000 0% 000:000:019:109:088

# lspv -l hdisk1
hdisk1:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
hd5 1 1 01..00..00..00..00 N/A
hd6 16 16 00..16..00..00..00 N/A
hd8 1 1 00..00..01..00..00 N/A
hd4 2 2 00..01..01..00..00 /
hd2 32 32 00..00..32..00..00 /usr
hd9var 8 8 00..00..08..00..00 /var
hd3 12 12 00..00..12..00..00 /tmp
hd1 1 1 00..00..01..00..00 /home
hd10opt 18 18 00..00..18..00..00 /opt
paging00 16 16 00..00..16..00..00 N/A
lv01 1 1 00..01..00..00..00 N/A
bmclv 16 16 00..16..00..00..00 /usr/bmc
lg_dumplv 8 8 00..08..00..00..00 N/A
loglv02 1 1 00..01..00..00..00 N/A
archivelv2 216 216 109..66..00..00..41 /archive2

# lspv -l hdisk0
hdisk0:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
hd5 1 1 01..00..00..00..00 N/A
hd6 16 16 00..16..00..00..00 N/A
hd8 1 1 00..00..01..00..00 N/A
hd4 2 2 00..00..02..00..00 /
hd2 32 32 00..00..32..00..00 /usr
hd9var 8 8 00..00..08..00..00 /var
hd3 12 12 00..00..12..00..00 /tmp
hd1 1 1 00..00..01..00..00 /home
hd10opt 18 18 00..00..18..00..00 /opt
paging00 16 16 00..00..16..00..00 N/A
lv01 1 1 00..01..00..00..00 N/A
bmclv 16 16 00..16..00..00..00 /usr/bmc
loglv02 1 1 00..01..00..00..00 N/A
tmp_oracle_lv 40 40 00..40..00..00..00 /dba_tempfs
archivelv2 216 216 00..00..19..109..88 /archive2
#

----------------------------------------------------------
These are the details of the Resource Groups (consdb1 and consdb2) on clusdb1 and clusdb2, respectively.

Resource Group information of node clusdb1
---------------------------
Resource Group Name consdb1
Participating Node Name(s) consdb1 consdb2
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Site Relationship ignore
Dynamic Node Priority
Service IP Label consdb1
Filesystems ALL
Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems/Directories to be exported
Filesystems to be NFS mounted
Network For NFS Mount
Volume Groups tmsvg busvg web2vg apxappsvg
Concurrent Volume Groups
Use forced varyon for volume groups, if necessary false
Disks
GMD Replicated Resources
PPRC Replicated Resources
ERCMF Replicated Resources
SVC PPRC Replicated Resources
Connections Services
Fast Connect Services
Shared Tape Resources
Application Servers consdb1_app_server
Highly Available Communication Links
Primary Workload Manager Class
Secondary Workload Manager Class
Delayed Fallback Timer
Miscellaneous Data
Automatically Import Volume Groups false
Inactive Takeover
SSA Disk Fencing false
Filesystems mounted before IP configured false

Resource Group Name consdb2
Participating Node Name(s) consdb2 consdb1
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Site Relationship ignore
Dynamic Node Priority
Service IP Label consdb2
Filesystems ALL
Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems/Directories to be exported
Filesystems to be NFS mounted
Network For NFS Mount
Volume Groups apexbinvg frsbinvg forumbinvg apexvg forumvg frsvg
Concurrent Volume Groups
Use forced varyon for volume groups, if necessary false
Disks
GMD Replicated Resources
PPRC Replicated Resources
ERCMF Replicated Resources
SVC PPRC Replicated Resources
Connections Services
Fast Connect Services
Shared Tape Resources
Application Servers consdb2_app_server
Highly Available Communication Links
Primary Workload Manager Class
Secondary Workload Manager Class
Delayed Fallback Timer
Miscellaneous Data
Automatically Import Volume Groups false
Inactive Takeover
SSA Disk Fencing false
Filesystems mounted before IP configured false
Run Time Parameters:

Node Name consdb1
Debug Level high
Format for hacmp.out Standard
--------------------------------------------------------------------
Resource Group information of node clusdb2
---------------------------------------------------------------------
Resource Group Name consdb1
Participating Node Name(s) consdb1 consdb2
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Site Relationship ignore
Dynamic Node Priority
Service IP Label consdb1
Filesystems ALL
Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems/Directories to be exported
Filesystems to be NFS mounted
Network For NFS Mount
Volume Groups tmsvg busvg web2vg apxappsvg
Concurrent Volume Groups
Use forced varyon for volume groups, if necessary false
Disks
GMD Replicated Resources
PPRC Replicated Resources
ERCMF Replicated Resources
SVC PPRC Replicated Resources
Connections Services
Fast Connect Services
Shared Tape Resources
Application Servers consdb1_app_server
Highly Available Communication Links
Primary Workload Manager Class
Secondary Workload Manager Class
Delayed Fallback Timer
Miscellaneous Data
Automatically Import Volume Groups false
Inactive Takeover
SSA Disk Fencing false
Filesystems mounted before IP configured false

Resource Group Name consdb2
Participating Node Name(s) consdb2 consdb1
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Site Relationship ignore
Dynamic Node Priority
Service IP Label consdb2
Filesystems ALL
Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems/Directories to be exported
Filesystems to be NFS mounted
Network For NFS Mount
Volume Groups apexbinvg frsbinvg forumbinvg apexvg forumvg frsvg
Concurrent Volume Groups
Use forced varyon for volume groups, if necessary false
Disks
GMD Replicated Resources
PPRC Replicated Resources
ERCMF Replicated Resources
SVC PPRC Replicated Resources
Connections Services
Fast Connect Services
Shared Tape Resources
Application Servers consdb2_app_server
Highly Available Communication Links
Primary Workload Manager Class
Secondary Workload Manager Class
Delayed Fallback Timer
Miscellaneous Data
Automatically Import Volume Groups false
Inactive Takeover
SSA Disk Fencing false
Filesystems mounted before IP configured false
Run Time Parameters:

Node Name consdb2
Debug Level high
Format for hacmp.out Standard
 
First of all, I see that /archive2 is on rootvg on clusdb2.
This file system needs to be in a separate VG, on shared storage.

You need to solve this first, before going any further.
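
If you do end up moving /archive2 to a brand-new shared VG, the rough shape of the job is below. Treat it strictly as a sketch: hdiskN, the major number 60, the backup path and the temporary mount point /archive2.new are placeholders, so adapt everything to your environment and verify each step before running it.

Back up the data before anything else:
# cd /archive2 ; tar -cvf /some/backup/archive2.tar .

Create the new VG on a disk both nodes can see, with a fixed major number and auto-varyon disabled, as HACMP expects for shared VGs:
# mkvg -y archive2vg -V 60 -n -s 128 hdiskN

Create a jfslog and the data LV, then the file system on a temporary mount point:
# mklv -y arch2loglv -t jfslog archive2vg 1
# logform /dev/arch2loglv
# mklv -y arch2lv -t jfs archive2vg 216
# crfs -v jfs -d arch2lv -m /archive2.new -a log=/dev/arch2loglv -A no

Copy the data across, then swap the mount points:
# mount /archive2.new
# cd /archive2 ; tar -cf - . | ( cd /archive2.new ; tar -xpf - )
# cd /
# umount /archive2 ; umount /archive2.new
# rmfs /archive2
# chfs -m /archive2 /archive2.new

Finally, make the VG known on the other node with the same major number:
# varyoffvg archive2vg
and on the peer node: importvg -V 60 -y archive2vg hdiskN, then chvg -a n archive2vg and varyoffvg archive2vg.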

What you're asking is not easy to answer, because there are a lot of things to be aware of.

The simplest approach is to create /archive and /archive2 on existing shared VGs.
If you have enough space, you could define /archive in tmsvg, busvg, web2vg or apxappsvg (VGs already in the consdb1 resource group), and /archive2 in apexbinvg, frsbinvg, forumbinvg, apexvg, forumvg or frsvg (VGs already in the consdb2 resource group).
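
For the space check: /archive uses 880 PPs of 64 MB (roughly 55 GB) and /archive2 holds 216 LPs of 128 MB (roughly 27 GB), so whichever VG you pick needs at least that much free, plus headroom for growth. Something like this on clusdb1 shows what is available:

# lsvg tmsvg busvg web2vg apxappsvg | grep -E "VOLUME GROUP|FREE PPs"

and the same on the other node for apexbinvg, frsbinvg, forumbinvg, apexvg, forumvg and frsvg.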

The safest way is to do that from the smit hacmp --> C-SPOC menus, so both nodes are aware of the changes.
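
On HACMP 5.x the C-SPOC path looks roughly like this (menu wording shifts a little between releases, so treat it as a pointer rather than an exact map):

# smitty cl_admin
   -> HACMP Logical Volume Management
      -> Shared File Systems
         -> Journaled File Systems
            -> Add a Journaled File System

Since both resource groups have Filesystems set to ALL, a file system created in a VG that already belongs to the resource group should be picked up automatically; after that, verify and synchronize the cluster (smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization) and confirm the mount follows the resource group on a test fallover.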

Anyway, what you're asking is not easy, and should be done by a certified AIX professional.
 