
NSR clients within suncluster 3.0


lill (MIS) · Dec 17, 2002
Dear All,

I have a problem taking backups of NSR clients within SunCluster 3.0.

Environment
2 Sun machines running Solaris 8, clustered with SunCluster 3.0.
Two "virtual" machines were created by the customer through SunCluster: one "virtual" machine is owned by one physical node, and the second "virtual" machine by the other physical node.
Legato NSR 6.1.1 is installed on the 2 physical nodes (I did not install NSR 6.1.3 because the other SAN nodes & clients run 6.1.1, and I wanted to stay consistent with the NSR versions).
FSC NSR Server 6.1A00 + bugfixes up to 005, running on a PrimePower under Solaris 8.

Configuration
At the NSR level, I created 4 clients: 2 for the physical nodes and 2 for
the "virtual" machines.

Goal
Back up the filesystems of the 2 physical nodes of the cluster (mailsrv & calsrv) and also the filesystems of the 2 virtual machines (mail-logical & cal-logical).


Problems
When I run savegrp -pv -c "virtual client" <groupname>:

1) I received an error message specifying that the filesystem belongs to the physical machine and not the "virtual" machine:
savefs: path /global/opt/mail by default belongs to client mailsrv and NOT client mail-logical!
savefs: Searching for NetWorker bin 'pathownerignore' file.
savefs: Default client index for scheduled save will be that of mailsrv.
savefs: path /global/data/mail2 by default belongs to client mailsrv and NOT client mail-logical!
savefs: Default client index for scheduled save will be that of mailsrv.

2) I then specified All as the save set, but I also got an error message (see below):
* mail-logical:All rcmd mail-logical, user root: `savefs -s mrbcsan -c mail-logical -g MAILSRV -p -l full -R -v'
* mail-logical:All nsrexec: authtype nsrexec
* mail-logical:All savefs: nothing to save
savefs mail-logical: failed.



3) I then created a pathownerignore file in the /usr/sbin directory (the directory where the savefs command resides), but I still got an error message:
savefs: path /global/data/mail1 by default belongs to client mailsrv and NOT client mail-logical!
savefs: Searching for NetWorker bin 'pathownerignore' file.
savefs: Detected ownership override file /usr/sbin/pathownerignore.
savefs: Default client index for scheduled save will be that of mailsrv.
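
For reference, the override file is just an empty file created in the directory that holds the savefs binary; a minimal sketch, assuming savefs lives in /usr/sbin as above:

touch /usr/sbin/pathownerignore

As the output shows, the file is detected, yet the scheduled save still indexes under the physical client mailsrv.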


Information
Please find below the /etc/vfstab from both physical nodes:
#device                  device                   mount                   FS    fsck  mount   mount
#to mount                to fsck                  point                   type  pass  at boot options
#
#/dev/dsk/c1d0s2         /dev/rdsk/c1d0s2         /usr                    ufs   1     yes     -
fd                       -                        /dev/fd                 fd    -     no      -
/proc                    -                        /proc                   proc  -     no      -
#/dev/dsk/c3t0d0s1       -                        -                       swap  -     no      -
/dev/md/dsk/d43          -                        -                       swap  -     no      -
/dev/md/dsk/d40          /dev/md/rdsk/d40         /                       ufs   1     no      logging
#/dev/dsk/c3t0d0s4       /dev/rdsk/c3t0d0s4       /globaldevices          ufs   2     yes     -
#/dev/dsk/c3t0d0s6       /dev/rdsk/c3t0d0s6       /export/home            ufs   2     yes     -
/dev/md/dsk/d46          /dev/md/rdsk/d46         /export/home            ufs   2     yes     logging
swap                     -                        /tmp                    tmpfs -     yes     -
#/dev/did/dsk/d26s4      /dev/did/rdsk/d26s4      /global/.devices/node@1 ufs   2     no      global
/dev/md/dsk/d49          /dev/md/rdsk/d49         /global/.devices/node@1 ufs   2     no      global
/dev/md/mailds/dsk/d64   /dev/md/mailds/rdsk/d64  /global/opt/mail        ufs   2     yes     global,logging
/dev/md/mailds/dsk/d67   /dev/md/mailds/rdsk/d67  /global/data/mail1      ufs   2     yes     global,logging
/dev/md/mailds/dsk/d70   /dev/md/mailds/rdsk/d70  /global/data/mail2      ufs   2     yes     global,logging
/dev/md/calds/dsk/d73    /dev/md/calds/rdsk/d73   /global/opt/cal         ufs   2     yes     global,logging
/dev/md/calds/dsk/d76    /dev/md/calds/rdsk/d76   /global/data/cal        ufs   2     yes     global,logging
/dev/md/ldapds/dsk/d79   /dev/md/ldapds/rdsk/d79  /global/opt/ldap        ufs   2     yes     global,logging
/dev/md/ldapds/dsk/d82   /dev/md/ldapds/rdsk/d82  /global/data/ldap       ufs   2     yes     global,logging


Note that both physical nodes see the same filesystems. They also see those of the "virtual" machines.

Output of the scstat command (this command reports the Sun Cluster status):

-- Cluster Nodes --

Node name Status
--------- ------
Cluster node: mailsrv Online
Cluster node: calsrv Online

------------------------------------------------------------------

-- Cluster Transport Paths --

Endpoint Endpoint Status
-------- -------- ------
Transport path: mailsrv:eri0 calsrv:eri0 Path online
Transport path: mailsrv:qfe3 calsrv:qfe3 Path online

------------------------------------------------------------------

-- Quorum Summary --

Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3


-- Quorum Votes by Node --

Node Name Present Possible Status
--------- ------- -------- ------
Node votes: mailsrv 1 1 Online
Node votes: calsrv 1 1 Online


-- Quorum Votes by Device --

Device Name Present Possible Status
----------- ------- -------- ------
Device votes: /dev/did/rdsk/d2s2 1 1 Online

------------------------------------------------------------------

-- Device Group Servers --

Device Group Primary Secondary
------------ ------- ---------
Device group servers: mailds mailsrv calsrv
Device group servers: calds calsrv mailsrv
Device group servers: ldapds calsrv mailsrv
Device group servers: rmt/1 - -
Device group servers: rmt/2 - -


-- Device Group Status --

Device Group Status
------------ ------
Device group status: mailds Online
Device group status: calds Online
Device group status: ldapds Online
Device group status: rmt/1 Offline
Device group status: rmt/2 Offline

------------------------------------------------------------------

-- Resource Groups and Resources --

Group Name Resources
---------- ---------
Resources: mail-rg mailip-res maildisks-res ldap-server-res mail-server-res
Resources: cal-rg calip-res caldisks-res ics-server-res
Resources: ldap-rg ldapip-res ldapdisks-res


-- Resource Groups --

Group Name Node Name State
---------- --------- -----
Group: mail-rg mailsrv Online
Group: mail-rg calsrv Offline

Group: cal-rg calsrv Online
Group: cal-rg mailsrv Offline

Group: ldap-rg calsrv Online
Group: ldap-rg mailsrv Offline


-- Resources --

Resource Name Node Name State Status Message
------------- --------- ----- --------------
Resource: mailip-res mailsrv Online Online - LogicalHostname online.
Resource: mailip-res calsrv Offline Offline - LogicalHostname offline.

Resource: maildisks-res mailsrv Online Online
Resource: maildisks-res calsrv Offline Offline

Resource: ldap-server-res mailsrv Online Online - Service is online.
Resource: ldap-server-res calsrv Offline Offline - Successfully stopped Netscape Directory Server.

Resource: mail-server-res mailsrv Online Online - Service started successfully
Resource: mail-server-res calsrv Offline Offline - Stop Succeeded

Resource: calip-res calsrv Online Online - LogicalHostname online.
Resource: calip-res mailsrv Offline Offline

Resource: caldisks-res calsrv Online Online
Resource: caldisks-res mailsrv Offline Offline

Resource: ics-server-res calsrv Online Degraded - Service is degraded.
Resource: ics-server-res mailsrv Offline Offline

Resource: ldapip-res calsrv Online Online - LogicalHostname online.
Resource: ldapip-res mailsrv Offline Offline

Resource: ldapdisks-res calsrv Online Online
Resource: ldapdisks-res mailsrv Offline Offline

------------------------------------------------------------------


The NetWorker application is not cluster-aware (it is not managed by SunCluster 3.0).

Note that the customer does not have DNS, so I had to specify the IP addresses and hostnames of the physical & virtual machines in the /etc/hosts of the physical nodes of the SunCluster and also on the NSR server.
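
For illustration, the /etc/hosts entries would look something like this (the addresses below are made up; use the customer's real ones):

192.168.1.1    mailsrv
192.168.1.2    calsrv
192.168.1.11   mail-logical
192.168.1.12   cal-logical
192.168.1.21   mrbcsan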


How do I properly configure the NSR clients of the SunCluster 3.0?

Thanks.
Regards,

Lillo
 
I'm having a similar problem. I have 2 physical servers and 2 virtual ones. The problem in my case is that the shared filesystems are mounted globally: when the physical backup starts with a save set of All, it backs up everything. The downside is that I'm backing up the global filesystems twice, once for each physical node.

What I wanted to do was run a save set of All on the physical client with a UNIX directive to skip the global filesystems (see the sketch below). I was then going to create a virtual client name and specify the global filesystems one by one (this can be tedious if there are a lot of them). The problem is that the indexes are saved under the physical name, so every time the virtual client starts his backup he runs a full.

You could try this scenario. You would be covered in the event they add a new global filesystem: the physical client backup would pick it up, since the directive only skips what you originally told it to. If you have any ideas of your own on how best to go about this, I would appreciate that.
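
For instance, a skip directive along these lines on the physical clients (a rough sketch of NetWorker directive syntax; whether to skip the whole /global tree or individual filesystems is your call):

<< / >>
skip: global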

Ed Coty
 
I found the solution to my problem!
Please see below.

Environment
· Two Solaris 8 physical nodes (mailsrv & calsrv) clustered with SunCluster 3.0
· Each physical node owns one application (mail for mailsrv and calendar for calsrv)
· Each physical node is a SAN Node. It detects 4 LTO drives inside a Scalar 100 robot

Documents
· Read the Legato NetWorker Rel. 6.1.x, UNIX version, Release Supplement Chapter 3
· In Chapter 3, go to “Installing Only NetWorker Client Software in a Cluster” p. 85

Problems/Remarks
In task 2: Configure NetWorker Client Software as Highly Available

Define the physical & logical machines in the /etc/hosts of both physical nodes of the cluster, and in the /etc/hosts of mrbcsan (the NSR server) as well.

The networker.cluster script reported errors when traced with 'set -x':
"ERROR: can not find package LGTOserv"
"test: missing argument ..."

I fixed this by modifying the networker.cluster script, replacing LGTOserv with LGTOclnt on the affected lines.
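
Something along these lines (a sketch; keep a copy of the original script and review each match before replacing wholesale):

cp networker.cluster networker.cluster.orig
sed 's/LGTOserv/LGTOclnt/g' networker.cluster.orig > networker.cluster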

After those modifications, the LGTO.clnt resource type was defined in the Sun cluster.
Check with the command 'scrgadm -pv', in the SunPlex Manager web tool, or in the /var/adm/messages file.
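
For example, to filter the rather long output (the grep pattern is just a suggestion):

scrgadm -pv | grep -i lgto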

In task 3: Create instances of the Client Resource Type
We must execute, on one node, the 'scrgadm -a -j <resource name> ...' command for each virtual machine:
scrgadm -a -j lgto-mail-res -g mail-rg -t LGTO.clnt -x clientname=mail-logical -x owned_paths=/global/data/mail1,/global/data/mail2,/global/opt/mail

where lgto-mail-res is a newly created resource within the SunCluster

scrgadm -a -j lgto-cal-res -g cal-rg -t LGTO.clnt -x clientname=cal-logical -x owned_paths=/global/data/cal,/global/opt/cal

where lgto-cal-res is a newly created resource within the SunCluster
From the SunPlex Manager launched via a web browser:
in the Actions menu, choose Enable Resource (this action executes the command 'scswitch -e -j lgto-mail-res')
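
Equivalently, from the command line on one node, one scswitch per resource:

scswitch -e -j lgto-mail-res
scswitch -e -j lgto-cal-res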

Check the output of the command '/usr/sbin/lcmap', which should give you something like:

root@calsrv # lcmap
type: NSR_CLU_TYPE;
clu_type: NSR_LC_TYPE;
interface version: 1.0;

type: NSR_CLU_VIRTHOST;
hostname: mailsrv;
owned paths: /global/.devices/node@1;

type: NSR_CLU_VIRTHOST;
hostname: calsrv;
owned paths: /global/.devices/node@2;

type: NSR_CLU_VIRTHOST;
hostname: cal-logical;
owned paths: /global/data/cal, /global/opt/cal;

type: NSR_CLU_VIRTHOST;
hostname: mail-logical;
owned paths: /global/data/mail1, /global/data/mail2, /global/opt/mail;

Create NSR clients for the physical & virtual machines. You can put 'All' as the save set for all NSR clients.
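
For example, the clients can be created from the NSR server with nsradmin (a sketch; the server and group names are the ones from this thread, adjust to your own setup):

# nsradmin -s mrbcsan
nsradmin> create type: NSR client; name: mail-logical; save set: All; group: MAILSRV
nsradmin> create type: NSR client; name: cal-logical; save set: All; group: MAILSRV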
 