Thanks for those replies - you're right, I need jukebox sharing. For flexibility we have zoned all drive paths to both the servers and the storage nodes, but then configured NetWorker to use the desired path via the rmt devices. In other words, with 2 HBAs per server, NetWorker inquire can see 20...
Thanks for that.
I have configured the last 2 drives in the jukebox solely with AIX device paths - will see how it runs tonight. I think the DDS option confused me - I thought it was the only way a library could be shared between a server and a storage node.
Hi - we have a SAN-connected Scalar I2K with 10 LTO4 drives attached to a CPU-bound Solaris NetWorker server.
I tried using DDS to share out some of the drives to a SAN-connected AIX storage node and alleviate the load on the Solaris server. However, using this configuration, if the server is...
The bootstrap records are saved to tape - the notification just tells you where the most recent bootstraps are: the tape and the file position on that tape. Imagine your NetWorker server has a disk crash and you lose the entire /nsr partition. In order to recover you need to reinstall the NetWorker software...
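As a very rough sketch of that recovery sequence (server and device names are placeholders, not from the original post): once the software is reinstalled and a tape device is configured, you load the bootstrap tape and run

mmrecov

which, as far as I recall, prompts for the device and for the save set id, file and record numbers reported in the bootstrap notification, then restores the media database and the /nsr/res resource files.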
I believe you just need to change the Action in the Bootstrap Notification - this info is important for disaster recovery, so you may prefer to email it. On Unix we use the Action /usr/ucb/mail -s "Bootstrap for <backup_server>" <mail.address@your.domain>.
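If you want to sanity-check the mail command before putting it in the notification (the server name and address below are just placeholders):

echo "bootstrap notification test" | /usr/ucb/mail -s "Bootstrap for mybackupserver" admin@example.com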
savegrp -R <group_name>
This option is used to restart a group that was stopped, or whose savesets failed and need to be retried. The group's restart window attribute determines whether it is too late to restart it; if the window has elapsed, the restart is converted into a fresh start.
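A rough example, with Daily as a placeholder group name - you can check the restart window in nsradmin before rerunning:

nsradmin -s mybackupserver
nsradmin> print type: NSR group; name: Daily
nsradmin> quit
savegrp -R Daily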
You will see this behaviour if the client in question is a Windows Cluster virtual resource - it uses the client resource of the underlying physical server. We have lots of these - e.g. db204a (physical server A), db204b (physical server B) and db204v (Virtual IP resource with shared disks). The...
The Group Restart function in NMC is supposed to address this - as long as the group's restart window has not been exceeded, only the failed savesets will be retried.
We have updated all our daily groups to have a restart window of 23:59 to allow reruns to be performed up to 24 hours after the initial...
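For reference, a sketch of how that can be set from the command line (Daily is a placeholder group name; the same change can of course be made in NMC):

nsradmin -s mybackupserver
nsradmin> . type: NSR group; name: Daily
nsradmin> update restart window: "23:59"
nsradmin> quit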
I would apply the patch, power cycle the L9 and reboot the system - that should clear the error and reset the SCSI channel to Fast/Wide. If it still runs slowly and comes up with SCSI errors then it's time to get the L9 and the SCSI card looked at.
Wonder if this is ufsdump or the filesystem causing the problem - can you make a large file in the target partition?
e.g. mkfile -v 4000m /devbackup/largefile.
It's not Volume Manager - it's the filesystem you have mounted. The mount command will show largefiles against filesystems that are largefile enabled - I believe the standard ufs mkfs and mount options assume applications are largefile aware. Veritas, I think, assumes nolargefiles by default and...
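A quick way to check (the mount point is a placeholder), and for ufs you can remount with large file support:

mount | grep /devbackup    # look for 'largefiles' in the mount options
mount -F ufs -o remount,largefiles /devbackup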
P.S. What version of Solaris are you running? Check to see if you have the latest glm driver patch applied - this is 109885-16 for Solaris 8; 109885-12 has some fixes for phase parity problems.
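To check what revision you have installed:

showrev -p | grep 109885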
Regards Julian
OK so all drives are Fibre attached and the only thing on the SCSI channel is the L9.
From the glm driver man page...
Target <id> reverting to async. mode
A second data transfer hang was detected for this target. The driver attempts to eliminate this problem by reducing the data transfer...
Couple of obvious questions - is there sufficient space on the server to accommodate the file, and are there any other processes using the file at the same time? I have seen this where an automated process picks up a file and renames it before the transfer has completed. Finally, is...
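Couple of quick checks along those lines (the paths below are placeholders):

df -k /transfer            # free space on the target filesystem
fuser /transfer/file.dat   # any other process holding the file open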
What devices are on /pci@8,700000/scsi@5,1?
Try 'format </dev/null' to see if they are disks; otherwise this is the tape drive channel.
Have you added any SCSI devices recently?
The error messages indicate that target 1 LUN 0 on the above SCSI channel has caused it to degrade from Fast/Wide...
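To see what actually sits on that channel (the device path is taken from your error messages), check where the device links point:

ls -l /dev/rmt | grep 'pci@8,700000/scsi@5,1'
ls -l /dev/dsk | grep 'pci@8,700000/scsi@5,1'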
Maybe the driver has changed name - was the previous driver reporting as JNI,FCE (which was already in the lus.conf file)? I know our "supported" LP9002S HBA cards did not have an entry for their driver name (lpfs) in lus.conf, and until I added it and reloaded the lus module we saw...
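Roughly what I did - the file location is from our Solaris install and may vary on yours, so treat this as a sketch:

grep lpfs /usr/kernel/drv/lus.conf   # is the HBA driver name listed?
update_drv -f lus                    # after editing, force lus to reread its .conf (a reboot also works)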
Do you run all four groups at once or stagger them, and do they all write to the same tape pool? It may be that as a single group the client savesets were not started but queued up until the previous ones finished. Now you may find the client save starts, has no resource available (max concurrency reached...
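It might be worth checking the parallelism settings as well (server and client names below are placeholders):

nsradmin -s mybackupserver
nsradmin> print type: NSR client; name: client01
nsradmin> print type: NSR
nsradmin> quit

and comparing the client/server parallelism against the number of savesets the four groups now start together.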