
Search results for query: *

  1. julianbarnett

    Jukebox drives dedicated to a storage node

    Thanks for those replies - you are right, I need jukebox sharing. For flexibility we have zoned all drive paths to both the servers and the storage nodes, but then configured Networker to use the desired path via the rmt devices. In other words, with 2 HBAs per server, Networker inquire can see 20...
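    A minimal sketch of checking those paths from the server side, assuming the classic NetWorker command-line tools (the commands are standard utilities; nothing here is taken from the post itself):

      # list the SCSI/FC device paths visible to NetWorker on this host
      inquire

      # show which rmt paths are actually configured as device resources
      nsradmin> print type: NSR device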
  2. julianbarnett

    Jukebox drives dedicated to a storage node

    Thanks for that. I have configured the last 2 drives in the jukebox solely with AIX device paths - will see how it runs tonight. I think the DDS option confused me - I thought it was the only way a library could be shared between a server and a storage node.
  3. julianbarnett

    Jukebox drives dedicated to a storage node

    Hi - we have a SAN-connected Scalar I2K with 10 LTO4 drives attached to a CPU-bound Solaris Networker server. I tried using DDS to share out some of the drives to a SAN-connected AIX storage node and alleviate the load on the Solaris server. However, using this configuration, if the server is...
  4. julianbarnett

    disable bootstrap print Legato

    The bootstrap records are saved to tape - the notification just tells you where the most recent bootstraps are - tape and file position on tape. Imagine your networker server has a disk crash and you lose the entire /nsr partition. In order to recover you need to reinstall the networker software...
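    As a hedged sketch of the recovery steps being described, using the classic NetWorker disaster-recovery commands (the tape device path is illustrative):

      # after reinstalling the NetWorker software, scan the volume named
      # in the bootstrap notification for the most recent bootstrap
      scanner -B /dev/rmt/0cbn

      # then recover the media database and resource files from it
      mmrecov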
  5. julianbarnett

    disable bootstrap print Legato

    I believe you just need to change the Action in the Bootstrap Notification - this info is important for Disaster Recovery so you may prefer to email it. On Unix we use the Action: /usr/ucb/mail -s "Bootstrap for <backup_server>" <mail.address@your.domain>.
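    A hedged sketch of making that change with nsradmin (resource and attribute names as in classic NetWorker; the notification resource is assumed to be named Bootstrap):

      nsradmin> . type: NSR notification; name: Bootstrap
      nsradmin> update action: "/usr/ucb/mail -s \"Bootstrap for <backup_server>\" <mail.address@your.domain>"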
  6. julianbarnett

    Savegrp command with Save Sets

    savegrp -R <group_name> This option is used to restart a group that was stopped, or to retry savesets that failed. The restart window attribute of the group is used to determine if it is too late to be restarted. If the window has elapsed, the restart is converted into a fresh start.
  7. julianbarnett

    Can't see details of the client under info tab

    You will see this behaviour if the client in question is a Windows Cluster virtual resource - it uses the client resource of the underlying physical server. We have lots of these - e.g. db204a (physical server A), db204b (physical server B) and db204v (Virtual IP resource with shared disks). The...
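    One hedged way to confirm where the shared-disk savesets are indexed, reusing the client names from the example above:

      # savesets for the shared disks should show up under the virtual
      # client resource, not under either physical node
      mminfo -q "client=db204v" -r "client,name,savetime"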
  8. julianbarnett

    With mminfo how do I find the flags "full" and "recycleable"

    oops - missed final ' ! mminfo -m -q "pool=<POOL>,full,!volrecycle" | grep -v volume | awk '{ print $1 }'
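    A slightly tidier variant, assuming mminfo's -r report option, which avoids the grep/awk post-processing entirely:

      mminfo -q "pool=<POOL>,full,!volrecycle" -r volume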
  9. julianbarnett

    Savegrp command with Save Sets

    The Group Restart function in NMC is supposed to address this - as long as the group restart window has not been exceeded, only the failed savesets will be retried. We have updated all our daily groups to have a restart window of 23:59 to allow reruns to be performed up to 24 hours after initial...
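    A sketch of the corresponding nsradmin change (attribute name as described above; the group name is illustrative):

      nsradmin> . type: NSR group; name: Daily
      nsradmin> update restart window: "23:59"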
  10. julianbarnett

    With mminfo how do I find the flags "full" and "recycleable"

    mminfo -m -q "pool=<POOL>,full,!volrecycle" | grep -v volume | awk '{ print $1 } Should do the trick
  11. julianbarnett

    TEST SCSI SPEED

    I would apply the patch, power cycle the L9 and reboot the system - that should clear the error and reset the SCSI channel to Fast/Wide. If it still runs slowly and comes up with SCSI errors, then it's time to get the L9 and the SCSI card looked at.
  12. julianbarnett

    ufsdump and large files

    Wonder if this is ufsdump or the filesystem causing the problem. Can you make a large file in the target partition? e.g. mkfile -v 4000m /devbackup/largefile.
  13. julianbarnett

    ufsdump and large files

    It's not Volume Manager - it's the filesystem you have mounted. The mount command will show largefiles against filesystems that are largefile enabled - I believe the standard ufs mkfs and mount options assume applications are largefile aware. Veritas, I think, assumes nolargefiles by default and...
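    A quick hedged check on Solaris (the mount point is carried over from the earlier mkfile example):

      # largefile-enabled filesystems list "largefiles" among their
      # mount options
      mount -v | grep /devbackup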
  14. julianbarnett

    TEST SCSI SPEED

    P.S. What version of Solaris are you running? Check to see if you have the latest glm driver patch applied - this is 109885-16 for Solaris 8; 109885-12 has some fixes for phase parity problems. Regards Julian
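    Two standard Solaris checks for this (a sketch; the revision numbers are the ones quoted above):

      # confirm which revision of patch 109885 is installed
      showrev -p | grep 109885

      # confirm the glm driver is loaded and at what version
      modinfo | grep glm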
  15. julianbarnett

    TEST SCSI SPEED

    OK, so all drives are Fibre attached and the only thing on the SCSI channel is the L9. From the glm driver man page: "Target <id> reverting to async. mode" - A second data transfer hang was detected for this target. The driver attempts to eliminate this problem by reducing the data transfer...
  16. julianbarnett

    TEST SCSI SPEED

    Can you post an ls -l of /dev/dsk/c1t1d0s0, /dev/dsk/c5t0d0s0 and /dev/rmt/0? Regards Julian
  17. julianbarnett

    Solaris FTP error - Broken pipe

    A couple of obvious questions - is there sufficient space on the server to accommodate the file, and are there any other processes using this file at the same time? I have seen this where an automated process is picking up a file and renaming it before the file transfer has completed. Finally, is...
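    A hedged sketch of those two checks (the paths are illustrative):

      # is there enough free space in the destination filesystem?
      df -k /target/dir

      # is any other process holding the file open right now?
      fuser /target/dir/incoming.dat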
  18. julianbarnett

    TEST SCSI SPEED

    What devices are on /pci@8,700000/scsi@5,1? Try 'format </dev/null' to see if they are disks; otherwise this is the tape drive channel. Have you added any SCSI devices recently? The error messages indicate that target 1 LUN 0 on the above SCSI channel has caused it to degrade from Fast/Wide...
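    A hedged illustration of mapping the logical device names back to that physical path (the symlink target shown is hypothetical, not from the thread):

      ls -l /dev/rmt/0
      # a target such as ../../devices/pci@8,700000/scsi@5,1/st@1,0:
      # would place the tape drive at target 1 on the suspect channel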
  19. julianbarnett

    JNIC v5.3.0.1

    Maybe the driver has changed name - was the previous driver reporting as JNI,FCE, which was already in the lus.conf file? I know our "supported" LP9002S HBA cards did not have an entry for their driver name (lpfs) in lus.conf, and until I added it and reloaded the lus module we saw...
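    For illustration only - a hypothetical lus.conf entry written in the standard Solaris driver.conf style, where the parent value is the part that must match your HBA driver name:

      name="lus" parent="lpfs" target=0 lun=0;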
  20. julianbarnett

    Multiple Save Groups vs One Large Save Group

    Do you run all four groups at once, or stagger them, and do they all write to the same tape pool? It may be that, as a single group, client savesets were not started but queued until previous ones finished. Now you may find the client save starts, has no resource available (max concurrency reached...
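    A hedged way to review the limits involved (resource and attribute names from classic NetWorker; the client name is a placeholder):

      # check the client's parallelism setting, then compare the group
      # start times against the server's parallelism attribute
      nsradmin> print type: NSR client; name: <client>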
