aborted due to inactivity

Status
Not open for further replies.

hkc

Technical User
Oct 22, 2000
6
AU
Environment:
Networker V5.5.2 Build 165 and NT40 SP6

The following failure messages drive me crazy from time to time during backups, as they do not happen every time.

nsrexec: Attempting a kill on remote save
aborted due to inactivity

The client was running fine during the backup and no errors were found in the event log.
Does that error message mean that the client was hanging during the backup, or that it was too busy to respond to NetWorker?
The current value of Client retries is "1" and Inactivity timeout is "30".
But according to the daemon log file it actually tried twice.
If anyone has already come across this problem, please let me know how I can get rid of it.

Thanks in advance
hkc




 
We have the same problem with NetWorker 5.7.
The problem only occurs on clients with multiple interfaces.
As soon as the backup time exceeds the configured inactivity timeout, the save process is killed.
It seems to be a bug in NetWorker, but our reseller hasn't been able to confirm this yet.
For the moment we have set the inactivity timeout parameter to zero as a workaround.
You may want to take a look at the following URL:

Gertjan Idema
 
I have done the same thing -- set the inactivity timeout to 0 -- and my previously failed operations are now successful
 
Any time I ever received that error, it was due to misconfigured network settings. Make certain that the network cards, primary and secondary, are both set to the same speed/duplex as the switch they connect to. For example, if the NetWorker server is set to 100 Mbps / Full Duplex, the NetWorker client should be set to 100 Mbps / Full Duplex and the switch port should be set to 100 Mbps / Full Duplex. Also, verify the client's Aliases field and ensure all entries are there with properly configured FQDNs, use the rpcinfo -p clientname command to verify communication, and make sure that nslookup resolves the name both forward and reverse. As soon as I corrected all of these issues, my errors disappeared. Setting the timeout to 0 is effective, but it may let you overlook other problems that exist.
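If you want to script the forward/reverse lookup check described above, something like this works (a minimal sketch using only the standard library; the client name in the example is a placeholder you would replace with your own):

```python
import socket

def check_name_resolution(hostname):
    """Check that forward and reverse DNS lookups agree for a host.

    Returns (ip, reverse_name). A mismatch between `hostname` and
    `reverse_name` is exactly the kind of name-resolution problem
    that can show up as inactivity timeouts in NetWorker.
    """
    ip = socket.gethostbyname(hostname)                      # forward lookup
    reverse_name, _aliases, _ips = socket.gethostbyaddr(ip)  # reverse lookup
    return ip, reverse_name

# Example (replace "client1" with an actual client name):
# ip, name = check_name_resolution("client1")
# print(ip, name)
```

Run it once per client from the backup server; any exception or mismatched name points at the DNS/HOSTS configuration rather than NetWorker itself.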

Good luck,

CF
 
I am getting this error, but cannot find a "timeout" setting on the Windows NT version. Is this option available under Windows NT?

Tnx!

Little O
 
All,

Thanks for your reply.
We have been using "Auto" for the Media Speed and Duplex setting
on both server and client.
The Alias field has the hostname and the correct FQDN as well.
nslookup also returns OK.
I'll change the Inactivity Timeout (Edit under a group) value to 0 for a while and see how it goes.
 
re "Auto" for the mode/frame-type and network speed. BEWARE - we had several servers configured to auto detect but our switches (unknown to us at the time) didn't support auto sensing and the NICs defaulted to Half-Duplex. This caused the backups to run very slowly. This was fixed by forcing the NICs to Full-Duplex/100Mbps.
 
I have had the same problem, and as I see it, it's not a bug but a configuration issue.

I have been backing up several clients simultaneously, and the parallelism for the backup server was set to the default (4).
In my case client-1 had 3 filesystems and client-2 had 4 filesystems, so the rest of the clients didn't get a save session until some of the filesystems of client-1 and client-2 were done. If the time before any save session becomes available exceeds the timeout value, you get the error you asked about (aborted due to inactivity).

You can address this problem by increasing the timeout so the clients can "idle" while other backup clients hold the available save sessions.
You can also increase the parallelism to allow more clients to be backed up simultaneously.
You can also try using the priority settings for each client to help avoid inactivity problems (set a higher priority for slow systems with small amounts of data).

Of course, all of these parameters have performance implications to consider. For instance, if you set the parallelism too high, your system will try to back up too many filesystems at the same time and performance will suffer.
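The queueing effect described above can be sketched with a small simulation (a rough illustration of FIFO session scheduling, not NetWorker's actual scheduler; the durations and parallelism value are made-up numbers):

```python
import heapq

def start_times(durations, parallelism):
    """Simulate FIFO scheduling of save sets onto a fixed number of sessions.

    durations: run time of each queued save set, in minutes.
    Returns how long each save set waits before it starts; a wait
    longer than the inactivity timeout aborts that client's save.
    """
    slots = [0.0] * parallelism          # when each session slot becomes free
    heapq.heapify(slots)
    starts = []
    for d in durations:
        free_at = heapq.heappop(slots)   # take the earliest free slot
        starts.append(free_at)
        heapq.heappush(slots, free_at + d)
    return starts

# 7 save sets of 45 minutes each with parallelism 4: the last three
# wait 45 minutes before starting, which would exceed a 30-minute
# inactivity timeout.
waits = start_times([45] * 7, parallelism=4)
```

This makes it easy to see why raising either the timeout or the parallelism makes the error go away.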
//B
 
This usually works for this problem:
Go to the HOSTS file on the Legato server and create entries with the IP, NetBIOS name, and FQDN for every Legato client. On the Legato clients, edit the HOSTS files to contain this information for the Legato server. So, if you have 10 clients, your Legato server's HOSTS file will have 10 entries and each client will have 1 entry (the server). Someone said that you need to stop and restart the Legato services after you do this. Probably not a bad idea.
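The entries use the standard HOSTS file layout (IP, then FQDN, then short name); the addresses and names below are made-up examples, not anyone's actual configuration:

```
# On the Legato server - one line per client:
192.168.1.11   client1.example.com    client1
192.168.1.12   client2.example.com    client2

# On each Legato client - one line for the server:
192.168.1.10   nwserver.example.com   nwserver
```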

If this doesn't help, look at your network settings.
 
The only problem with setting your inactivity timeout to 0 is that the group never times out. If a group is using a pool that happens to be low and eventually runs out of tapes, it will keep hogging the drive until the group is manually stopped, and other groups will fail.


We have 11 drives with approximately 400 jobs running every day, so if one job starts hogging a drive like that it can lead to all sorts of fun.
 
I have this problem with our Legato backup. Everything was working fine then I added a few more clients and now nothing works! Not one client. Any advice?
 
This problem occurred on our StorageTek library, where four drives were controlled by one SCSI controller over a poor network line. We added another SCSI controller so that each controlled two drives, and the problem was resolved.
 
My Legato 6.1 runs just fine for a while, writing 15 MB/s to tape. Then it almost stops, writing just 50-100 KB/s for an hour; then I get 15 MB/s for 5 minutes, and then it's back to the poor 50-100 KB/s... I do not find anything strange in the logs. Could it be a bad tape? How can I see if there are a lot of rewrite attempts on a tape? It's not a network problem or a client problem, since it happens even with a lot of parallel data streams. Please help!
 
We've noticed the slowdown as well. I believe it occurs when a directory with a large number of files is being backed up.
 
The variation in write speed was explained to me about a month ago on this forum. Basically, if the files are big you will get good throughput. If there are lots of small files, the speed will drop to KB/s because the drive continuously has to write to its buffer and reorganise positioning.

You could try experimenting with parallelism to speed things up, but apart from not backing up small files there is no workaround.
 
Do not forget that this effect may also occur if you run slow backup devices. I had the problem doing local backups to a DLT4000 or a DAT drive, both running at about 1.3 to 1.5 MB/s. However, if I use a DLT7000 or a file device, all runs fine.
 