
NetBackup tuning question


wbedour (Technical User), Jul 30, 2003
I am running NetBackup FP 4.5 on HP-UX 11i. The master server is an HP L2000, and this media server is an HP N4000 with a SureStore 20/700 library using LTO-1 drives. I created a policy that does a full backup, with Cross mount points checked and Limit jobs per policy set to 4. Each mount point has about 175 GB of data to be backed up. The policy kicks off 4 jobs as it should, but takes 10 to 12 hours to complete.
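Just to frame the question, here is a back-of-the-envelope check (purely illustrative Python) of the aggregate rate those figures imply, using the 4 x ~175 GB and 10-12 hour numbers above:

# Purely illustrative: rough aggregate throughput implied by the job,
# using the figures above (4 mount points x ~175 GB, 10-12 hours).
total_gb = 4 * 175                      # roughly 700 GB per full backup
for hours in (10, 12):
    print(f"{total_gb} GB in {hours} h -> about {total_gb / hours:.0f} GB/hour aggregate")

That works out to roughly 58-70 GB/hour across all four streams combined.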
I looked in my bptm logs and am seeing the following:
02:23:48.705 [24712] <2> write_data: waited for full buffer 2911 times, delayed 5980 times
02:24:17.000 [24714] <2> write_data: waited for full buffer 4019 times, delayed 4528 times
03:11:57.107 [26595] <2> write_data: waited for full buffer 1237 times, delayed 1487 times
04:03:31.939 [26594] <2> write_data: waited for full buffer 26659 times, delayed 72414 times
04:35:13.959 [4638] <2> write_data: waited for full buffer 33216 times, delayed 37370 times
04:57:24.506 [11931] <2> write_data: waited for full buffer 23206 times, delayed 25822 times
05:00:51.795 [26600] <2> write_data: waited for full buffer 114308 times, delayed 131449 times
05:06:11.186 [26608] <2> write_data: waited for full buffer 117920 times, delayed 138141 times
05:09:48.521 [25250] <2> fill_buffer: [25213] socket is closed, waited for empty buffer 186 times, delayed 188 times, read 52887552 bytes
05:09:48.535 [25213] <2> write_data: waited for full buffer 204 times, delayed 365 times
05:11:20.869 [25424] <2> fill_buffer: [25418] socket is closed, waited for empty buffer 1049 times, delayed 1143 times, read 300810240 bytes
05:11:20.882 [25418] <2> write_data: waited for full buffer 1161 times, delayed 1530 times
05:14:13.594 [1398] <2> fill_buffer: [1292] socket is closed, waited for empty buffer 0 times, delayed 0 times, read 144703488 bytes
05:14:13.612 [1292] <2> write_data: waited for full buffer 2782 times, delayed 3121 times

Do the above waits seem out of line? If so, any suggestions on tuning this beast?
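For what it's worth, here is a rough way to total those counters across a bptm log. This is only a sketch: the regex assumes the log lines look exactly like the excerpts above, and "bptm.log" stands in for whatever log file path is actually in use.

import re
from collections import defaultdict

# Sketch only: sum the "waited ... delayed ..." counters per bptm process
# ID, assuming log lines shaped like the excerpts above.
pattern = re.compile(
    r"\[(\d+)\].*waited for (?:full|empty) buffer (\d+) times, delayed (\d+) times"
)

totals = defaultdict(lambda: [0, 0])   # pid -> [waits, delays]
with open("bptm.log") as log:          # path is illustrative
    for line in log:
        m = pattern.search(line)
        if m:
            pid, waits, delays = m.group(1), int(m.group(2)), int(m.group(3))
            totals[pid][0] += waits
            totals[pid][1] += delays

for pid, (waits, delays) in sorted(totals.items()):
    print(f"bptm [{pid}]: waited {waits} times, delayed {delays} times")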

Thanks in advance

 
The numbers look okay - I don't think you have anything to worry about there. Check out my FAQ - faq776-3124 for all buffer settings etc.

Are all your tape drives (all 20?) connected to one media server? As a rule I never connect more than two LTO drives to any one media server, depending on your architecture (Fibre, etc.). What make and model are the NICs? And what are the NIC settings, and the switch settings for those ports, for all the servers in question?

i.e., NICs (if not using Fibre or Gigabit) should be manually set to 100/full at both the NIC and the switch port.
 
We currently have only 8 tape drives in this library, all connected to the same media server. That is why I have the "Limit jobs per policy" parameter set at 4. The drives are SCSI-attached and I'm 99% sure we are running at 100/full.
We are currently not using multiplexing and don't plan to use it if we can avoid it.
I checked for NET_BUFFER_SZ, NUMBER_DATA_BUFFERS, and SIZE_DATA_BUFFERS, and none of them exist, so the NetBackup defaults are in effect for all of these. Any suggestions on initial settings to use as a baseline and then tune from there? Also, are these parameters dynamic, or does NetBackup need to be cycled for the changes to take effect?
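In case it helps to see it concretely, here is a sketch of creating those touch files. The paths are the usual UNIX media-server locations, but the values shown are just commonly quoted starting points from tuning write-ups of that era, not recommendations specific to this setup; check them against the FAQ mentioned in this thread before using them.

import os

# Sketch only: create the three NetBackup buffer touch files on a UNIX
# media server. Values are commonly quoted starting points, not tuned
# recommendations for this particular environment.
settings = {
    "/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS": 262144,   # 256 KB; must be a multiple of 1024
    "/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS": 32,     # default is typically 16
    "/usr/openv/netbackup/NET_BUFFER_SZ": 262144,                 # network buffer size touch file
}

for path, value in settings.items():
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(f"{value}\n")
    print(f"wrote {value} to {path}")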
thanks
 
Use the FAQ I provided - the settings there all work on HP-UX 11.0, as we have a couple of media servers in the same environment. Part of the performance problem is your configuration: there is no way you can drive all of those drives from one media server over a 100/full connection.

LTO-1 is rated at roughly 53 GB/hour native (106 GB/hour compressed).
100 Mb/s Fast Ethernet (IEEE 802.3u) works out to about 44 GB/hour.
Even with only 4 drives running you could potentially write in excess of 200 GB/hour, and your setup is not conducive to that. The HP recommendation is a maximum of two LTO drives per media server, and then only with dual 100 Mb NICs or better, such as Fibre or Gigabit Ethernet.
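Putting those rated figures side by side (again just illustrative arithmetic):

# Illustrative arithmetic using the rated figures above: four LTO-1
# drives vs. what a single 100 Mb/s full-duplex link can feed them.
drives = 4
lto1_gb_per_hour = 53      # native, per drive
link_gb_per_hour = 44      # one 100/full NIC, roughly, after overhead

demand = drives * lto1_gb_per_hour
print(f"{drives} drives can absorb about {demand} GB/hour")
print(f"one 100 Mb link delivers about {link_gb_per_hour} GB/hour")
print(f"shortfall: roughly {demand / link_gb_per_hour:.1f}x")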

Seriously though - Reconsider your hardware layout.
 