Server backup times very slow (MB/min) with NTBackup and BackupExec


markm75 (IS-IT--Management)
I have two different servers: one runs Windows Server 2003 x86, the other Windows Server 2003 x64.

In both cases, I am backing up data from a three-disk RAID 5 array (400 GB) to another internal drive (call it E:, IntBackup).

The times with NTBackup and BackupExec are very similar.

ServerA (x86), with SATA 150 drives, backs up 292 GB of data at 438 MB/min, taking 11 hrs 24 mins (only about 7.3 MB/s!), using NTBackup.

ServerB (x64), with SATA 300 drives, backs up 308 GB of data in 15 hrs 9 mins, or 347 MB/min (even fewer MB/s), using BackupExec set to hardware compression (otherwise software).

Side note: ultimately both servers will send their data across gigabit ethernet to a central backup server, but my tests of that path so far have also only yielded about 400 MB/min across the wire.
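For reference, here's the arithmetic behind those numbers (a quick sketch in Python; the sizes and times are the ones quoted above):

```python
# Sanity check on the backup rates quoted above.
# MB here means 10^6 bytes, the convention NTBackup/BackupExec appear to use.

def rate(gb, hours, minutes):
    total_min = hours * 60 + minutes
    mb = gb * 1000
    mb_per_min = mb / total_min
    return mb_per_min, mb_per_min / 60  # (MB/min, MB/s)

print("ServerA: %.0f MB/min, %.1f MB/s" % rate(292, 11, 24))  # ~427 MB/min, ~7.1 MB/s
print("ServerB: %.0f MB/min, %.1f MB/s" % rate(308, 15, 9))   # ~339 MB/min, ~5.6 MB/s
```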

The data being backed up includes SQL Server instances (and another server, ServerC, will have Exchange data as well), hence the preference for NTBackup and BackupExec.

Are these backup times typical for this software? Is there alternative software that can handle the data backup as well (NetBackup?)? It's critical, though, that in the future, the other server's Exchange data, mailboxes, and transaction logs can be backed up too.

Is there a way to increase the speeds? I believe I have tried turning off compression, but the speed increase (I think) was minimal.

Thanks for any tips
 
Here is how I configured Veritas 11d (and the results):

D: to E: (single .bkf), high priority, full backup, reset archive bit, no compression, using VSP snapshot.

After 4 min 19 sec: 2,100 MB/min (8,678.4 MB written, about 33.5 MB/sec)
After 8 min 40 sec: 2,000 MB/min (16,251.6 MB, about 32.5 MB/sec)

1,364 MB/min after 26 mins (36,634,000,000 bytes)
1,080 MB/min after 51 mins (57,600,000,000 bytes)
602 MB/min after 4 hrs 32 mins (172,100,000,000 bytes)

As you can see, throughput slowed considerably as the job went on.
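Reducing those checkpoints to cumulative rates makes the decay obvious (a quick sketch; the byte counts and elapsed times are the ones from the log above):

```python
# Cumulative throughput at each checkpoint of the Veritas job above
# (MB = 10^6 bytes, matching the job log).

checkpoints = [  # (elapsed minutes, bytes written)
    (26, 36634000000),
    (51, 57600000000),
    (272, 172100000000),  # 4 hrs 32 mins
]

for minutes, nbytes in checkpoints:
    mb_per_min = nbytes / 1e6 / minutes
    print("%4d min: %5.0f MB/min (%.1f MB/s)" % (minutes, mb_per_min, mb_per_min / 60))
# ~1409 MB/min early, ~1129 at 51 min, ~633 by 4.5 hours
```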
 
My current take:

It seems logical that this is a BE issue (a lot of people are complaining of similar problems). When installing BE, I believe it asks whether you want to install the BE drivers or use the built-in ones? Which option are most people using, or at least, those who actually get good throughput? I can't recall which I chose; I'll have to try reinstalling.

I am also trying the ShadowProtect demo (it doesn't work on x64 servers, though). On our x86 server it backed up a 332 GB partition in 4 hrs 7 mins (22 MB/sec) and shrank it to 242 GB with normal compression. Unfortunately this program doesn't let you select which files/folders to back up, only the whole partition, but the compression is very good. In BE, with hardware compression, I never saw any difference between the size backed up and the backup file size.

I also tried Acronis 9.0 on the whole partition (at home I tried files and folders and only got around 7.3 MB/sec). It did the 332 GB partition in 4 hrs 41 mins, or 20.2 MB/sec, with a final size of 242 GB at normal priority and compression.

Today I will install MS DPM and see if I can get close to that 1-hour timeframe.

It's interesting that DPM is about 1 GB (setup files) whereas ShadowProtect and Acronis are only around 9 MB (I think Acronis is somewhat bigger, but not 1 GB). I really don't care about the size if DPM will do it in 1 hour.

If this ends up being the case, I will probably forget about BE and use DPM (if it supports remote agents, etc.). I'm not sure if DPM will do tapes, so I may be stuck with BE? (I haven't researched DPM just yet.)

 
markm75 wrote: "When installing BE, I believe it asks whether you want to install the BE drivers or use the built-in ones? Which option are most people using, or at least, those who actually get good throughput?"



Always use the BE drivers

Paul

MCSE 2003

"Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe."
Albert Einstein
 
It sounds like you are backing up to a single hard disk rather than a tape drive, is that correct? If so, there are a couple of things to keep in mind about backing up to disk.

1. Windows file access isn't particularly fast. Backing up from one Windows filesystem to another rarely gets you stellar performance, because NTFS is optimized for non-sequential access, has security built in, etc. Most third-party backup-to-disk solutions use a proprietary on-disk format that is much faster to write to.

2. If there is existing data on the target hard disk, it will slow down writes. Backups write data out in a linear stream, whereas most hard disks have data scattered all around; if the backup encounters clusters that are in use, it has to skip ahead to the next free cluster. Furthermore, read and write speeds vary across the surface of the disk: if the outer tracks are full, your backups land on the inner tracks, which have much lower effective transfer rates.

3. Backing up 300 GB of small files takes far longer than backing up one hundred 3 GB files. Average file size makes a real difference to filesystem performance, with smaller files taking disproportionately longer.

4. You can read from a disk faster than you can write to the same disk. This means your RAID 5 array can read data much, much faster than a single hard disk can absorb it. You read a burst from the array while simultaneously writing, the write buffers/cache fill up, you stop reading while you finish writing, and then you wait for the disk to come back around to where you left off before reading again. This start-stop-start cycle happens constantly.

5. Keep in mind that the archive bit on each file needs to be reset when the file is backed up, which means a write has to occur on your RAID 5 array. Write access is not RAID 5's strong suit, because parity has to be calculated: multiple read operations, some math, and an extra write, all of which take time (see the sketch below). Is your RAID array hardware or software RAID? If it's software RAID it will be even slower.
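To put a rough number on that last point, here's the classic small-write penalty model (a sketch; the per-disk IOPS figure is an assumption for a 7200 rpm SATA drive, not a measurement of your array):

```python
# Rough effective random-write IOPS for a RAID5 set vs. a mirrored set.
# Classic model: each small RAID5 write costs 4 disk I/Os
# (read data, read parity, write data, write parity); RAID1/10 costs 2.

DISK_IOPS = 120  # assumed per-disk random IOPS

def raid5_write_iops(n_disks):
    return n_disks * DISK_IOPS / 4   # 4 I/Os per logical write

def raid10_write_iops(n_disks):
    return n_disks * DISK_IOPS / 2   # 2 I/Os per logical write

print("3-disk RAID5 :", raid5_write_iops(3), "writes/s")   # 90.0
print("4-disk RAID10:", raid10_write_iops(4), "writes/s")  # 240.0
```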
 
I get very good read/write results when I test with PerformanceTester and other tools.

I have seen the effect where multiple small files copy much more slowly than big ones.

But many people are reporting the same problem with BE. What good is a backup program if it takes 30 hours to back up 600 GB of data? To me this seems crazy, so I'm probably going to dump BE for disk-based data backups in favor of the other two or three programs I mentioned, i.e. ShadowProtect (4 hours for 300 GB) or DPM (1 hour for 300 GB).

Granted, those programs do whole partitions, but their compression is very good as well.

I'll probably still have to use BE for backing up the data on the hard disk storage server to the LTO3 tape drive (many people claim at least 22 MB/sec with BE and a tape drive, versus the roughly 600 MB/min max I'm seeing to hard drive).

 
markm75 wrote: "What good is a backup program if it takes 30 hours to back up 600 GB of data? ... I'm probably going to dump BE for disk-based data backups in favor of ... ShadowProtect (4 hours for 300 GB) or DPM (1 hour for 300 GB)."

It's all a matter of setup. For example, the company where I used to work used BackupExec to back up all of their servers to disk, then shuffled those disk-based backups off to LTO3 tape for offsite storage. It was actually faster to back up to disk than to their multi-drive LTO3 library, because they were using an extremely fast disk array for the disk-based backups (an EMC SAN, actually).

If you want to see whether the bottleneck is the disk or BackupExec, I recommend putting together a collection of 10 GB or so of data with various file sizes and doing a regular Windows copy from your RAID 5 array to your backup disk. Time it and see how fast it goes. My guess is that the copy will show roughly the same throughput your backups were getting. And I'm doubly disinclined to blame BackupExec because NTBackup is showing essentially the same problem.
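If you'd rather script the test than watch a stopwatch, something along these lines would work (a sketch; the SRC and DST paths are placeholders for your setup):

```python
# Time a plain filesystem copy and report effective throughput.
# SRC/DST are hypothetical -- point SRC at ~10 GB of mixed-size files
# on the RAID5 array and DST at a folder that does not yet exist on
# the backup disk (copytree fails if the destination already exists).

import os, shutil, time

SRC = r"D:\testdata"   # hypothetical source folder on the RAID5 array
DST = r"E:\copytest"   # hypothetical destination on the backup disk

def tree_size(path):
    total = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

nbytes = tree_size(SRC)
start = time.time()
shutil.copytree(SRC, DST)
elapsed = time.time() - start
print("%.1f GB in %.0f s = %.1f MB/s" % (nbytes / 1e9, elapsed, nbytes / 1e6 / elapsed))
```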
 
Well, I did a file copy of a folder of various file sizes, roughly 4.5 GB in size, and achieved no more than 10 MB/sec (whereas my write tests with one big file showed about 38 MB/sec).

I'm not familiar with EMC SANs, etc. What type of hard disk configuration would I need to achieve maximum speed (or at least a 1,200 MB/min rate)?

I'm in the design phase (pre-purchase) on this new backup server. Initially I was shooting for a 2 TB RAID 5 array (4 or 5 disks); I've always preferred RAID 5 in case a drive dies, which over time they tend to do.

Are you suggesting that I use, say, RAID 0 or some other variation? That would make me a bit nervous, though I guess since we will have an LTO3 tape library, this wouldn't matter (the tape library will back up the backup; I'm not sure on frequency, I was thinking every week, but the fulls will run either every week or every month on the disk-to-disk backups).

Also, am I right in assuming that the only place to make it faster is within the backup server itself (where things get written)?

Each of the other servers has a RAID 5 data stripe that will pipe the backup data over gigabit ethernet (roughly 60 MB/sec max rate).

Thanks
 
I should mention that the drive I'm writing the backup files to is just a single SATA 300 drive; the drive being read from (same machine) is also SATA 300, but hardware RAID 5.

So if the answer is having RAID 0 on both ends, then this is out, as too many of our servers would have to be redone.

If so, the only faster solution is Acronis, ShadowProtect, or DPM, I suppose.
 
I would avoid RAID 5 like the plague for disk-based backup solutions. Because RAID 5 requires extra reads and processing on every write, its write performance is not the best. Read performance is quite competitive, though.

I would recommend a RAID 10 (RAID 0+1, 1+0, whatever you want to call it) array for anything write-intensive. For a given number of disks, only RAID 0 offers higher write performance (at the cost of fault tolerance). With RAID 10 you need an even number of disks, and I would recommend more than just 4 or you probably won't see the greatest improvement. Though if you are only looking to double your current write performance, 4 disks might do it; a rough comparison is sketched below.
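For a feel for the capacity/throughput trade-off, here's the back-of-the-envelope arithmetic (a sketch; the drive size and per-disk sequential rate are assumptions):

```python
# Usable capacity and rough streaming-write ceiling for RAID5 vs RAID10.
# DISK_GB and DISK_MBS (sustained sequential write per disk) are assumptions.

DISK_GB = 500
DISK_MBS = 55  # roughly what a 2007-era SATA drive sustains

def raid5(n):   # capacity, streaming write (parity overhead ignored, so optimistic)
    return (n - 1) * DISK_GB, (n - 1) * DISK_MBS

def raid10(n):  # n must be even; half the disks hold mirror copies
    return n // 2 * DISK_GB, n // 2 * DISK_MBS

for n in (4, 6):
    print("RAID5  %d disks: %4d GB, ~%3d MB/s" % ((n,) + raid5(n)))
    print("RAID10 %d disks: %4d GB, ~%3d MB/s" % ((n,) + raid10(n)))
```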
 
So how many drives per set would this be? Are you suggesting, say, 5 in each set (each a mirrored RAID 0)?

The rack-mount server case I was getting has 8 drive bays. I was going to mirror the OS (2 drives there alone), which leaves room for 6 more drives.

Does this imply needing 10 more disks for this configuration, or do you mean 5 total drives in the whole RAID 0+1 set?

I.e., I want to build a 2 TB array; if I go for RAID 0+1, will the remaining 6 drives that fit in the server case be enough?
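Working the numbers myself (the drive capacities here are hypothetical, since nothing is purchased yet):

```python
# How big do the drives need to be for a 2 TB RAID10 array
# given a fixed number of bays? (Target size from my post above;
# the bay counts are the options I'm weighing.)

TARGET_TB = 2.0

for bays in (4, 6):
    usable_per_drive_tb = TARGET_TB / (bays // 2)  # half the drives are mirrors
    print("%d drives in RAID10: need >= %.0f GB per drive" %
          (bays, usable_per_drive_tb * 1000))
# 4 drives -> 1000 GB each; 6 drives -> ~667 GB each
```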

I wish I had firmer data suggesting this will really boost that awful figure of about 380 MB/min with BE to something more like 1,200 MB/min at least. You're saying that with just the RAID 0+1 on the backup server, leaving RAID 5 everywhere else (reads), this will probably be true?

I guess this setup is still data-redundant like my RAID 5 idea, but much faster?

Cheers
 
The other issue with disk-to-disk backup on the same machine is the controller. If both the source and destination are on the same controller, that will slow things down. With disk-to-tape, you are typically running off two separate controllers.

We use BackupExec to an LTO2 tape and get 135 GB done (from 4 separate servers) in about 2.5 hrs for the backup, and that is with a full verify.

R.Sobelman
 
It's going to be hard to say without testing which solution will perform best. I know for a fact that, for a given number of disks, a RAID 10/0+1/1+0/whatever-you-want-to-call-it array will give you better write performance than a RAID 5 array made from the same disks. I also know it will give better write performance than a single disk (which is what you've tested with). Based on that, I think it should improve your backup performance. However, there are a couple of things I would consider:

1. What type of data is being backed up? You mentioned SQL and Exchange. Are you doing hot SQL backups? If so, that will slow things down. We usually set up a job within SQL to back up the DBs to disk, along with the logs, and then back those files up instead of the live databases themselves; it's much faster (see the sketch after this list). Similarly, we don't do mailbox-level backups on Exchange. We only back up the store files and logs.

2. Disk-based backup systems that don't rely on the Windows filesystem tend to be faster than just writing a file out to an NTFS drive. I'm not sure how you would get around that without specialized hardware though.

3. If you are doing backups across the network, use software compression instead of hardware. With hardware compression, the data is compressed by the tape drive; with software compression, it is compressed by the backup agent on the remote computer, so less data is sent over the network.
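Here's the shape of the dump-then-backup approach from point 1 (a sketch; the instance name, database names, and dump path are placeholders, and it assumes sqlcmd is on the path):

```python
# Dump SQL Server databases to flat .bak files, then let the file-level
# backup job pick those up instead of doing hot database backups.
# INSTANCE, DATABASES and DUMP_DIR are placeholders for your environment.

import subprocess

INSTANCE = r".\SQLEXPRESS"          # hypothetical instance name
DATABASES = ["AppDB", "ReportsDB"]  # hypothetical database names
DUMP_DIR = r"E:\sqldumps"

for db in DATABASES:
    tsql = "BACKUP DATABASE [%s] TO DISK = N'%s\\%s.bak' WITH INIT" % (db, DUMP_DIR, db)
    subprocess.check_call(["sqlcmd", "-S", INSTANCE, "-E", "-Q", tsql])
```

Transaction logs can be handled the same way with BACKUP LOG, and the resulting .bak files then flow through the normal file-level backup job.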

I wish I had better answers for you.
 
Just to note again: my current setup is only for testing, hard disk to hard disk backups, from RAID 5 SATA II to a single SATA II drive (ServerA below). Once I build the backup server, it will take data across gigabit ethernet, assuming a max incoming speed of about 60 MB/sec (if my max incoming is 60, I'm not sure what good sequential writes faster than 60 MB/s would do me?). This backup server will also back up to tape (LTO3 library) via a separate U320 SCSI card.
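My rough math on the gigabit ceiling (the efficiency range is an assumption, just to bound the problem):

```python
# What gigabit ethernet can feed the backup server, roughly.
# The 0.5-0.8 efficiency range is an assumed figure for real-world
# TCP overhead plus backup-agent overhead.

GIGABIT_MBS = 1000 / 8  # 125 MB/s theoretical

for eff in (0.5, 0.8):
    mbs = GIGABIT_MBS * eff
    hours = 300000 / mbs / 3600  # time to move a 300 GB job
    print("%.0f MB/s -> 300 GB in %.1f h" % (mbs, hours))
# ~62 MB/s -> ~1.3 h; ~100 MB/s -> ~0.8 h
```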

Still a little confused on the old SCSI vs SATA debate. Don't SATA drives get independent bandwidth? I.e., if you have a controller card with 4 drives (independent or RAID), does each get its own 3 Gb/s link? I've heard that this isn't true. Whereas I used to think that SCSI drives on one chain (ribbon) share a max of 320 MB/s, divided among the drives on the chain. Is this true as well?
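The way I currently picture the bus math (which may be wrong, hence the question):

```python
# My mental model: parallel U320 SCSI shares one 320 MB/s bus across the
# whole chain, while each SATA II port gets its own 300 MB/s link to the
# controller. This is the assumption I'm asking about, not a fact I've verified.

U320_BUS_MBS = 320
SATA2_PORT_MBS = 300

for drives in (2, 4, 8):
    print("%d drives: U320 ~%3d MB/s each, SATA %d MB/s each" %
          (drives, U320_BUS_MBS // drives, SATA2_PORT_MBS))
```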

I also chose SATA as the foundation for the new server (though not purchased yet) because it required fewer drives to reach the desired 2 TB array size. The new server will be a 2U case with 8 removable bays, so I have at most 7 drives to work with for this array.

Looking at my stats below: even my RAID 5 array, which the new storage server was going to use for backing up the data, gets an index of 84 MB/s, with sequential writes (the most important spec here?) of 105 MB/s while bypassing the Windows cache. The independent drive on ServerA, where I am currently test-writing the Backup Exec files, got an index of 54 MB/s and sequential writes of 55 MB/s. If I uncheck "bypass Windows cache" (more real-world?), that becomes an index of 38 MB/s with sequential writes of 41 MB/s. (On a regular desktop, the SATA II index was 13 MB/s with sequential writes of 9 MB/s without bypassing the cache, so the ServerA value seemed very good to me.)

According to the reference drives, SiSoftware Sandra says a 2-disk SATA II RAID 0 should see an index of around 96 MB/s, not much faster than the RAID 5 value I already have, though I suppose with 4 disks that should be around 180 MB/s, even with SATA rather than SCSI?
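Extrapolating Sandra's reference figures the way I read them (the per-stripe efficiency factor is my assumption):

```python
# Naive RAID0 sequential-rate extrapolation from a single-drive baseline.
# The 0.9 per-stripe efficiency factor is an assumption, not a measurement.

SINGLE_DRIVE_MBS = 55  # my SATA II drive's sequential write from the tests below
EFFICIENCY = 0.9

for n in (2, 4):
    print("%d-disk RAID0: ~%.0f MB/s" % (n, n * SINGLE_DRIVE_MBS * EFFICIENCY))
# 2 disks -> ~99 MB/s (close to Sandra's 96); 4 disks -> ~198 MB/s
```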


Here are those results (as you'll see, even my U320 SCSI RAID 5 comes very close to the SATA II RAID 5):


SERVERA E: SATAII (independent)

SiSoftware Sandra

Benchmark Results
Drive Index : 54 MB/s
Results Interpretation : Higher index values are better.
Random Access Time : 7 ms
Results Interpretation : Lower index values are better.

Performance Test Status
Run ID : VRGEN on Sunday, March 25, 2007 at 10:06:14 PM
System Timer : 2.8GHz
Operating System Disk Cache Used : No
Use Overlapped I/O : Yes
IO Queue Depth : 4 request(s)
Test File Size : 2GB
File Fragments : 1
Block Size : 1MB
File Server Optimised : Yes

Benchmark Breakdown
Buffered Read : 685 MB/s
Sequential Read : 62 MB/s
Random Read : 43 MB/s
Buffered Write : 418 MB/s
Sequential Write : 55 MB/s
Random Write : 46 MB/s
Random Access Time : 7 ms (estimated)

Drive
Drive Type : Hard Disk
Total Size : 466GB
Free Space : 453GB, 97%
Cluster Size : 4kB

-------------

SERVERA E drive (cache on):

Benchmark Results
Drive Index : 38 MB/s
Results Interpretation : Higher index values are better.
Random Access Time : 12 ms
Results Interpretation : Lower index values are better.

Performance Test Status
Run ID : VRGEN on Monday, March 26, 2007 at 11:18:06 AM
System Timer : 2.8GHz
Operating System Disk Cache Used : Yes
Use Overlapped I/O : Yes
IO Queue Depth : 4 request(s)
Test File Size : 2GB
File Fragments : 1
Block Size : 1MB
File Server Optimised : Yes

Benchmark Breakdown
Buffered Read : 679 MB/s
Sequential Read : 42 MB/s
Random Read : 28 MB/s
Buffered Write : 422 MB/s
Sequential Write : 41 MB/s
Random Write : 47 MB/s
Random Access Time : 12 ms (estimated)

Drive
Drive Type : Hard Disk
Total Size : 466GB
Free Space : 453GB, 97%
Cluster Size : 4kB


-----------------

SERVERA D RAID5 SATAII:
Benchmark Results
Drive Index : 84 MB/s
Results Interpretation : Higher index values are better.
Random Access Time : 26 ms
Results Interpretation : Lower index values are better.

Performance Test Status
Run ID : VRGEN on Sunday, March 25, 2007 at 10:05:47 AM
System Timer : 2.8GHz
Operating System Disk Cache Used : No
Use Overlapped I/O : Yes
IO Queue Depth : 4 request(s)
Test File Size : 2GB
File Fragments : 1
Block Size : 1MB
File Server Optimised : Yes

Benchmark Breakdown
Buffered Read : 806 MB/s
Sequential Read : 123 MB/s
Random Read : 29 MB/s
Buffered Write : 426 MB/s
Sequential Write : 105 MB/s
Random Write : 42 MB/s
Random Access Time : 26 ms (estimated)

Drive
Drive Type : Hard Disk
Total Size : 466GB
Free Space : 128GB, 28%
Cluster Size : 4kB

------------------------


ServerB U320 SCSI D drive (RAID5):

SiSoftware Sandra

Benchmark Results
Drive Index : 96 MB/s
Results Interpretation : Higher index values are better.
Random Access Time : 3 ms
Results Interpretation : Lower index values are better.

Performance Test Status
Run ID : EXCHANGE01 on Saturday, March 24, 2007 at 9:01:51 AM
System Timer : 2.8GHz
Operating System Disk Cache Used : No
Use Overlapped I/O : Yes
IO Queue Depth : 4 request(s)
Test File Size : 2GB
File Fragments : 1
Block Size : 1MB
File Server Optimised : No

Benchmark Breakdown
Buffered Read : 102 MB/s
Sequential Read : 113 MB/s
Random Read : 87 MB/s
Buffered Write : 272 MB/s
Sequential Write : 97 MB/s
Random Write : 34 MB/s
Random Access Time : 3 ms (estimated)

Drive
Drive Type : Hard Disk
Total Size : 137GB
Free Space : 94GB, 69%
Cluster Size : 4kB


-----------------

ServerB U320 SCSI E drive (independent):
SiSoftware Sandra

Benchmark Results
Drive Index : 50 MB/s
Results Interpretation : Higher index values are better.
Random Access Time : 15 ms
Results Interpretation : Lower index values are better.

Performance Test Status
Run ID : EXCHANGE01 on Monday, March 26, 2007 at 11:50:58 AM
System Timer : 2.8GHz
Operating System Disk Cache Used : No
Use Overlapped I/O : Yes
IO Queue Depth : 4 request(s)
Test File Size : 2GB
File Fragments : 1
Block Size : 1MB
File Server Optimised : No

Benchmark Breakdown
Buffered Read : 55 MB/s
Sequential Read : 62 MB/s
Random Read : 32 MB/s
Buffered Write : 72 MB/s
Sequential Write : 53 MB/s
Random Write : 50 MB/s
Random Access Time : 15 ms (estimated)

Drive
Drive Type : Hard Disk
Total Size : 137GB
Free Space : 129GB, 94%
Cluster Size : 4kB



 