hutchingsp
Technical User
I have a large file server with a SAS attached library with two LTO3 drives (Dell TL4000).
So, my main storage policy is set to use 2 streams with a multiplexing factor of 4.
My File System iDataAgents are set to use between 4 and 8 data readers based on what I've (fairly crudely) determined to yield the most throughput.
The clients vary between a couple of hundred GB and 3 TB or so.
What I'm seeing is that a backup job for a large server often starts with all 8 available data readers in use (4 to each physical LTO drive, so the job is writing to both drives). As the job continues and readers complete, the total reader count drops until, say, only 1 reader is in use on each drive.
At this point other jobs will start and will take the number of readers per drive up to 4 again.
The problem is that I only seem to be able to get 4 readers to each drive, and those 4 readers can't saturate the drive.
Is there any way with the available drives to simply get more throughput to the drive?
I know each drive will do 60-70 MB/s if fed from quick enough spindles, and the 256 KB chunk size seems to yield the best results. But I think the real issue is the mix of file types (a total mix of sizes and quantities) and the limit on what each individual reader can pull off disk, versus what the physical disk subsystem as a whole can provide.
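A quick back-of-envelope sketch of the arithmetic (the 60 MB/s drive rate and 4 readers per drive are from my setup above; the 10 MB/s per-reader figure is purely a hypothetical to illustrate the shortfall):

```python
# Back-of-envelope: can 4 readers keep one LTO3 drive streaming?
DRIVE_NATIVE_MBPS = 60        # low end of the 60-70 MB/s quoted above
READERS_PER_DRIVE = 4         # multiplexing factor of 4

# Minimum sustained rate each reader must deliver to saturate one drive.
required_per_reader = DRIVE_NATIVE_MBPS / READERS_PER_DRIVE
print(f"Each reader must sustain at least {required_per_reader:.0f} MB/s")

# Hypothetical: small-file overhead limits each reader to 10 MB/s.
per_reader_actual = 10
aggregate = per_reader_actual * READERS_PER_DRIVE
verdict = "saturates" if aggregate >= DRIVE_NATIVE_MBPS else "starves"
print(f"Aggregate to the drive: {aggregate} MB/s ({verdict} the drive)")
```

So with 4 readers, anything much under ~15 MB/s per reader leaves the drive shoe-shining rather than streaming.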
Bit of a ramble there but fingers crossed it makes sense!