Quick question: does anyone know at what point the NSR database is considered too large, and when it will start to perform poorly?
I can tell you what I know about my index DB. I have 181 clients with a total index size of 32 GB. Remember, the file index is not one big file per client like it was in 5.x; it's one file per saveset (actually three: .rec, .k0, and .k1; see the sketch at the end of this post for tallying these on your own server). I haven't seen any noticeable performance issues as it's grown over time, with the following exceptions:
Recovering the client indexes takes a VERY long time for clients that have lots of savesets. For example, I have one client with 37,884 savesets, meaning 113,656 small files in one directory. This puts a tremendous load on the disk, which has to copy and verify all of those files. The recovery takes about 5 hours for only about 1 GB of data.
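To put numbers on that: the disk is bound by file count, not data volume. A quick back-of-the-envelope (only the 5-hour timing is measured; the rest follows from the counts above):

# Rough arithmetic for the index recovery above.
files = 113_656            # roughly 3 index files per saveset
hours = 5
data_kb = 1 * 1024 * 1024  # about 1 GB of index data

print(f"{files / (hours * 3600):.1f} files/sec")  # ~6.3 files/sec
print(f"{data_kb / (hours * 3600):.1f} KB/sec")   # ~58 KB/sec

At around 6 files a second, the time is all create-and-verify overhead per file, not raw throughput.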
The other thing is backing up and recovering savesets that contain many files. I have one client with about 300,000 very small files on a single filesystem, but the total size of the filesystem is only about 4 GB. The backup should take only minutes but actually takes about 2 hours, because most of the time is spent updating the file index. It also takes a very long time to start a recover: once you hit start, you wait about an hour before the tapes start loading.
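To see why the index work rather than the data dominates, here's a toy model. The 20 ms per-file index overhead is a number I made up for illustration, not a measured NetWorker figure, and the 10 MB/s device rate is likewise assumed:

# Toy model: backup time = data transfer + per-file index updates.
files = 300_000
data_mb = 4 * 1024
device_mb_per_sec = 10      # assumed sustained device rate
index_sec_per_file = 0.02   # made-up 20 ms/file illustrative figure

transfer = data_mb / device_mb_per_sec              # ~410 sec: "minutes"
index = files * index_sec_per_file                  # 6,000 sec
print(f"data alone: {transfer / 60:.0f} min")       # ~7 min
print(f"with index: {(transfer + index) / 3600:.1f} h")  # ~1.8 h

With numbers in that ballpark, the per-file index updates swamp the data transfer, which matches the "minutes of data, hours of backup" behavior.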
The issues I've had are limited to extreme cases like these. Everything else works just fine regardless of the overall size of the DB.
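If you want to see how your own index breaks down, here's a minimal sketch that tallies savesets per client by counting .rec files. I'm assuming the usual /nsr/index/<client>/db6 layout; adjust the path to match your server.

# Tally savesets and index size per client by walking the index tree.
import os

INDEX_ROOT = "/nsr/index"   # assumed default index location

for client in sorted(os.listdir(INDEX_ROOT)):
    db_dir = os.path.join(INDEX_ROOT, client, "db6")
    if not os.path.isdir(db_dir):
        continue
    names = os.listdir(db_dir)
    savesets = sum(1 for n in names if n.endswith(".rec"))
    size_mb = sum(os.path.getsize(os.path.join(db_dir, n))
                  for n in names) / 1024 / 1024
    print(f"{client:<30} {savesets:>8} savesets {size_mb:>10.1f} MB")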