
Disk usage analyzer

Status
Not open for further replies.

acl03 (MIS)
Jun 13, 2005 · 1,077 posts · US
I'm looking for software, preferably free or cheap, that will analyze my 5-TB file server and give me statistics on file size, usage, etc.

I am moving to a new file server, and I'm trying to figure out the best way to configure the volumes (particularly the cluster size).

Any thoughts or other ideas? Thanks.

Thanks,
Andrew

[smarty] Hard work often pays off over time, but procrastination pays off right now!
 
I have always used apps like TreeSize to get a quick snapshot of the storage situation. It gives a nice breakdown of directories and subdirectories, including how much space each is using. You can also analyze by file type, so you can see if your users have stashed away 200 GB of MP3 files that shouldn't be there, etc. The one thing I don't think it does is give you information about last-accessed times or frequency of use, but I'm sure some of the commercial archiving solutions can do that.
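If you'd rather script it than use a GUI tool, the by-file-type breakdown is easy to approximate. Here's a minimal Python sketch (my own, not related to TreeSize) that walks a tree and totals bytes per extension:

```python
import os
from collections import Counter

def size_by_extension(root):
    """Walk a directory tree and total file sizes per extension."""
    totals = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip files we can't stat (permissions, broken links)
            ext = os.path.splitext(name)[1].lower() or "(none)"
            totals[ext] += size
    return totals

if __name__ == "__main__":
    # Print the ten extensions using the most space under the current directory
    for ext, total in size_by_extension(".").most_common(10):
        print(f"{ext:10s} {total / 2**20:10.1f} MiB")
```

On a multi-terabyte server this will take a while to walk, so you'd probably run it overnight and dump the output to a file.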

I wrote a script a few years back that would crawl a directory tree and dump a list of all files that hadn't been accessed in more than X days; it wasn't too difficult to do. In fact, I'm pretty sure I posted it in the VBScript forum here.
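The original VBScript isn't reproduced here, but the same idea is a few lines in Python (a sketch of the approach, not the poster's actual script):

```python
import os
import time

def stale_files(root, days):
    """Yield (path, days_since_access) for files whose last-access
    time is older than `days` days."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                atime = os.stat(path).st_atime
            except OSError:
                continue  # skip files we can't stat
            if atime < cutoff:
                yield path, (time.time() - atime) / 86400

if __name__ == "__main__":
    for path, age in stale_files(r".", 180):
        print(f"{age:7.0f} days  {path}")
```

One caveat: Windows can be configured to stop updating NTFS last-access timestamps (the `NtfsDisableLastAccessUpdate` setting), in which case the atime data will be stale or frozen, so check that before trusting the results.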

________________________________________
CompTIA A+, Network+, Server+, Security+
MCTS:Windows 7
MCSE:Security 2003
MCITP:Server Administrator
MCITP:Enterprise Administrator
MCITP:Virtualization Administrator 2008 R2
Certified Quest vWorkspace Administrator
 
Thanks. I'll check that out.

One of the reasons I am looking into this is that we use Diskeeper (defrag software). We also use Volume Shadow Copy.

Diskeeper's docs say that if you use a cluster size smaller than 16 KB (the default for a volume our size is 4 KB), defragmenting can cause extra snapshots to be taken, which overwrites the old snapshots more quickly. Diskeeper has a less effective VSS-aware defrag option that is less likely to do this, but it does not eliminate the problem.

So I'd like to use a 16 KB cluster size, or larger if appropriate. But without knowing the average file size of our millions of files, it's tough to figure out what cluster size is appropriate.

Is there a better/easier way to figure it out?
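One rough way to answer it yourself, if you can walk the volume with a script, is to total the allocation slack (space allocated but not used, because files are rounded up to whole clusters) for each candidate cluster size. A minimal Python sketch (the candidate sizes and function are my own, not from Diskeeper or any vendor tool; it ignores NTFS details like MFT-resident small files):

```python
import os

CLUSTER_SIZES = [4096, 8192, 16384, 32768, 65536]  # candidate cluster sizes in bytes

def slack_report(root):
    """Collect all file sizes under `root`, then for each candidate
    cluster size total the slack: allocated bytes (size rounded up to
    a whole number of clusters) minus actual file bytes."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                sizes.append(os.path.getsize(os.path.join(dirpath, name)))
            except OSError:
                continue  # skip files we can't stat
    report = {}
    for cluster in CLUSTER_SIZES:
        # (-size) % cluster == bytes wasted in the file's last cluster
        report[cluster] = sum((-size) % cluster for size in sizes)
    return len(sizes), report

if __name__ == "__main__":
    count, report = slack_report(".")
    print(f"{count} files scanned")
    for cluster, slack in report.items():
        print(f"{cluster // 1024:3d} KB clusters: {slack / 2**30:8.2f} GiB slack")
```

That gives you a concrete number for how much space each cluster size would cost you, which you can weigh against the Diskeeper/VSS snapshot issue.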


Thanks,
Andrew

[smarty] Hard work often pays off over time, but procrastination pays off right now!
 
Unless your server is basically a dedicated server serving data chunks of one particular size, I would not deviate from the default cluster size, and the same goes for the stripe size on the RAID interface. Changing either is a trade-off: you gain for one particular data-chunk size and lose on the others, so overall performance generally goes down, except on dedicated servers. 16 KB clusters will also waste space on the drives, since every small file will occupy at least 16 KB.


Setting Diskeeper to run daily at idle periods (night) maintains approximately a 5% overall disk increase. I do not like the auto setting, as I do not want it running during peak hours; then again, most of my clients' networks are under 50 users.
Running the boot-time defrag at least monthly considerably increases the throughput of resident programs such as SQL (whose files never close).


........................................
Chernobyl disaster... a must-see pictorial
 
What do you mean by "maintains approx a 5% overall disk increase"?

Do you mean you gain 5% performance? Free space?

Also - how exactly do you configure Diskeeper? Automatic mode...but disabled during working hours? Or do you just set up manual defrag scans at night?

Thanks.

Thanks,
Andrew

[smarty] Hard work often pays off over time, but procrastination pays off right now!
 
Also, do you use the "IntelliWrite" option?

This option prevents files from being fragmented in the first place. I'm not sure if it hurts more than it helps.

Thanks,
Andrew

[smarty] Hard work often pays off over time, but procrastination pays off right now!
 