
Defragging 3


SamDemon
Technical User
May 28, 2003
I am not sure if this is the best place for me to post this query, so I apologise if it isn't.

In our office there are 6 servers which are all now starting to look a little fragmented. I would like to defrag them, but it is something that I have never done before, and I was wondering what the possible problems are.

I will be using the Windows defrag and not anything external.

Thanks

Sam

It's just common sense, shame sense isn't common!
 
It depends on the file system and on how many file operations copy, delete, or change the size of files. FAT and NTFS are completely different. NTFS will 'try' to put a file in a contiguous area of the disk. So, for example, say you have three files:

XXyyyZZZ

and you take file yyy, edit it to make it bigger, and then save it, FAT might look like

XXyyyZZZy

whereas NTFS will look more like

XX ZZZyyyy

NTFS tends to fragment free space rather badly. If you have mostly small files, this isn't really a big deal, but if you add or modify larger files, eventually you're going to run out of contiguous free space to put a new or bigger file in. So if you add file oooo to the above, you might get

XXoooZZZyyyyo
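If it helps to see the difference as code, here's a toy Python sketch of the two placement strategies (an illustration only, assuming the disk is modelled as a list of one-character clusters; the real allocators are far more involved):

[code]
# Toy model of cluster allocation; '.' marks a free cluster.

def fat_alloc(disk, label, n):
    """FAT-style: take the first n free clusters, wherever they are."""
    placed = 0
    for i, c in enumerate(disk):
        if c == '.' and placed < n:
            disk[i] = label
            placed += 1

def ntfs_alloc(disk, label, n):
    """NTFS-style: prefer a run of n contiguous free clusters."""
    run = 0
    for i, c in enumerate(disk):
        run = run + 1 if c == '.' else 0
        if run == n:                          # found a big enough run
            for j in range(i - n + 1, i + 1):
                disk[j] = label
            return
    fat_alloc(disk, label, n)                 # no such run: fragment

def rewrite(disk, label, n, alloc):
    """Save file `label` with a new size of n clusters."""
    for i, c in enumerate(disk):
        if c == label:                        # free the old copy
            disk[i] = '.'
    alloc(disk, label, n)

fat  = list("XXyyyZZZ....")
ntfs = list("XXyyyZZZ....")
rewrite(fat,  'y', 4, fat_alloc)
rewrite(ntfs, 'y', 4, ntfs_alloc)
print(''.join(fat))    # XXyyyZZZy... (file 'y' is now fragmented)
print(''.join(ntfs))   # XX...ZZZyyyy (file 'y' moved, free space split)
[/code]

Running it reproduces the two layouts above: the FAT-style disk ends up with file y split around ZZZ, while the NTFS-style disk relocates the whole file into the contiguous run at the end.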

So it pays to defrag and defrag often. You CAN defrag a server in the middle of the day. The performance hit is minor, but if files are in use they will be skipped over, so the defragmentation will not be complete.

Once you do a defrag, keep doing it. Often. Daily wouldn't hurt.
 
Tony,

I would be interested in seeing some official documentation on the claim that NTFS actually "tries" to make a file contiguous.

On my system partition, for example, I have about 8GB of used space and 25GB of free space. The file system is NTFS. Until just the other day, it was over 28% fragmented.

If what you are saying is true about NTFS, then there's no way I should have been that fragmented with so much free space available for contiguous files.

Where am I going wrong?

~cdogg
"Insanity: doing the same thing over and over again and expecting different results." - Albert Einstein
[tab][navy]For general rules and guidelines to get better answers, click here:[/navy] faq219-2884
 
Great link, Tony. It's interesting that MS doesn't mention FAT12 anymore; has it been completely abandoned?
 
Lawnboy,
You are probably referring to FAT16, and yes, Microsoft ditched it a long time ago (Win95 OSR2). Remember, FAT32 is nearly identical to FAT16, except that it isn't as restricted in volume size or number of files. FAT16 can only manage about 65,000 clusters per partition, which also caps the number of files, since every file needs at least one cluster.
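Those ceilings fall straight out of the width of the cluster numbers. A quick back-of-the-envelope (the usable counts are a few clusters lower in practice, since some values are reserved):

[code]
# Cluster numbers are 12, 16, or 28 bits wide (FAT32 stores 32 bits
# per entry but only uses 28), so the cluster count per volume tops
# out near:
for name, bits in [("FAT12", 12), ("FAT16", 16), ("FAT32", 28)]:
    print(f"{name}: {2**bits:,} clusters")
# FAT12: 4,096 clusters
# FAT16: 65,536 clusters   <- the ~65,000 figure above
# FAT32: 268,435,456 clusters
[/code]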

Check the article I posted below which has a chart comparison.


Tony,
From the context of that article, it would seem that it doesn't necessarily apply ONLY to NTFS. What you posted is from a section of the article that talks about "file systems" in general.


After digging around some more, I found this article:

[blue]"NTFS works and works and is fragmented - even in the case of free space is far from exhausting. This is promoted by the strange algorithm of finding free space for file storage...It is impossible to say that NTFS prevents file fragmentation. On the contrary it fragments them with pleasure. NTFS fragmentation can surprise any person familiar with file system operation in half a year of work."[/blue]

That is about as in-depth as most would care to go. It's even a bit hard to read at times, but then again, I don't think English is Dmitrey Mikhailov's first language!

Here's another good one:


The main difference I'm seeing with NTFS over FAT is more effective search capabilities, which are less susceptible to fragmentation. From what I've read, NTFS doesn't necessarily try to keep files from fragmenting; it just makes sure data is written in such a way that search performance doesn't suffer much. In addition, NTFS has better fault tolerance than FAT when a file is being written or moved into a bad sector on the disk.

~cdogg
"Insanity: doing the same thing over and over again and expecting different results." - Albert Einstein
[tab][navy]For general rules and guidelines to get better answers, click here:[/navy] faq219-2884
 
Right, but isn't it a subset of FAT16 and FAT32? I mean, what's the point of it still being around when more recent versions of FAT can accomplish the same feats?

If you're thinking performance, then I would counter that by saying anyone concerned about it wouldn't be using a floppy to begin with!

~cdogg
"Insanity: doing the same thing over and over again and expecting different results." - Albert Einstein
[tab][navy]For general rules and guidelines to get better answers, click here:[/navy] faq219-2884
 
Lawnboy,

FAT12 is still used on floppies, I think. In theory you could use it on a hard drive, but I think you'd be limited to something like 32 meg. FAT16 works up to 512 meg.

cdogg,

FAT (any flavor) will put the first cluster of a new file in the first available cluster on the disk, without regard to how many contiguous clusters follow it, and it puts each subsequent cluster of the file into the next free cluster it finds. (One caveat: if overwriting an existing file, FAT will reuse the first cluster of the old file as the first cluster of the new one. That is how it's possible for the end of a file to sit on the disk before its beginning.)
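That overwrite caveat is easy to show with another toy model (same disclaimer as before: this is just an illustration, with the file's cluster chain tracked as a plain list):

[code]
# '.' is free; file 'A' currently occupies just cluster 5.

def overwrite(disk, old_chain, label, n):
    """Rewrite a file at n clusters: keep its old first cluster,
    free the rest, and take the first free slot for each new one."""
    for c in old_chain:
        disk[c] = '.'
    chain = [old_chain[0]]              # first cluster reused in place
    disk[old_chain[0]] = label
    for i, slot in enumerate(disk):
        if len(chain) == n:
            break
        if slot == '.':
            disk[i] = label
            chain.append(i)
    return chain

disk = list("BB...ABB..")
print(overwrite(disk, [5], 'A', 3))   # [5, 2, 3]: clusters 2 and 3 hold
                                      # the END of the file, yet they sit
                                      # before its first cluster on disk
[/code]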

HPFS (OS/2) and NTFS try to copy the file into a contiguous space if possible, but the system is not perfect. One illustrative example is downloading a file from the Internet. Often the OS doesn't know what the final size of the file will be, so it uses the same 'first cluster' method that FAT does. File-sharing programs are even worse: you are probably downloading several MP3s at once, and each block written is the end of the file at that particular moment. It's not really the file system's fault at that point; it simply honored the OS's request to find a chunk of space for what it was told was a complete file, when it was really only part of one.

If you want to perform a test to illustrate it working properly:

1. Run defrag and do an analysis, making note of the fragmented file names.

2. Reboot into safe mode (making sure nothing else is running) and move one of the listed files to a different drive, then copy it back to its original location.

3. Run defrag analysis again. You will see that the file you selected is no longer on the list, assuming you had enough contiguous free space to hold it.

This hints at how you can defrag your files without using the defrag tool at all. You can even do it on the same drive if you have enough space: simply copy the files to a new directory, then move them back. Of course, this does nothing to defrag the free space.
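A minimal sketch of that copy-out-and-back trick (the paths here are made up, and note that a plain shutil copy preserves timestamps but not NTFS permissions, so on a server you'd want a tool that keeps ACLs intact):

[code]
import os
import shutil

# Hypothetical paths; substitute your own fragmented file and a
# volume with enough contiguous free space to stash it.
src   = r"D:\data\bigfile.dat"
stash = r"E:\stash\bigfile.dat"

shutil.move(src, stash)    # move off the volume, freeing its clusters
shutil.copy2(stash, src)   # the copy back gets a fresh allocation,
                           # ideally contiguous; timestamps preserved
os.remove(stash)
[/code]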
 
Tony,
Actually, FAT16 goes up to 2GB. Also, an illustration of the NTFS algorithm you're referring to is shown in the link I posted above!
[wink]
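For what it's worth, the 2GB figure is just the cluster arithmetic again (NT could format FAT16 with 64KB clusters for 4GB volumes, but 32KB is the usual ceiling):

[code]
clusters     = 2**16        # 16-bit cluster numbers (a few reserved)
cluster_size = 32 * 1024    # 32 KB, the common maximum cluster size
print(clusters * cluster_size // 2**30, "GB")   # -> 2 GB
[/code]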

After doing my own research, I believe the way NTFS writes files is not as important as the way it reads them. Because of the MFT (Master File Table), there is metadata kept for every file, and searching is much more efficient on large volumes in NTFS than it is in FAT.

I think where you're going wrong is the assumption that fragmentation is worse in FAT than it is in NTFS. The reality is that it occurs in both, just under different conditions. Certain "experiments" might show that FAT fragments more than NTFS in some cases, and vice versa in other tests. Typically, you'll see better overall performance with FAT on smaller volumes and with NTFS on larger ones. But all that doesn't matter.

The important concept to grasp is that NTFS doesn't "suffer" from fragmentation as much as FAT does. It still becomes fragmented, but it takes a greater amount of fragmentation to hurt NTFS's performance than it does FAT's. The reason, again, has to do with better search algorithms and the addition of the MFT in the NTFS file system.


~cdogg
"Insanity: doing the same thing over and over again and expecting different results." - Albert Einstein
[tab][navy]For general rules and guidelines to get better answers, click here:[/navy] faq219-2884
 