Galaxy offers two methods of file scanning. With NTFS 5 (Windows 2000/XP/2003), Galaxy uses the Windows Change Journal, also known as the USN Journal. Each NTFS volume manages its own USN Journal: every change to a file on that volume is logged to the volume's journal with an Update Sequence Number (USN). At the start of each backup, Galaxy records the largest USN in each change journal as a reference point. For an incremental backup, each file's USN is compared against the reference USN recorded by the previous backup, and all files with a USN larger than the reference are added to a collection file list for backup. If the backup is successful, the recorded USN is kept as the new reference USN.
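The selection logic itself is simple and can be sketched as follows. This is an illustration only, not Galaxy code: the file-to-USN mapping and function names are invented for the example, and actually reading USNs from a volume's journal (via the Windows USN Journal ioctls) is omitted.

```python
# Simplified sketch of USN-based incremental selection (not Galaxy's actual code).
# `file_usns` maps each file path on a volume to its latest Update Sequence Number,
# as it might be read from that volume's USN Journal.

def select_changed_files(file_usns, reference_usn):
    """Return files whose USN is newer than the reference recorded by the last backup."""
    return [path for path, usn in file_usns.items() if usn > reference_usn]

def next_reference_usn(file_usns, old_reference_usn, backup_succeeded):
    """After a successful backup, the highest USN seen becomes the new reference."""
    if backup_succeeded and file_usns:
        return max(file_usns.values())
    return old_reference_usn  # keep the old reference if the backup failed

# Example: three files, reference USN 1000 from the previous backup.
file_usns = {r"C:\data\a.doc": 980, r"C:\data\b.xls": 1042, r"C:\data\c.txt": 1107}
print(select_changed_files(file_usns, reference_usn=1000))  # b.xls and c.txt are newer
print(next_reference_usn(file_usns, 1000, True))            # 1107
```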
For all other file systems (NTFS 4, FAT, UFS, etc.), Galaxy uses its Classic File Scan method. With the classic file scan, Galaxy records the start time of each backup as a reference time. For an incremental backup, each file's modification time (MTIME) is compared against the reference time from the previous backup; if the modification time is later than the reference time, the file is added to a collection file list for backup. If the backup is successful, that backup's start time is kept as the new reference time.
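A classic-style scan is easy to sketch. The function below is illustrative only (paths, names, and the error handling are assumptions, not Galaxy internals); it simply walks the content paths and collects files whose modification time is newer than the reference time.

```python
# Minimal sketch of a classic file scan: compare each file's modification time
# against the start time of the previous backup.
import os
import time

def classic_file_scan(content_paths, reference_time):
    """Collect files modified after the reference time (previous backup's start time)."""
    collection = []
    for root_path in content_paths:
        for dirpath, _dirnames, filenames in os.walk(root_path):
            for name in filenames:
                full_path = os.path.join(dirpath, name)
                try:
                    if os.stat(full_path).st_mtime > reference_time:
                        collection.append(full_path)
                except OSError:
                    pass  # unreadable files would be logged by the real scanner
    return collection

# Usage: record the start of this backup, scan against the previous reference.
backup_start = time.time()
changed = classic_file_scan([r"C:\data"], reference_time=backup_start - 86400)
# If the backup succeeds, backup_start becomes the reference time for the next run.
```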
The reference USN or modification time is maintained independently for each file system subclient. This independence allows different backup times and frequencies to be scheduled for each subclient.
For Windows and NetWare systems, the archive bit is also an indicator that a file needs to be backed up. Whether the archive bit is included in the Galaxy classic file scan is an administrator's choice: an application may set the archive bit without modifying the contents of the file, and the administrator may or may not want to back up such files. With the classic file scan method, the option to include the archive bit as a backup indicator is enabled by default. The file system backup API resets the archive bit after the file has been successfully backed up.
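For illustration, here is how the archive bit could be read and cleared on Windows from Python. This is a hedged sketch, not the Galaxy backup API: the helper names are invented, and clearing the bit is shown only to mirror what the backup API does after a successful backup.

```python
# Windows-only sketch of using the archive bit as a backup indicator.
import ctypes
import os
import stat

def needs_backup(path):
    """True if the archive bit is set, i.e. the file changed since it was last cleared."""
    attrs = os.stat(path).st_file_attributes          # Windows-only stat field
    return bool(attrs & stat.FILE_ATTRIBUTE_ARCHIVE)

def clear_archive_bit(path):
    """Clear the archive bit after the file has been successfully backed up."""
    attrs = os.stat(path).st_file_attributes
    # Never pass 0 to SetFileAttributes; fall back to NORMAL if nothing else is set.
    new_attrs = (attrs & ~stat.FILE_ATTRIBUTE_ARCHIVE) or stat.FILE_ATTRIBUTE_NORMAL
    ctypes.windll.kernel32.SetFileAttributesW(path, new_attrs)

# Usage:
# if needs_backup(r"C:\data\report.doc"):
#     ...back the file up...
#     clear_archive_bit(r"C:\data\report.doc")
```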
The File System iDataAgent Properties page presents the option (if available) to choose either the change journal or the classic file scan method. The file scan method option is only available for a subclient whose content resides exclusively on NTFS 5 volumes; all other file systems are forced to use the classic file scan method.
When given a choice, which file scan method is better? If the subclient content covers whole NTFS volumes (rather than just specific directories within the volume(s)), the change journal should be faster. However, as the number of files increases, the performance of the change journal scan can drop drastically. This is because Galaxy caches the change journal information in memory during the scan; if the number of files is large enough, the system may start swapping memory pages, which can slow the scan significantly.
The classic file scan does no memory caching. It is a basic disk read of the modification time and archive bit for all files in the subclient content path(s). For a large number of files, or for subclient content path(s) smaller than a whole volume, the classic file scan can be faster.
Another reason the Change Journal scan can be slow is that it logs every error encountered during the scan to the filescan.log file. With a significant number of errors, the act of logging them can drastically slow the scan process. If you are experiencing slow scans, check the filescan.log file on the iDataAgent for errors.
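If you want a quick sense of how many errors a scan is generating, something like the following rough check can help. The log location and line format vary by installation, so the keyword match below is only a heuristic, not an official parser for filescan.log.

```python
# Rough heuristic: count and rank lines in a scan log that mention "error".
from collections import Counter

def count_error_lines(log_path):
    """Count lines mentioning 'error' in the given log file."""
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            if "error" in line.lower():
                counts[line.strip()[:120]] += 1
    return counts

# Point this at your installation's filescan.log, e.g.:
# for line, n in count_error_lines(r"...\Log Files\FileScan.log").most_common(10):
#     print(n, line)
```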
One common error encountered during a Windows file scan is a permissions error. By default, Galaxy operates under the Windows Local System Account (LSA). The System account, like the local administrator, has access to all files via the Everyone account. If Everyone access is removed or limited and the System account is not given access, either explicitly or through inheritance, you may see permission errors when scanning those files. These errors can drastically slow down your scan and impact your ability to back up and restore files.
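A simple pre-flight check, run under the same account as the scan, can reveal files that will generate permission errors. This is an illustrative script, not a Galaxy utility; it just attempts to read everything under a content path and records what it cannot access.

```python
# Walk a content path and record files and directories the current account cannot read.
import os

def find_permission_problems(root_path):
    """Return paths that cannot be read under root_path."""
    problems = []
    # Directories that cannot be listed are reported through os.walk's onerror hook.
    for dirpath, _dirnames, filenames in os.walk(
            root_path, onerror=lambda err: problems.append(err.filename)):
        for name in filenames:
            full_path = os.path.join(dirpath, name)
            try:
                with open(full_path, "rb"):
                    pass                      # we only care whether the open succeeds
            except PermissionError:
                problems.append(full_path)
            except OSError:
                pass                          # other errors (file in use, etc.) ignored here
    return problems

# Run this under the same account the scan uses (e.g. Local System) to see which
# files would generate permission errors.
print(find_permission_problems(r"C:\data"))
```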
Both scan methods may be slowed if an index needs to be restored from backup. When a file scan is initiated, the Client/iDataAgent contacts the data path MediaAgent to determine whether an existing index needs to be created, updated, or, if it is not in cache, restored from backup. If all the drives are in use, it may take some time before the index is restored. If you are experiencing slow file scans, check the createindex.log on the MediaAgent to see if this is the case.
Additionally, as part of the scan process, an image file is created on the MediaAgent by each incremental file system backup. The image file is a full map of all files scanned in the subclient's content path, whether they were backed up or not. Data path performance problems between the Client and the MediaAgent can delay the writing of this image file. Again, check the createindex.log on the MediaAgent to see if an inordinate amount of time is being spent creating this image file.
If you have the choice of the Change Journal or Classic File Scan method and are experiencing slow scanning performance that cannot be traced to errors or to waiting for an index to be restored, try the other method. Note that switching scan methods will force a full backup. If the performance is still not acceptable, pass the filescan.log and createindex.log files to customer support for evaluation.