
Archive bits and ability to lock access time in AIX 4.x.x??


andyc333 (MIS) - Jun 19, 2003 - US
Hey all,

I was wondering whether AIX 4.2.1 and 4.3.3 have such a thing as an archive bit in the JFS filesystem. I know that newer releases of AIX (and JFS) have this, but I'm not sure whether the older versions do.

Also, is there a way to preserve the "access time" in the inode table on these versions of the OS? We want to know when a file was last accessed, but every time we do a weekly backup we end up accessing all of the files. We'd like to be able to tell when a file hasn't been used in more than 3 months so we can remove it.
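
For background, the access time stored in the inode is easy to check by hand with istat or ls -lu (the path below is just an example):

# istat /home/andy/somefile
# ls -lu /home/andy/somefile

istat prints "Last accessed" and "Last modified" lines straight from the inode; the -u flag makes ls show the last access time in place of the modification time.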

Thanks,
andyc333
 
Hi Andy

I wrote a script using the following commands to remove files older than 3 months:
# find /Dir -atime +90 -exec rm {} \;

# find /dir -mtime +90 -exec rm {} \;

The -atime test checks the last access time; -mtime checks the last modification time.

This worked fine for my purpose...
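
If you want to review the candidates before deleting anything, a dry run works too (/Dir is just a placeholder):

# find /Dir -type f -atime +90 -exec ls -lu {} \;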

sushveer
IBM certified specialist-p-series AIX5L System Administration
AIX/SOLARIS/WEBSPHERE-MQ/TIVOLI Administrator
 
Thanks sushveer. I have something similar to that, but the problem is that when we perform backups (via tar, cp -r, or commercial products) the access time gets updated. We have to do a weekly full backup (and daily incrementals) due to the large amount of data we have (3+ TB). So now you can see our dilemma. We can't use find -atime to really know when a user last accessed a file, since our backup tools always change that access time.

You wouldn't happen to know of another solution? We were thinking of writing a wrapper to save all the file attributes (via istat), run the backup, and then restore the times via "touch". But with 3+ TB of data, the save/restore wrapper would take far too long.
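
In outline, the wrapper we had in mind would look something like this (the path and timestamp are only placeholders):

# 1. record the current access time of each file before the backup
istat /data/project/report.dat | grep "Last accessed"
# 2. run the backup (which updates the access times)
# 3. put the saved access time back, e.g. 09:30 on 19 Jun 2003:
touch -a -t 200306190930 /data/project/report.dat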

Thanks,
andy


 
#!/usr/bin/ksh

#---define a file to compare against
STAMP=/etc/timestamp #---or whatever

#----get list of files modified since previous run
LIST=`find . -type f -newer $STAMP -print`

#----preserve file access times for all files in $LIST
#----also update the last modification date for $STAMP
#----note: I'm using perl to stat the files, but you could use istat
perl -e '
    while ($file = shift) {
        ($ss, $mm, $hh, $DD, $MM, $YY) = localtime(scalar((stat($file))[8]));
        printf "%s %04d%02d%02d%02d%02d\n", $file, $YY + 1900, $MM + 1, $DD, $hh, $mm;
    }' $LIST > $STAMP

#----backup commands using $LIST would go here

#---restore file access times
while read FILENAME FILESTAMP
do
    #----the -t flag takes the CCYYMMDDhhmm value written above
    touch -a -t $FILESTAMP $FILENAME
done < $STAMP

#----don't delete $STAMP file so that it exists for the next run


PS. If you want to do a complete weekly backup, reset the reference file to the epoch first so that every file tests newer than it:
touch -t 197001010000 $STAMP
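
If you schedule this from cron ahead of the weekly full, an entry along these lines would do (the script path and log file are just examples):

# 02:00 every Sunday
0 2 * * 0 /usr/local/scripts/atime_backup.ksh > /var/tmp/atime_backup.log 2>&1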
 
Thanks Ygor! Now I don't have to waste time coming up with a wrapper of my own.

Andy
 