
How to log cyclically with a predetermined fixed log size, maintenance-free.


Chris Miller

Programmer
Oct 28, 2020
Here's a short demo of fixed-size logging with a DBF log file.

The main idea is to pre-populate a fixed number of log records, which means the log table already starts at the size needed to hold as many messages as you want to be able to look back on. That size is then kept constant. At least my check results in 0 bytes of size change after logging thousands of messages, of which only the last 10 remain in the log table. The demo log size is only 10, but that's a constant you can and obviously will adapt to what you need, depending on what volume of log messages you have per second, hour, day, year, whatever.

To run this, adjust the class property LogTable so that the DBF file can be created, i.e. make sure the directory for the log table already exists.

It will show a browse of the log after it's filled with 10 messages. Then 2 further messages are logged, with a browse after each. You'll see the browse record count stays constant: message 11 replaces message 1 and message 12 replaces message 2. To progress through the demonstration, just close the browse window each time it pops up. After the third browse is closed, a test starts that logs thousands of messages (without showing them, of course) and monitors the total size of the log DBF and CDX files to check for any size changes. I didn't detect any bloat, so the log file really keeps a fixed size.

This has just the slight disadvantage of starting with the maximum-size log file right away. On the other hand, the logging process is as simple as you can make it: just a LOCATE followed by a REPLACE. Use the log DBF with that index and you have the records in chronological order. Without the index, the oldest record can be anywhere, so always view the log table in index order - or in descending index order to see the latest messages first.
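For instance, a minimal sketch of viewing the latest entries first (assuming LogAlias is open, as the class below arranges):

Code:
* Show the newest log messages at the top
Select LogAlias
Set Order To Tag chron Descending
Browse
Set Order To Tag chron Ascending && restore the order the logger relies on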

The logging is also done on the log DBF opened exclusively to avoid any interference; if the logger doesn't get exclusive access, it throws an error and refuses to work. But you may change this to shared usage and, of course, adapt it in any other way for your needs.
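If you do switch to shared usage, the adapted open and replace steps could look roughly like this (a sketch of my own, not part of the demo; the class itself stays exclusive):

Code:
* Hypothetical shared-mode variant
Use (This.LogTable) Alias LogAlias Order Tag chron In 0 Shared
* ... and in LogMessage(), lock the record before recycling it:
Select LogAlias
Locate && oldest record in index order
If RLock('LogAlias')
   Replace LogTime With Datetime(), sortstring With Sys(2015), ;
      LogMessage With tcMessage In LogAlias
   Unlock In LogAlias
EndIf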

Code:
* Logger usage
* loLogger = CreateObject("Logger") && once
* loLogger.LogMessage(cMessage) && for each log record

Local loLogger
loLogger = CreateObject("Logger")
Local lnI
* creating 12 messages in a log fixed to 10 records (LOGRECCOUNT defined as 10 for this purpose)
For lnI = 1 to 12
   loLogger.LogMessage(Textmerge('message <<lnI>>'))
   If lnI>9 && browse to see progress after logging messages 10, 11, and 12
      Browse && first browse displays 1-10, second 2-11, third 3-12, so records 1 and 2 are recycled.
   EndIf 
EndFor 

Local lnJ, lnSize, lnOldSize
lnSize = 0

* Check whether the DBF and CDX grow over time
Clear
Set Notify cursor off 
For lnI = 1 to 11
   Release laLogtablesize
   ADir(laLogtablesize,ForceExt(loLogger.logTable,'*'))
   lnOldSize = lnSize
   lnSize = laLogtablesize[1,2]+laLogtablesize[2,2] && sum dbf and cdx file sizes
   If lnOldSize>0
      ? 'Log size change after 1000 logs:', lnSize-lnOldSize
   EndIf 

   For lnJ=1 to 1000
      loLogger.LogMessage('dummymessage')
   EndFor lnJ
EndFor lnI
* Result (for me): No size changes. That was the goal.


* Logger class definition
#Define LOGRECCOUNT 10   
Define Class Logger As Custom
   LogTable = 'c:\programming\tests\log.dbf'

   Procedure Init()
      Return This.OpenLogtable()
   Endproc

   Procedure CreateLogtable()
      Use in Select('LogAlias')
      Create Table (This.LogTable) (LogTime datetime, sortstring char(10), LogMessage char(254))
      Local lnI
      For lnI = 1 To LOGRECCOUNT
         Insert Into (Alias()) Values (Datetime(),Sys(2015),'')
      Endfor lnI
      Index On sortstring Tag chron Ascending
      Use Dbf() Alias LogAlias Order Tag chron Again Exclusive
   Endproc

   Procedure OpenLogtable()
      If !Used('LogAlias') Or Not Upper(Dbf('LogAlias'))==Upper(This.LogTable)
         Local laDummy[1]
         If Adir(laDummy,This.LogTable)=0
            This.CreateLogtable()
         Else
            Try
               Use (This.LogTable) Alias LogAlias Order Tag chron In 0 Exclusive
            Catch To loException 
               loException.UserValue = "Could not get exclusive access to log table."
               Throw
            EndTry 
         Endif
      EndIf
      
      Return (Reccount('LogAlias')=LOGRECCOUNT)
   Endproc

   Procedure LogMessage(tcMessage As String)
      If This.OpenLogtable()
         * make sure LOCATE runs in the log's work area
         Select LogAlias
         * locate the oldest log record (first in index order)
         Locate
         * replace it with the new log entry
         Replace LogTime with Datetime(), sortstring with Sys(2015), LogMessage With tcMessage in LogAlias
      Else
         Error 'log table reccount not as expected'
      EndIf 
   EndProc
   
   Procedure Error()
      LPARAMETERS nError, cMethod, nLine
      
      If nError = 2059
         ? Message()
         Cancel
      EndIf 
      
      ? nError, cMethod, nLine
   EndProc 
EndDefine

Notice how sorting with Sys(2015) works on the basis of it being a string that sorts in chronological order. Sorting by the datetime field is not good enough, as that is only precise to the second and you could have very many records within the same second. SYS(2015) always ascends alphabetically. In shared usage, records from different client computers may interleave if the computer clocks are not synchronized, but that doesn't stop the mechanism from working: records from the same second just may not be recycled in exact order across clients, and that will never matter, as they exceed their "best before" date in the same second anyway.
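A quick way to convince yourself of that ascending property (a minimal sketch; the exact return values vary by machine and moment):

Code:
* Sys(2015) returns a unique 10-character name that keeps ascending
Local lcFirst, lcSecond
lcFirst = Sys(2015)
lcSecond = Sys(2015)
? lcFirst, lcSecond  && two distinct strings
? lcFirst < lcSecond && .T. - the later call sorts after the earlier one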

You can extend this, for example, by archiving log entries that are interesting, adding further fields to the log, etc. Of course, the log could also start empty, and before reaching LOGRECCOUNT you would APPEND BLANK before the REPLACE; but if the full log size is allocated from the first init, you can be sure logging will never fail because of insufficient disk space.
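A grow-then-recycle LogMessage body could look roughly like this (a sketch under that assumption, not the code above):

Code:
* Hypothetical variant: the log starts empty and grows to LOGRECCOUNT
Select LogAlias
If Reccount('LogAlias') < LOGRECCOUNT
   Append Blank && grow until the fixed size is reached
Else
   Locate && then recycle the oldest record as before
EndIf
Replace LogTime With Datetime(), sortstring With Sys(2015), ;
   LogMessage With tcMessage In LogAlias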

Chriss
 
This is very interesting, Chris. I assume it arose out of the discussion in thread184-1820328. I wish I had thought of the idea twenty years ago.

I was working on an application with a large number of users, where the management wanted every update (insert, update, delete) to be logged. I did it by inserting a new record into a DBF for each of those events. The DBF very quickly approached 2 GB, so I added some code to periodically check the size and, if necessary, delete the oldest records and then pack the table. Obviously that could only be done when exclusive use was available, which was during the monthly maintenance routine.

It all worked well enough - and has kept working for the last twenty years. But there was always the fear that the table might suddenly start growing much faster and therefore reach 2 GB before the users could do anything about it.

Having a fixed-length DBF would have been a good solution. It would contain exactly the number of records that keeps it a bit under 2 GB, indexed on the datetime field. Every time an action needed to be logged, the code would first search for the first record with a blank datetime; if none was found, it would then search for the one with the earliest datetime. Once a record was found, it would be overwritten with the new data, as in the sketch below.
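Something like this (alias and tag names are just for illustration, not actual code from that application):

Code:
* Hypothetical: reuse a blank slot first, else the oldest entry
Select LogAlias
Set Order To Tag bytime && assumed index on the LogTime field
Locate For Empty(LogTime)
If !Found()
   Go Top && no blank slot left: the earliest datetime sorts first
EndIf
Replace LogTime With Datetime(), LogMessage With tcMessage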

I will keep this in mind if the need ever arises again.

Mike




__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Mike Lewis said:
it would first search for the first record with a blank datetime, and if not found it would then search for the one with the earliest datetime
Which is yet another idea to find the relevant record to update, yes.

I avoid needing to look for blank datetimes by filling Datetime() into the initial records, too. By the way, all of that would work even more simply with the default values you can define for DBFs that are part of a DBC.
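For illustration, that DBC variant could be as simple as this sketch (the database name logdb is made up):

Code:
* With a DBC, field defaults fill new records automatically
Create Database logdb
Create Table log (LogTime T Default Datetime(), ;
   sortstring C(10) Default Sys(2015), LogMessage C(254))
Append Blank && LogTime and sortstring are already populated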

The LogMessage concept was the starting point of this. If you sort the data chronologically by index, the top record in sort order is the oldest and is found simply by LOCATE; replacing its datetime with the current datetime moves it to the bottom of the list in sort order while it keeps the same Recno(). That gives you the cyclic nature of the data. This kind of sorting is rarely done with an index, as you usually have chronological order just by physical order. The overall effect is that you write to records 1 through LOGRECCOUNT and then start over at record 1 again - another way to see the cycling effect.
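Something like this sketch makes the wrap-around visible (assuming the demo's freshly created log, where index order initially matches record order):

Code:
* Watch the recycled record numbers wrap around
Select LogAlias
Local lnK
For lnK = 1 To 2*LOGRECCOUNT
   Locate && oldest entry in index order
   ? Recno() && 1..LOGRECCOUNT, then wraps to 1 again
   Replace LogTime With Datetime(), sortstring With Sys(2015)
EndFor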

Chriss
 