Chris Miller
Programmer
Here's a short demo of fixed-size logging with a DBF log file.
The main idea is to pre-populate the log with a fixed number of records, so the table starts at the full size needed to hold as many messages as you want to be able to look back on. That size then stays constant: in my check, the file size changed by 0 bytes after logging thousands of messages, of which only the last 10 remain in the logging table. The demo log size is only 10, but that's a constant you can and obviously will adapt to your needs, depending on the volume of log messages you get per second, hour, day, or year.
To run this, adjust the class property LogTable so that the DBF file can be created, i.e. make sure the directory for the log table already exists.
The demo shows a browse of the log after it's filled with 10 messages. Then 2 further messages are logged, with a browse after each one. You'll see the browse record count stays constant: message 11 replaces message 1, and message 12 replaces message 2. To progress through the demonstration, just close the browse window each time it pops up. After the third browse is closed, a test starts that logs thousands of messages (without showing them, of course) and monitors the total size of the log DBF and CDX files to check for any size changes. I didn't detect any bloat, so the log file really does keep a fixed size.
This has just the slight disadvantage of starting with the maximum-size log file right away. On the other hand, the logging process is as simple as it gets: just a LOCATE followed by a REPLACE. Open the log DBF with its index and the records appear in chronological order. Without the index, the oldest record can be anywhere, so always view the log table in index order - or in descending index order to see the latest messages first.
Logging is also done with the log DBF opened exclusively to avoid any interference; if the logger doesn't get exclusive access, it throws an error and refuses to work. But you may change this to shared usage and, of course, adapt it in any other way for your needs.
Code:
* Logger usage
* loLogger = CreateObject("Logger") && once
* loLogger.LogMessage(cMessage) && for each log record
Local loLogger, lnI
loLogger = CreateObject("Logger")

* create 12 messages in a log fixed to 10 records (LOGRECCOUNT is defined as 10 for this purpose)
For lnI = 1 To 12
   loLogger.LogMessage(Textmerge('message <<lnI>>'))
   If lnI > 9 && browse to see progress after logging messages 10, 11, and 12
      Browse && first browse displays messages 1-10, second 2-11, third 3-12, so records 1 and 2 are recycled
   EndIf
EndFor lnI

* check whether the DBF/CDX grow over time
Local lnJ, lnSize, lnOldSize
lnSize = 0
Clear
Set Notify Cursor Off
For lnI = 1 To 11
   Release laLogtablesize
   ADir(laLogtablesize, ForceExt(loLogger.LogTable, '*'))
   lnOldSize = lnSize
   lnSize = laLogtablesize[1,2] + laLogtablesize[2,2] && sum of DBF and CDX file sizes
   If lnOldSize > 0
      ? 'Log size change after 1000 logs:', lnSize - lnOldSize
   EndIf
   For lnJ = 1 To 1000
      loLogger.LogMessage('dummy message')
   EndFor lnJ
EndFor lnI
* Result (for me): no size changes. That was the goal.
* Logger class definition
#Define LOGRECCOUNT 10

Define Class Logger As Custom
   LogTable = 'c:\programming\tests\log.dbf'

   Procedure Init()
      Return This.OpenLogtable()
   EndProc

   Procedure CreateLogtable()
      Use In Select('LogAlias')
      Create Table (This.LogTable) (LogTime Datetime, sortstring Char(10), LogMessage Char(254))
      * pre-populate the fixed number of log records
      Local lnI
      For lnI = 1 To LOGRECCOUNT
         Insert Into (Alias()) Values (Datetime(), Sys(2015), '')
      EndFor lnI
      Index On sortstring Tag chron Ascending
      Use Dbf() Alias LogAlias Order Tag chron Again Exclusive
   EndProc

   Procedure OpenLogtable()
      Local aDummy[1], loException
      If !Used('LogAlias') Or Not Upper(Dbf('LogAlias')) == Upper(This.LogTable)
         If Adir(aDummy, This.LogTable) = 0
            This.CreateLogtable()
         Else
            Try
               Use (This.LogTable) Alias LogAlias Order Tag chron In 0 Exclusive
            Catch To loException
               loException.UserValue = "Could not get exclusive access to log table."
               Throw
            EndTry
         EndIf
      EndIf
      Return (Reccount('LogAlias') = LOGRECCOUNT)
   EndProc

   Procedure LogMessage(tcMessage As String)
      If This.OpenLogtable()
         * locate the oldest log record (the first one in index order)
         Locate
         * replace it with the new log entry
         Replace LogTime With Datetime(), sortstring With Sys(2015), LogMessage With tcMessage In LogAlias
      Else
         Error 'log table reccount not as expected'
      EndIf
   EndProc

   Procedure Error()
      LParameters nError, cMethod, nLine
      If nError = 2059
         ? Message()
         Cancel
      EndIf
      ? nError, cMethod, nLine
   EndProc
EndDefine
Notice how sorting by Sys(2015) works because it returns a string that sorts in chronological order. Sorting by the datetime field is not good enough, as it is only precise to the second and you could have very many records within the same second, while SYS(2015) values always ascend alphabetically. In shared usage, records from different client computers may interleave if the computer clocks are not synchronized, but that doesn't stop the mechanism from working: records from the same second on different clients just may not be recycled in exact order. That never matters, though, as they exceed their "best before" date in the same second anyway.
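To see why an ascending string key beats the datetime field as a sort key, here is a small language-neutral sketch in Python (my own illustration, not part of the VFP code; the fixed-width counter key is a made-up stand-in for SYS(2015)):

```python
# Why second-precision timestamps can't order records written within the
# same second, while a monotonically ascending string key can.
from datetime import datetime

stamp = datetime.now().replace(microsecond=0)   # second precision only
records = []
for i in range(5):
    # all five records share the same second-precision timestamp
    key = f"{i:010d}"   # stand-in for SYS(2015): fixed width, ascending
    records.append((stamp, key, f"message {i}"))

# sorting by the timestamp alone cannot recover insertion order...
assert len({r[0] for r in records}) == 1
# ...but sorting by the string key reproduces it exactly
assert sorted(records, key=lambda r: r[1]) == records
print("string key preserves insertion order within one second")
```

The fixed width matters: it makes the alphabetical sort of the keys agree with their numeric order, which is the same property the real SYS(2015) string provides.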
You can extend this, for example, by archiving log entries that are interesting, adding further fields to the log, and so on. Of course, the log could also start empty, with an APPEND BLANK before each REPLACE until LOGRECCOUNT is reached - but if the full log size is allocated at the first init, you can be sure logging will never fail because of insufficient disk space.
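The pre-populate-then-recycle idea itself is language-neutral. Here is a minimal Python sketch of it (my own illustration with made-up names, mirroring the LOCATE/REPLACE logic of the class above):

```python
# Fixed-size log: pre-allocate LOGRECCOUNT slots, then overwrite the slot
# with the smallest sort key on every new message. The log never grows.
import itertools

LOGRECCOUNT = 10
_counter = itertools.count()          # stand-in for SYS(2015)

def new_key():
    return f"{next(_counter):010d}"   # fixed width: string sort = numeric sort

# pre-populated log - its size never changes after this point
log = [{"key": new_key(), "msg": ""} for _ in range(LOGRECCOUNT)]

def log_message(msg):
    oldest = min(log, key=lambda rec: rec["key"])   # LOCATE in index order
    oldest["key"] = new_key()                       # REPLACE sortstring
    oldest["msg"] = msg                             # REPLACE LogMessage

for i in range(1, 13):                # 12 messages into 10 slots
    log_message(f"message {i}")

in_order = sorted(log, key=lambda rec: rec["key"])
print([rec["msg"] for rec in in_order])
# messages 1 and 2 have been recycled; messages 3-12 remain
```

Whether scanning for the minimum key (as the DBF index does for free) is cheap enough depends on the log size; for a few thousand slots it is negligible.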
Chriss