
What to do to avoid database being "corrupted"?

Status
Not open for further replies.

Mandy_crw (Programmer)
Jul 23, 2020
I have revised and revised my app. I have five databases open every time my app is opened (two applications, actually, and all databases are shared by the two applications). In my exit routine I usually add CLEAR ALL and CLOSE DATABASES, then programmatically shut down the computer. But lately (twice, actually) two of the databases are corrupt when the app is opened the next day. Is there anything I have to add to my exit routine so I can avoid my database being corrupted? Thanks everyone... God bless.....
 
Mandy,
What is the corruption that you are getting? Can you be more specific?
Is it data that isn't presented, do you get a specific message saying it's corrupted?
Is it the database or the index that is corrupted?
Do you have General fields in the tables?

Also, if a machine "reboots" (like during a windows update, unattended), then the application may not get closed properly, which is the leading cause of a DB corruption. I also assume you mean the "table" is corrupt, and not the "Database".


Best Regards,
Scott
MSc ISM, MIET, MASHRAE, CDCAP, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"I try to be nice, but sometimes my mouth doesn't cooperate."
 
Oh, one additional question, what kind of locking do you use?
(Row, table, optimistic, pessimistic)?


Best Regards,
Scott
MSc ISM, MIET, MASHRAE, CDCAP, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"I try to be nice, but sometimes my mouth doesn't cooperate."
 
Oh yes Scott, I mean table, not database. I just FLOCK() the files. I don't know what pessimistic and optimistic are. The table is corrupted; every time I open it, a dialog box tells me the table is corrupted.
 
Mandy,

First, when tables are corrupt you of course repeatedly get the same message; they don't repair themselves just by waiting.

Is it the Error 2091
Table "name" has become corrupted. The table will need to be repaired before using again.

There is a helptext for each error. And the help for this is:
Foxpro Help for Error 2091 said:
Either the table record count does not match the actual records in the table, or the file size on the disk does not match the expected file size from the table header. Repair the table using an appropriate third-party repair utility for Visual FoxPro tables before opening the table again.

So, there is no function within VFP to repair DBFs.

There are several tools for repairing tables. FoxFix from Xitech was quite popular but doesn't exist anymore.
I think most of us already have our tool of choice, and I honestly don't know what still exists and is recommendable.

You can try one thing before looking for a repair tool or recommendations from others:
1. It should be obvious to make a copy of your data files before doing anything, so you keep the chance for another tool to do a working repair or a better repair. Some tools work with loss of some records; others try to fix details.
2. If you SET TABLEVALIDATE TO 0 you might be able to USE the corrupt table without an error. That does not mean it has magically been repaired; you just told VFP to ignore the errors. If that works, the next step is APPEND BLANK and DELETE, so you add a new blank record and delete it. These two steps store a correct reccount into the DBF header, and you're done.
3. The last step after such a repair is to PACK the table to get rid of the record you appended and deleted. This is optional, but it becomes more important if there are indexes that don't allow duplicate values: if your application uses APPEND BLANK, a later blank record might duplicate the one you just added and cause an index violation that rejects the new blank record, even though you deleted yours. Deleted records still exist in the DBF, only marked as deleted, and they are not removed from the indexes either.

You should be able to do all this with SET EXCLUSIVE ON, as no one else can open the table then. This also makes the PACK possible, which needs exclusive access.
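The steps above can be sketched as a whole. This is only a sketch, assuming the damaged table is called mytable.dbf (the name is just for illustration):

```foxpro
* All of this on a COPY of the damaged files, never the originals
SET TABLEVALIDATE TO 0   && don't validate header/reccount when opening
USE mytable EXCLUSIVE    && exclusive access also allows the PACK below
APPEND BLANK             && the write stores a correct reccount in the header
DELETE                   && mark the helper record as deleted again
PACK                     && optional: physically remove deleted records
USE                      && close the table
SET TABLEVALIDATE TO 3   && 3 is the default validation level in VFP9
```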

Chriss
 
Ok Chris... Thanks for the help... the table that gets corrupted is the table being ZAPped when the form is closed... does zapping have anything to do with it?
 
ZAPping a table, just like any other write operation, can corrupt a table. It's unusual though, as zapping only empties the table, keeping the header and setting the reccount in it to 0. If I needed tables emptied again and again, I'd perhaps also ZAP them for simplicity, but you could also create empty tables with CREATE TABLE instead. I never heard of a DBF created that way being corrupt; that would be a corruption right from the start.


PS: It's unusual, as ZAP requires exclusive access, so what should get in the way of it both truncating the file and changing the header?

If you don't want to learn how to recreate, with a CREATE TABLE statement, a table you built interactively in the table designer, the second easiest thing is to take the table while it's empty and copy its files as a template to get back to that state.
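That template idea could be sketched roughly like this (the file names are invented for illustration; the CDX line only applies if the table has a structural index):

```foxpro
* Once, while the table is empty: keep template copies of its files
COPY FILE pend.dbf TO pend_template.dbf
COPY FILE pend.cdx TO pend_template.cdx

* Later, instead of ZAP: erase the files and restore the empty template
CLOSE DATABASES          && the files must not be open to erase them
ERASE pend.dbf
ERASE pend.cdx
COPY FILE pend_template.dbf TO pend.dbf
COPY FILE pend_template.cdx TO pend.cdx
```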

One problem I see which could explain ZAP failing is that the problems Microsoft has with networking and related protocols like SMB could grant you exclusive access while you actually didn't really get it, causing ZAP to fail for that reason. It's next to impossible to fail on the network when you erase the files completely and then copy a healthy empty DBF over them.

Are there still concerns? Well, users using your app, of course. When this operation is done at a time a user wants to use the table, that of course fails, because the file doesn't exist for a moment. So it actually is bad to share a table that you want to empty from time to time. Either give a separate table to each user, or create new tables with a new name and swap usage to them, session by session. Remove the concurrency problem and you remove the corruption problem. There's no way to fully eliminate it when you want to share data; there has to be at least concurrent read access.

Chriss
 
By the way: There are two ways that have proven to be flawless about file corruptions:

1. Using terminal server.

That means the application is installed on one server, specialized in allowing multiple user sessions. Users connect to the server, not just to tables; they use the application via a network (which even includes an internet connection, not just LAN), and the application actually runs on the server. Its desktop/display is transferred to the users. There are license costs involved. Supporting, say, 10 users in parallel is quite easy, as VFP's memory profile is very small. But supporting 100 users will introduce more and more costs for licenses and for a server that can run 100 user sessions in parallel.

The big benefit is that for such a server-side running application the data can be (and should be) on local drives. That means data access becomes file access without going through the network, eliminating any network problems.

2. Not using DBFs

This isn't meant ironically. Data can also be stored in a database server like MSSQL or MySQL. They work on the principle that no data is directly accessed by clients anyway; the clients have to make requests, which are all routed through the server, usually in the form of SQL, returning result sets. This solution also has license costs for the database. It has lower requirements on server capacity: only the data access portion of an application runs server side, not the whole session including the forms and interaction, so this scales more easily in costs. But it needs a rewrite to a client/server architecture instead of what you're used to with DBFs, and that's a hurdle many have already failed to take.

Chriss
 
Hello,

just to add :

In this scenario a cursor may be better.
You can create them, for example, in Form.Load via CREATE CURSOR or SELECT ... FROM ... INTO CURSOR.
In Destroy you can close the cursor.

A cursor is created in the user's temp folder (usually local) and the file is deleted when closed, so corruption is usually not a problem.
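A minimal sketch of that pattern; the cursor name and fields are invented for illustration:

```foxpro
* In Form.Load: a local, session-private work table
CREATE CURSOR worklog (idnum I, recipient C(40), status C(12), senttime T)

* ...or filled from an existing table (READWRITE makes it updatable):
* SELECT * FROM pend WHERE status = "PENDING" INTO CURSOR worklog READWRITE

* In Form.Destroy: close it; VFP removes the temp file automatically
USE IN SELECT("worklog")
```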

Anyway
As suggested by Chriss, SQL Server based data is MUCH better.
We switched all our multi-user apps from DBF to MSSQL. And so we have no problems with oplocks, corruption, slow performance when a second user starts the program, registry patches requiring SMB settings admins do not allow, ...


Regards
tom
 
Mandy said:
i just FLOCK() the files

That's not at all the way to work with tables. You either USE table.dbf EXCLUSIVE or shared, SET EXCLUSIVE ON to do everything with exclusive access, or OFF to do things shared. But an application sharing data should never need any exclusive access to data. So also forget about exclusive operations during working hours.

You can do things needing exclusive access administratively or during nightly scheduled jobs when nobody works, not at a time when all or even just some users use the application.

While you might get exclusive access at some time, never forget that needs more work, more code, even just for opening data shared: because some user could now have a file exclusive, you can't be sure a USE works for any other user, neither shared nor exclusive.

That means if you program for shared access only, throughout everything in your application, you don't need to catch cases where you don't get access because someone else has exclusive access. Things are simpler, much simpler, that way.

If you want some exclusive access for short times to do something like ZAP, then you introduce the problem that putting a table into a DE, or programming a USE ... SHARED for the normal case, fails once one user has exclusive access. You have to write twice the code, if not more, to cover cases waiting for access. This can't all be done just with the REPROCESS setting. You easily get into trouble if you want even one feature that sometimes needs exclusive access.

And then also users can become a problem for other users, if they start something that has exclusive access and don't finish it.

There are enough problems due to the fact that nothing can be written to a file without at least temporary exclusive access. But don't take that task into your own hands; let the automatisms do it for you. All DBF writing commands like APPEND, REPLACE, DELETE and SQL INSERT/UPDATE/DELETE make the necessary locks automatically and with the least problems. If you add manual locking, you just make the problem bigger, not smaller; not with FLOCK() and not even with RLOCK().

What you can and should do to protect data is add transactions. But you also have to know how and when. You start a transaction when you have all the changes you want to commit ready for storing; then start, go through all changes, and end the transaction. That's also locking files, but it works better than just manual locks. It can only have one of two results: either you made all changes and succeed with ending the transaction, or ending the transaction fails because of some network problem and the files revert to the state at the start of the transaction.
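A minimal sketch of that bracket. Note that VFP transactions only work on tables that belong to a database container; the two save routines are hypothetical placeholders for "all changes ready for storing":

```foxpro
* Collect all changes first, then keep the transaction as short as possible
BEGIN TRANSACTION
IF SaveOrderHeader() AND SaveOrderItems()   && hypothetical save routines
   END TRANSACTION    && commit: both tables change together
ELSE
   ROLLBACK           && any failure: both tables revert to the start state
ENDIF
```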

So programming for multiple users with shared access should focus on doing exactly this and only this. There is no room for any exclusive access.

Chriss
 
Hi Chriss and Tom... Thank you for your suggestions, and thank you Chriss for a very comprehensive explanation every time... While I was repeatedly reading my code, I think I found the culprit... is it the timer that I put in? I have not stopped the timer before zapping. How would I stop the timer when it is in a class? Thanks....

Code:
Define Class TimeTimer as Timer
Interval=1000 && every second
oLabel = .null.

Procedure Init()
_screen.AddObject('lblTime','Label')
This.oLabel = _screen.lblTime
This.oLabel.Visible = .f.
ENDPROC

Procedure Timer()

SELECT pendlog

DO WHILE .t.

IF RECCOUNT("pendlog") = 0
loop
ELSE
exit
ENDIF

ENDDO

GO top
DO WHILE !EOF()

IF ALLTRIM(UPPER(pendlog.status)) = "PENDING"

REPLACE STATUS WITH "Transferred"

insert into sentlog (idnum,nctr,recipient,mobile,araworas,status,message,senttime) values ;
(pendlog.idnum,pendlog.nctr,pendlog.recipient,pendlog.mobile,pendlog.araworas,"Transferred",pendlog.message,DATETIME())

IF pendlog.nctr = 5

MESSAGEBOX("Please do not pull power plug from the power outlet....",0+64+4096,"Monitoring!",10000)

SET SAFETY off

CLOSE DATABASES
USE shifts
ZAP

CLOSE DATABASES
USE pend
ZAP

CLOSE DATABASES

MESSAGEBOX("Computer shutting down!" + CHR(13) + ;
"Please click OK to shutdown now or wait for 10 seconds.",0+64+256+4096,"Monitoring!",10000)

thisform.release

RELEASE ALL

run /N7 shutdown -s -t 01

ENDIF
ENDIF
SKIP
ENDDO

This.oLabel.Caption = Transform(DateTime())
Endproc
Enddefine
 
Hi Mandy,

At the risk of over-simplifying, just a couple of simple questions which may help others help.

1. Have you tried replacing these files with your backup (preferably a backup created BEFORE the problem occurred)?

2. Do the dbfs have memo field(s)?

Steve
 
No, in itself there is no command that explicitly causes a corruption, and that is almost never the cause of corruptions. Corruptions don't happen by errors in code; they happen for concurrency reasons most of the time, or flaky network hardware, or both, and more. You can't get a grip on what exactly causes them when they happen, but you surely can program in ways that make defects more or less likely.

Let me look at some details:
Code:
DO WHILE .t.

IF RECCOUNT("pendlog") = 0
loop
ELSE
exit
ENDIF

ENDDO

1. You're doing a loop that has no definite ending, so it can take much longer than the timer interval. Whenever you do such a thing in a timer, first disable the timer with This.Enabled = .F.
2. Even with the timer disabled, ensuring you don't queue up even more timer events that would still need to be processed, the code you execute within the timer does not run in parallel to anything else in your application. Such a loop disables user interaction until you get out of it. That's surely not what you want.

What would be better in such a timer is to make the reccount check only once. If it's 0 there's nothing to do; you can end this timer event, and the next time the timer event fires you check again.

Further code seems to end the application. Which puzzles me a bit. The moment one record in pendlog appears, it will be processed and the app then ends? Is that really the intention?

You may have put this into an extra EXE because it stopped the application from reacting? Well, yes, of course. You created a loop that waits for a record in pendlog, and until that appears the timer is the only code that runs. If you look at the process in the task manager it will bring one CPU core to 100% just for doing that loop. Surely not what you want.

A timer should only do something shorter than the interval it's meant for.

Code:
Define Class TimeTimer As Timer
   Interval=1000 && every second

   Procedure Timer()
      If Reccount("pendlog") = 0
         Return
      Endif

      This.Enabled=.F.
      Select pendlog
      Locate
      Scan For Alltrim(Upper(pendlog.Status)) == "PENDING"
         Replace Status With "Transferred"

         Insert Into sentlog (idnum,nctr,recipient,mobile,araworas,Status,Message,senttime) Values ;
            (pendlog.idnum,pendlog.nctr,pendlog.recipient,pendlog.mobile,pendlog.araworas,"Transferred",pendlog.Message,Datetime())

         If pendlog.nctr = 5
            Messagebox("Please do not pull power plug from the power outlet....",0+64+4096,"Monitoring!",10000)

            Set Safety Off

            Close Databases
            Try
               Use shifts Exclusive
               Zap
            Catch
               Messagebox("Could not empty shifts table...",0+64+4096,"Monitoring!",1000)
            Endtry

            Try
               Use pend Exclusive
               Zap
            Catch
               Messagebox("Could not empty pend table...",0+64+4096,"Monitoring!",1000)
            Endtry

            Messagebox("Computer shutting down!" + Chr(13) + "Please click OK to shutdown now or wait for 10 seconds.",0+64+256+4096,"Monitoring!",10000)

            Run /N7 Shutdown -s -T 01
         Endif
      EndScan

      This.Reset()
      This.Enabled=.T.
   EndProc 
EndDefine

This still has some requirements before the timer can work, the pendlog table has to be open.

You might not even need the check for Reccount(). If pendlog is empty, the SCAN loop ends where it begins, as there are no records, so that doesn't need an extra check. To get to the pending records as fast as possible, an index on UPPER(status) would help, but then not on ALLTRIM(), and you'd check FOR UPPER(status) == PADR('PENDING', LEN(status)) to make best use of the index on UPPER(status).
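The index described above could be created like this (the tag name is invented; creating an index needs exclusive access, so do it administratively, not while users work):

```foxpro
USE pendlog EXCLUSIVE
INDEX ON UPPER(status) TAG upstatus   && Rushmore can then optimize the FOR clause
USE

* Later, in the timer, the optimizable form of the check:
* SCAN FOR UPPER(status) == PADR("PENDING", LEN(status))
```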

I removed some things, because if you run shutdown, you can be sure your application ends, and that also closes the form. There is no point in doing Thisform.Release(); you only risk destroying the timer on the form before it executes the shutdown. Anything that runs stops eventually anyway due to the shutdown.

And this will shut down the whole system once one record has pendlog.nctr = 5, are you sure about that? Is that what should shut down the system?

But notice, this shutdown will only happen if the pendlog record with nctr = 5 also was a PENDING record. You don't check this separately; you check it within the IF that first checks for the pending status.

As said above, I don't see a point in zapping a DBF, needing exclusive access for this, even if that only lasts shortly, as this process then ends and shuts down the whole computer. You can only ZAP with exclusive access. So if it is a condition that these tables are empty at application start, then better delete them in the timer instead of zapping them, and recreate them at start. Which could still fail if some other user/client/process uses them, even just shared. Or, as Tom suggested, use cursors. It seems to me only pendlog is getting data from other computers, and the tables you ZAP are exclusive to this monitoring process. That might not be the case, I can't tell for sure, so it's up to you to decide. But if you delete the data when ending the process, then it's just temporary data, and cursors are ideal for that.

Chriss
 
Mandy said:
i think i have found the culprit.. is it the timer that i have put? I have not stop the timer before zapping? How would i stop the timer when it is in Class? Thanks....

Okay, I already answered the last question: a timer is disabled by setting its Enabled property to .F. But I don't think that not disabling the timer alone causes the file corruption, because the next timer event is a second later; by then the whole computer was already in shutdown and the timer perhaps did not even exist anymore.

It could shed some light and help to know what happened if you still have the defective table files. The error I think you had points out that the file has the wrong size or reccount, depending on the point of view: a specific reccount means a specific size. If you ZAP a table it should become small, not 0 bytes, because there is a header section with information about the fields and their types, besides some other information. And the reccount of a zapped DBF should be 0.

Did you try using SET TABLEVALIDATE TO 0 to open them for repairing them?

But to also go back once more to the first question's assumption that it's the timer: a ZAP of a file is an allowed operation with a defined result, not corrupting tables, just like any INSERT/APPEND/REPLACE/UPDATE/DELETE is; nothing in any command is able to explicitly cause a corruption. What could cause corruption is doing a ZAP of a DBF you don't have in exclusive access. And usually VFP would hinder that anyway, even if you don't explicitly open a file exclusive: if you don't use a table exclusive, zapping it, even with SAFETY OFF, causes the error "File must be open exclusive". It's just that the network and its protocols can get in the way of the explicit truth about such states. Corruptions always have to do with network errors, some data packet coming over wrong, and/or concurrent access: another user did something just slightly before or after.

And even if it's the table you zapped, you never know if the ZAP itself worked and the corruption came from some other client acting on the DBF shortly afterwards, when the file became available again for shared use before the last touch to it (putting the reccount back to 0) was done, for example.

Notice even a ZAP can't do two things at once: either the reccount is set to 0 first and then the part of the file after the header bytes is removed, or the other way around. There always is a small chance the file got shortened but the reccount still is whatever it was before shortening. And then it could have been set to 0, but whatever client did an insert set it to the old reccount+1 instead of 1, as these operations overlapped and crossed.

It should not be possible, because the whole file is only unlocked from exclusive access when the ZAP is final. But with today's network speed-up mechanisms of caching, anything can be done in the wrong order. VFP's mechanisms are far older than current network protocols. And VFP isn't a supported product anymore, so it also isn't tested about how network protocol changes affect it, for example.

It's still a shame. I think Tamar Granor once wrote that it often was VFP which discovered some error or glitch in a Windows or file system change. But since its support ended in 2010, including VFP in such tests isn't done anymore. By the way, it's also true that the whole topic of SMB was a problem before 2010, too.

All in all, be assured that you can't really do anything, nor do you need to be afraid that you or your code caused the corruption. It just happens. There's the theoretical idea of what should happen by the planned procedure of code acting on the files, and then there is the reality of hardware and physics and how things can happen in unplanned order. And protocols like TCP/IP already take into account that data can get lost on the way, and have mechanisms fixing such problems, repeating packets, etc. But still things can go wrong.

If you still have the defective DBF it would be interesting to examine it, to see what is wrong in the header reccount and file size, and whether it's even more than that. If there's something else wrong, the culprit can be anything else. But this isn't a topic of finding an error and fixing it. It's not a topic of finding a culprit. It's just that errors can happen, as the concept of shared files is complex.

Chriss.
 
Hi Chriss… those explanations were really great!!! There are points where I've realized that some of my code was really wrong… God, I wish I could meet you… As for the corrupted file, I still have it, and thinking that I could still open it because it contains vital records, I will try to open it using your advice, Chriss. Everything you said is really enlightening.. God, how I wish I had that knowledge too… thank you so much for helping me always… How could I ever repay you and everybody here in the forum… you are all great! Little by little I am learning… Thank you so much Chriss… it's 4am but I'm just so excited to review my code and the code you have given… God bless Chriss…

Hi Steve… I don't have a backup file of the table, and it does not contain a memo field… thanks so much Steve…
 
Mandy said:
Hi steve… i dont have a backup file of the table...

Hi Mandy,

I had a feeling that was going to be the answer, but I just had to ask it anyway. It's the first question I ask my clients when they call me with a problem because it's often the easiest solution.

The usual answer is "No" (otherwise they might not be calling me). I plead with them from day one to back up data which if lost would ruin their business.

I build a backup routine in some of my apps, but even then, I can't control WHERE they put the files. Ideally, the files should reside OFF the computer. In the old days, some used tapes. These days, the "cloud" is one option (size, cost and privacy can become an issue). Another is a backup option offered by the host. I use external hard drives and a strong threat defense (BitDefender).

[End of Rant]

Steve


 
Thank you Steve... I'll do backup from now on...

Myearwood, yes I am employed, but not paid for the app I'm making... as I mentioned before, I was a dBase+ to FoxPro programmer, but since VFP looks the same, I studied it and later on it became a hobby. I am just so happy that I still have the "programmer" in me... I'm also challenged because VFP shows a lot of capabilities compared to dBase+ and FoxPro... the interface is a lot different... so I am just really enjoying it... and challenged... maybe later on I'll try to sell it if someone will buy... [bigsmile][bigsmile] Thanks Myearwood....
 
Mandy said:
Thank you Steve... I'll do backup from now on...

Great! I guarantee it will simplify your life and you will sleep better!![sleeping2][smile]

LOL

Steve
 
Hello Mandy,

just 2 suggestions.
Make sure that you have a CLOSE DATABASES in your code before terminating the program.
For terminating see CLEAR EVENTS and QUIT; ExitProcess is, as mentioned, rarely used for that.

We have a system file included in the EXE (pk_sys I, systype I, syswert M).
This system DBF is filled via some code and FILETOSTR() with icons, ... and the DBC/DCT/DCX.
On startup we use this system DBF, read the entries and STRTOFILE() the syswert into the user's temp folder and open it there. So it's easy to update (because it's within the delivered EXE), no 1709 errors and no corruption due to network issues.
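Tom's embed-and-extract idea could be sketched roughly like this. The table and field names follow his description; everything else is an assumption (and for binary content the memo field should be created NOCPTRANS, so no code page translation mangles it):

```foxpro
* Build time: store a file into the memo field of the system table
USE sysdbf
APPEND BLANK
REPLACE systype WITH 1, syswert WITH FILETOSTR("mydata.dbc")

* Startup: extract it into the user's local temp folder and open it there
LOCATE FOR systype = 1
lcLocal = ADDBS(SYS(2023)) + "mydata.dbc"   && SYS(2023) = VFP temp path
STRTOFILE(sysdbf.syswert, lcLocal)
OPEN DATABASE (lcLocal)
```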

Regards
tom
 
Just one detail, as you recently also asked about data sessions.

Code:
Close Databases
This does not close all database file usage when you have multiple forms using multiple, each private, data sessions.
But the need to close anything doesn't exist: when a VFP app closes it isn't just ending the process. What is still done, also when you do ExitProcess, is the unloading of vfp9r.dll, the FoxPro runtime that always starts first and is unloaded last. That's simply triggered by the general DLL unloading process of Windows when a process ends, and you don't have to do anything about it.

Using data sessions, table closing is built into releasing a form, as that also ends the data session that belongs to it, unless the form worked in the global default data session. But ending the process you end all forms and trigger all kinds of closing processes, closing all files to which your process had explicit or implicit handles, etc., including the global default data session.

Closing files in code is only important for having more of your own control, mainly about the point in time and the order. That's why doing a shutdown is not a good move; besides, it affects all other processes on that computer, which is not just a topic of VFP.

To stick to the data session and, more important, the visual data environment of a form (an SCX, not a PRG based form class): this data environment has your tables in its visual section, and they are not only opened when the form starts, they are also closed when the form releases. The DE does not only implement the initialization but also the unloading. So there's actually nothing to do for you, unless you find out something blocks the release of a form. That can happen, for example, by transactions not ended. Nevertheless I'm sticking to it: you can actually cook and eat, and the dishes are done automatically. Even transactions are automatically closed, just usually by rolling them back, not committing/saving. And they can actually affect other users not getting file access as long as the transactions linger. That is the motivation for doing roll-up code in VFP that closes tables and other files, and commits transactions or rolls them back.

A VFP application also ends at the end of the main.prg. There needs to be neither an ExitProcess nor a QUIT, no CANCEL, not even a RETURN. The last line of the main.prg ends the VFP process just like the last line of a method causes the return to the caller, or to the READ EVENTS if it wasn't code calling but an event that happened. Again, we discussed this topic a lot recently in thread184-1817628. That was not your thread, but you posted in it and stated you read it, too. The use case for ExitProcess mainly is to have an administrative way out, and as much as I talked through many points there, it's true the usual ending of a VFP application is by CLEAR EVENTS, in turn going to what is after READ EVENTS (unless a modal state like a MessageBox hinders that) and then exiting, when all code after READ EVENTS is done. And that includes the standard case that there is no code after READ EVENTS at all, or only class and function/procedure definitions that are not executed unless explicitly called.

Just to stress this once more: CLEAR EVENTS actually ends READ EVENTS, code then continues from there onwards, and that ends VFP. RETURN TO MASTER, as I explained in that thread, can cause the same as CLEAR EVENTS and can indeed interrupt an infinite loop, where CLEAR EVENTS doesn't have the full power to end whatever currently runs and return after READ EVENTS. It's not a usual issue, and therefore you don't usually need or do ExitProcess. Nor do you shut down the computer. The end of a VFP process is just actually getting to the end of the code. And READ EVENTS is just a stopper, a command telling VFP to stick to that line until events happen, which includes any mouse or keyboard input, clicks, pasting, OLE drag & drop, menu usage. Which also is the reason an application without READ EVENTS ends: because, well, it ends. Surprise: there is no more code to do, then good bye, I'm gone, finished, the end.

People forget READ EVENTS, or perhaps better said learn about READ EVENTS first, because the IDE is usually the thing that keeps VFP running when main.prg has already ended. And so your non-modal forms also keep running, and an application seems complete and ready to build and deploy. Yet when you compile and start the built EXE, what you removed is the command window and IDE, so getting to the end of main.prg ends the EXE. That's what I think everybody has experienced in VFP usage.
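The usual main.prg skeleton this describes, as a sketch (the form name is invented):

```foxpro
* main.prg: minimal application skeleton
ON SHUTDOWN QUIT      && react to the user closing the VFP window
DO FORM mainform      && a non-modal form; CLEAR EVENTS is issued on exit
READ EVENTS           && execution sticks here until CLEAR EVENTS
* reached after CLEAR EVENTS: optional roll-up code, then the EXE ends
CLOSE DATABASES ALL
```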

So the only reason for writing code wrapping things up, closing all work areas, etc. is to get things done in the way, and especially in the order, you want them done. If you don't care and don't need to care, then also don't write such code.

If you have problems ending a VFP process, then it's likely a hurdle you built into a single form or a single class, and you have not programmed the end of each single thing the way it should be. Every Init() also has a Destroy(); that's the event for tidying up what hasn't already been done, and that's the principle of self care. And it's the cleaner, and in detail always easier, way. People instead think about general end code to catch anything missed in each special single case. It's overwhelming to write all the destroy code if you never did even one of them. But to give a stronger reason: even when you use QUIT, quite a similar "hammer" to end VFP as ExitProcess is, you still run through every form's QueryUnload, Unload, and Destroy events. It's also the main reason a form is a bit more complex in its ending; you have three events, not just the Destroy. Take a look into that in the help.

By that principle, if everyone cares for themselves there is no need for a central carer: every object should tidy up after itself and only needs to care for others, for child objects, if they are not programmed to care for themselves. And what do you put into Destroy events? The good news is: most of the time nothing, you didn't forget much. Forms end their data session, for example. Ask yourself that same question, what to do in Destroy, whenever you start something in any code anywhere. In some cases you process something from start to end and already finish with it within a button click; then there is nothing to do in Destroy. In other cases you generate query results to display them and surely don't close them right away, but at some point it will be time. Always think in brackets: you open something, it needs to close somewhere.

Chriss
 