
Refreshing a cursor in a Data session

Status
Not open for further replies.

AndrewMozley

Programmer
Oct 15, 2005
621
GB

I have a class MyClass with a private data session; in its INIT() method, mytable.dbf is opened (USE Mytable in 0).

There is a screen Myscreen.scx which itself opens Mytable (In Datasession #1, perhaps). It then creates an instance of MyClass (with Datasession #8, say). In due course MyScreen makes changes, perhaps adding a record NewRec1 to Mytable in DataSession #1.

When Myscreen calls a function in MyClass - say MyClass.Search() - the Search() function cannot see record NewRec1. I understand that; However I would like Search() to be able to see record NewRec1.

Is that possible? Is there a facility for refreshing the cursor for MyTable in DataSession #8?

Thank you
 
Well, if you save the new record, you will see it in other data sessions. That's all there is to it.
So the question is: why haven't you saved the new record yet?

And just a note about terms: if you open a table, it's a table, not a cursor. So MyClass is seeing into the table, not into any cursor, unless you do something more in MyClass after opening the table that you haven't told us about. If you query data into a cursor, in whatever data session, that cursor only refreshes along with the underlying table if it's not a real cursor but just a filter cursor on the table. Otherwise you need to requery the cursor, of course; it doesn't update automatically.

I'll stop here, because there are too many eventualities: if this, then that. The core thing is: any access to a table (DBF) just sees what is in the file, and any query does, too. If you have a new record in a table and it's not seen in other data sessions, the most probable situation is that you are buffering the new record. To make it available, searchable and findable, you have to store it; you have to save the buffer, i.e. do a TABLEUPDATE().
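A minimal sketch of that point, assuming Mytable is buffered in the form's data session (the field name is made up for illustration):

```foxpro
* In MyScreen's data session: add a record
SELECT Mytable
APPEND BLANK
REPLACE cName WITH "NewRec1"

* While the edit sits in the buffer, other data sessions cannot see it.
* TABLEUPDATE() writes the buffer to the DBF, and only then does the
* record become visible to MyClass's private data session.
IF NOT TABLEUPDATE(.T., .F., "Mytable")
   MESSAGEBOX("Save failed, e.g. due to an update conflict")
ENDIF
```

After the TABLEUPDATE(), MyClass.Search() in DataSession #8 can find NewRec1 like any other stored record.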

Bye, Olaf.
 
I agree with Olaf here.

In the Word support forums, the answer to their #1 frequently asked question is "if you never saved it, there's nothing to recover".

If you haven't saved a new record yet, it doesn't exist anywhere but in the session that's actively creating it. You can't search it if it doesn't exist yet.
 
If you haven't saved a new record yet, it doesn't exist anywhere but in the session that's actively creating it.

That's right. But, just to be clear, "saving" in this context means committing the edits, which in turn means performing a TABLEUPDATE() (assuming the table is buffered, of course).

Mike



__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Thank you all.

What was happening is that the form was using transaction processing; I had erroneously issued two BEGIN TRANSACTION commands; so when the record was being saved, one END TRANSACTION command was not enough to do the business. In the circumstances the only way to complete the transaction was to issue a second END TRANSACTION.

The duplicate BEGIN TRANSACTION has now been removed.

I do not have a need for nesting transactions at present, but is there a function which can return the number of levels of transaction which have been started?

Andrew M.
 
The TXNLEVEL() function returns the number of transaction levels. So, for example, if you wanted to roll back all current transactions, you could do this:

Code:
DO WHILE TXNLEVEL() > 0
  ROLLBACK
ENDDO

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
...or of course END TRANSACTION instead of ROLLBACK. A ROLLBACK of the outermost transaction would cause an overall rollback, I assume; even if you end the innermost transaction with END TRANSACTION, it's not a final commit until the outermost transaction is committed, too.

The best solution indeed is to begin only as many transactions as you also end.
How you start/end transactions depends on your class architecture, but if you nest them, it points to an architectural design flaw.

You say you fixed this already. But let me ask: do you start transactions at form start?
If so, that's the wrong time. A transaction should be put around updating the table, so it needs to be started before saving, not before starting to edit.

Bye, Olaf.
 
... or of course END TRANSACTION instead of ROLLBACK

Of course. I was only giving ROLLBACK as an example.

A transaction should be put around updating the table, so it needs to be started before saving, not before starting to edit

Absolutely right. The key point is that transactions involve imposing locks, and these should only be in place for the shortest possible time. In particular, you should never put any sort of user interaction inside a transaction, not even the dismissal of a message box.

Mike





__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Thank you. In the matter of when to begin a transaction.

On a form one might have an Edit button; if clicked it might enable several text controls bound to fields on the current record and also Save and Undo Buttons.

The user clicks Edit. When he starts entering data into the controls, does the transaction not need to have started already, so that the Save button can issue an END TRANSACTION instruction and the Undo button can issue a ROLLBACK instruction? And how does the screen prevent the bound controls from updating the table as the data is entered?

Just curious!
 
You use buffering; as long as edits are in the buffer, they are not in the DBF, so there is no need to start a transaction yet. If you do, you may even have a mismatch between optimistic buffering of the data and a transaction preventing others from editing the same records, so you have optimistic and pessimistic locking of records at the same time.

A transaction in general is there to take a database from one valid state to the next, to store e.g. an order AND its order items. It's not there to wait for users to end their editing session; you'll lock out too many other users with concurrent editing sessions if you start transactions too early.

The help topic "Managing Updates Using Transactions" has two samples at its end. It uses BEGIN TRANSACTION after modifying records with REPLACE in a buffered table. That's the way it's intended to be used.

The transaction here helps make it less of a hurdle to other users, and it lets you update all data belonging to one change, be it a group of records or records of several tables. The transaction is almost optional if you only update a single table, but it can still wrap up the update of a group of records, to either save all updates or none, not only some. The transaction is not there to lock and buffer data, though.

Bye, Olaf.


 
How does the screen prevent the bound controls updating the table, as the data is entered?

That's what buffering is for. In general, the edits act on data in a buffer. TABLEUPDATE() actually updates the table, and so will appear in your Save button. TABLEREVERT() cancels the changes, and so will appear in your Cancel button.

In this case, the transaction would be wrapped around the TABLEUPDATE(). But there's not usually any point in doing that if you are just updating a single table. Where transactions come into play is when you have several linked updates, for example when you are inserting an order, and then updating the record of the customer who placed the order. In most cases, the transaction is unnecessary.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Also note that when you wrap a Tableupdate() in a transaction, a ROLLBACK returns you to the buffered state. All pending edits remain in the buffer so you can fix whatever caused the update to fail and try it again.
 
Hi
From Hacker's Guide to Visual FoxPro 7:
The transaction commands let you combine updates to a group of tables into a single process so all the changes either happen or don't happen. Transactions can be nested—TXNLEVEL() lets you find out how deep you are.
...
When you need to make updates to multiple tables and it's an all-or-nothing deal (like invoice headers and details), wrap the whole process in a transaction. BEGIN the TRANSACTION, then go through each table attempting to update it. If you succeed in updating all the tables, END the TRANSACTION and go on with your life. If you can't update one of the tables, ROLLBACK the transaction, and either figure out and solve the problem or revert all the tables to their original state. (You may even want to wrap single table updates in a transaction if they involve updates to multiple records.)

So why do we say "sort of?" Because there's still a small window of time where a system crash or a power outage could leave you with messed-up data. Don't throw out those uninterruptable power supplies just yet. Nonetheless, with transactions, the window's much smaller than before.

Each record the transaction affects is locked at the time it becomes part of the transaction. (In fact, not only are these records locked, but they can't even be read by other workstations, so you want to keep transactions as short as possible.) END TRANSACTION and ROLLBACK unlock all the affected records.

You can nest one transaction inside another. This might happen when, say, an order entry branches off to add a new customer. If both the customer and order entry save routines use transactions, and the order entry save routine calls the customer save routine, the order can't be saved unless the customer save is successful.

The limit for nesting is five levels—shades of the READ levels of old. Just as read levels could be checked with RDLEVEL(), transaction levels can be checked with TXNLEVEL().

Transactions apply only to tables (and views) contained in databases. You can't use them on free tables. To be more specific, you can do things to free tables when a transaction is in progress, but the changes can't be rolled back. For remote views, consider using the backend data source's transaction capabilities through SQLSetProp(), SQLCommit() and SQLRollback().

Example
* Save invoice header and details in a transaction.
* This is only a sketch of the real code, which would
* probably include various error checking and handling.

BEGIN TRANSACTION

lGoOn = .T.

SELECT header
IF NOT TABLEUPDATE()
   lGoOn = .F.
ENDIF

IF lGoOn
   SELECT detail

   IF NOT TABLEUPDATE(.T., .T.)
      lGoOn = .F.
   ENDIF
ENDIF

IF lGoOn
   END TRANSACTION
ELSE
   ROLLBACK
   =TABLEREVERT(.F., "header")
   =TABLEREVERT(.T., "detail")
ENDIF

You may be confused by the calls to TableRevert() after the Rollback—we were at first, too. What's going on here is that updating is normally a two-step process. The user updates the buffers by making changes (through whatever interface you provide). Then, calls to TableUpdate() copy the changes from the buffers to the actual tables. When you use transactions, you add another layer to that. TableUpdate() copies the changes from the buffers to another set of buffers (call them "transaction buffers") and END TRANSACTION copies from those buffers to the actual tables.

When you issue ROLLBACK, the changes are cleared from the transaction buffers, but the original buffers still contain the changed data. The TableRevert() calls discard those changes. In a real application, you probably wouldn't just give up and revert your changes like that, but would try to solve whatever problems prevented the save, and try again.
hth
MK
 
A good addition.

To state the obvious (or perhaps not so obvious):
When you issue ROLLBACK, the changes are cleared from the transaction buffers, but the original buffers still contain the changed data. The TableRevert() calls discard those changes. In a real application, you probably wouldn't just give up and revert your changes like that, but would try to solve whatever problems prevented the save, and try again.
This doesn't contradict what Dan said (that's what's perhaps not so obvious). After the rollback you are at the state the buffers had before trying to save. The additional TABLEREVERT() calls do revert the edits, and as the Hacker's Guide says, that is eventually something you wouldn't do; you would rather get the error/conflict that caused the TABLEUPDATE() to fail and then let the user choose to revert or force his changes. The typical case is that another user has already changed the records in play. You can then refresh the buffer from the table, without reverting all current changes, by patching the single fields with CURVAL() and SETFLDSTATE() to the current state. They will then no longer conflict. This is more complicated, which is why you mostly only find such code handling the most general case (as the comment says: "This is only a sketch of the real code, which would probably include various error checking and handling"). What to do in detail when handling a .F. from TABLEUPDATE() depends very much on the business case or rules.
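A hedged sketch of that CURVAL()/SETFLDSTATE() patching, for a single field of the current record (alias and field name are illustrative):

```foxpro
* Another user has already saved a change to Mytable.cSomeField.
* Accept their value without reverting our other pending edits:
SELECT Mytable
REPLACE cSomeField WITH CURVAL("cSomeField")  && take the current network value
SETFLDSTATE("cSomeField", 1)                  && 1 = mark the field unmodified
* A following TABLEUPDATE() no longer sees a conflict on cSomeField.
```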


One more important thing: this code from the Hacker's Guide shows how to wrap the update of DBFs in a VFP transaction. If you do the same for MS SQL Server or any remote backend, you'd also start a manual transaction in that remote database, on top of the VFP transaction. The VFP ROLLBACK still ensures you get the DBF/cursor buffers back to the state they had before the save operation; the remote transaction ensures you revert the backend changes. This is included in the VFP ROLLBACK in the case of DBFs, but a VFP ROLLBACK does not roll back remote backend changes. So if you don't start a remote and a local transaction in sync, you can revert DBF/cursor changes and still have the remote backend changed as far as the TABLEUPDATE() worked OK, especially in the mode where you apply all changes of a table-buffered workarea.

The VFP help topic "Managing Updates Using Transactions" has a section about "Protecting Remote Updates", which suggests using the manual transaction mode of the connection (via SQLSETPROP() of the "Transactions" setting) to let TABLEUPDATE() start a transaction in the remote backend, and then using SQLROLLBACK() or SQLCOMMIT() to finalize the remote transaction. You may also send "BEGIN TRANSACTION" and "COMMIT" or "ROLLBACK" to MSSQL via SQLEXEC(). In MSSQL's dialect T-SQL, END TRANSACTION is not a valid command; it's COMMIT. The advantage of SQLSETPROP() is that you don't need to know the SQL dialect of the remote database.
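A sketch of that manual remote transaction mode for a remote view (the view alias is illustrative):

```foxpro
* Switch the view's connection to manual transactions
lnHandle = CURSORGETPROP("ConnectHandle", "MyRemoteView")
SQLSETPROP(lnHandle, "Transactions", 2)   && 2 = manual transaction mode

IF TABLEUPDATE(.T., .F., "MyRemoteView")
   SQLCOMMIT(lnHandle)     && finalize the backend transaction
ELSE
   SQLROLLBACK(lnHandle)   && revert the backend changes
ENDIF
```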

Bye, Olaf.
 
Thank you all. I can see that the guidance is important if table buffering is being used.

At present the application is not using table buffering (although I am happy to change that). If I issue a TableUpdate() command, I get the message:

Function requires row or table buffering mode

There is a function CursorSetProp("Buffering", n, cAlias) which can switch on buffering in several ways for the table. Do I need to do that for each table immediately after opening it, or is there a way of switching on buffering for all the tables in a data session (or even the whole application)?

Thanks


 
Finally, let's talk about a form with Edit, Save and Undo Buttons. And how to combine them with Table buffering and transactions:

1. Edit:
When you start an edit you don't start a transaction. The first thing most developers do here is nothing, but indeed you should now refresh all cursors with the latest data and have a clean, empty buffer with no changes yet. You do so by either just starting the buffering here, or by reverting all eventual changes with TABLEREVERT(), REQUERYing views or cursoradapters, and REFRESHing the form. This is often the first surprise; normally you assume there is nothing to do at this stage, but data might already have changed since the form loaded. SET REFRESH also keeps the form supplied with the latest data, but only in BROWSE windows, so don't rely on that mechanism.
As I said, you can start buffering here. But in the general case you can keep a form open after Save or Undo and start another edit session, and then the tables can already be in buffering mode. So I'd start buffering when opening the tables or loading data into cursors; then the first edit session doesn't differ from later ones, and you can do the TABLEREVERT() just to cover any eventual programmatic changes. If you are sure those programmatic changes need to persist, you can keep them or even TABLEUPDATE() and store them, e.g. to save the primary keys of new records needed for this edit session. So either your edit session starts with data already changed from its original state, or you have that saved and start a new session right here.

2. Save:
Here you start a VFP and a remote transaction, maybe nested, but that depends on your business needs. If you want to save all changes you made in the edit session or no changes at all, one overall transaction is enough. If you update multiple tables, all referential integrity checks are still made right away; neither the DBC nor a remote backend waits with integrity checks until the end of the transaction. To say it clearly: you have to store data hierarchically, parent records first, then child records, to have the right and existing foreign keys, even though all of this is still in the transaction buffer.
If all table updates succeed, the whole save was a success and you can commit the remote and local transactions. If not, it depends on your business rules what exactly to do; reverting all changes is obviously a bad move if a user worked on the data for an hour. In the worst case the remote backend has become unavailable, and you might copy the workarea data into local DBFs to continue the edit session later.

3. Undo:
Here you can do what I suggested for Edit mode, too: TABLEREVERT() all changes and refresh your data from the DBF or remote backend, so you see the current state of the data again. The tablereverts are typically more important here, since you expect there to be changes to undo. You can always easily detect whether a workarea has any change by GO TOP and GETNEXTMODIFIED(0); see the help on that. With 0 you signal that you want to find the first modified record overall. If there is none, you get 0 as the return value; if you get a non-zero value, it is the record number of the modified record. The help also states why you have to GO TOP first. So besides setting a logical variable in every interactive change event of every field, you can detect whether the Undo button needs to be enabled by checking this. The Edit button can switch from enabled to disabled when it's used, so that state also shows whether you're in an edit session or just in read mode. Besides this enabling logic, you see that Undo and Edit might even share the same code, or call a general "RefreshCurrentData" routine.
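The GO TOP plus GETNEXTMODIFIED(0) check can be wrapped in a small helper like this sketch (the function name is made up):

```foxpro
* Returns .T. if the buffered workarea has any pending change.
FUNCTION HasPendingEdits(tcAlias)
   LOCAL lnRecNo, llDirty
   SELECT (tcAlias)
   lnRecNo = RECNO()
   GO TOP                                && required before GETNEXTMODIFIED(0)
   llDirty = (GETNEXTMODIFIED(0) <> 0)   && 0 means no modified record found
   IF BETWEEN(lnRecNo, 1, RECCOUNT())
      GO lnRecNo                         && restore the record pointer
   ENDIF
   RETURN llDirty
ENDFUNC
```

The Undo button's Enabled property could then be refreshed from HasPendingEdits("Mytable").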

Especially if your form also involves navigation via navbar buttons or a list of head records you choose and move to, you can have repeated edit sessions. You might want to prevent navigation within an edit session, or you might allow changes to several parent and child records in the same edit session. Everything is possible; if all workareas are in table buffering mode, you keep all changes local and decide to save transactionally when the user wants to.

Last but not least, I already mentioned lengthy edit sessions. You can judge whether they are probable or not from the complexity of the form. If things are complicated enough, you can use the strategy of storing workarea data in local DBFs to have an autosave every 15 minutes or so, like Word has, and in case of a crash or remote backend failure you can offer to restart from that autosaved state.

In very short:
0. open all tables/read all remote data into workareas with table buffering

1. Start Edit mode: revert (or TABLEUPDATE()) any buffered changes. Then refresh data to the most current DBF/backend state, and disable this button.
1.1. While in edit mode, maybe save to local DBFs for failover (autosave).
2. Save: start transaction, save and commit or revert, maybe save to local DBFs as failover, enable edit button.
3. Undo: revert changes, enable edit button. As you don't start a transaction here, you don't rollback here.

The important thing is, you only start a transaction when saving, so a rollback is also only done there, for the case where the save failed. The Undo only reverts, and a revert merely reverts the buffered changes; nothing of that has been seen by the database or any other client.
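In code, the Save step sketched above could look like this (aliases are illustrative, error handling omitted):

```foxpro
* Save button: the transaction wraps the commit, not the edit session
BEGIN TRANSACTION
IF TABLEUPDATE(.T., .F., "Parent") AND TABLEUPDATE(.T., .F., "Child")
   END TRANSACTION   && all buffered changes are now in the DBFs
ELSE
   ROLLBACK          && DBFs untouched; the edits remain in the buffers
ENDIF
```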

Bye, Olaf.
 
To answer your latest question:
The form sets a default for buffering in its BufferMode property. Also, you can use CURSORSETPROP() for workarea number 0, which makes that the default for every table opened in the current data session after the setting is made.
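Both routes sketched, assuming optimistic table buffering is wanted:

```foxpro
* Route 1: per form - set the form's BufferMode property
*   ThisForm.BufferMode = 2        && 2 = optimistic

* Route 2: per data session - workarea 0 sets the default for every
* table opened afterwards in this data session
SET MULTILOCKS ON                  && required for table buffering
CURSORSETPROP("Buffering", 5, 0)   && 5 = optimistic table buffering
USE Mytable IN 0                   && opens already buffered
```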

Bye, Olaf.
 
>2. Save: start transaction, save and commit or revert
should be
2. Save: start transaction, save and commit or rollback

Here we really talk of the transaction, not of Tablerevert().
 
Hi,

Furthermore the settings of BUFFERING are scoped to the DATASESSION - hence you may have different settings if you use PRIVATE DATASESSIONs.

And to complete the picture, again from Hacker's Guide ...

...
Built-in buffering is one of many cool features of Visual FoxPro. Now, you can write code that appears to edit fields directly, but really works on copies of the fields. When you turn buffering on (either through form properties or with CursorSetProp()), FoxPro maintains several buffers containing the data in its different states. No more SCATTER MEMVAR or GATHER MEMVAR.

Your code looks like it addresses the fields, but in fact it's really talking to one of these buffers. When the user makes up his mind whether to save or discard his changes, you can either commit the changes to the real table or throw away the buffered changes. One function call (either TableUpdate() or TableRevert()) does the trick in either case.

FoxPro not only gives you a copy of the data to work on, but it keeps a copy of the original data and the current status of the data, in case someone else changes it while you're working. No more making an extra copy to compare with the network.

Visual FoxPro has two buffering modes, each of which can be used in two different ways. The two modes are optimistic and pessimistic, referring to the locking scheme used. With the pessimistic approach, a record is locked as soon as you make any change. That way, no one else can change it until you release the lock by committing or reverting the buffer. With optimistic locking, the record isn't locked until you attempt to commit the changes. You run the risk that someone else will make changes at the same time, but records are kept out of circulation for the shortest possible time. There's no need for a religious war over buffering modes because each has its place. Use pessimistic locking for sensitive changes that must go through. Use optimistic locking for everything else. (Views always use optimistic buffering.)

The other buffering choice is whether to buffer individual records or entire tables. Again, this isn't all-or-nothing. You can use row buffering for some tables and table buffering for others. If you don't specify otherwise, Visual FoxPro uses table buffering when you use a grid for a table, and row buffering for all other tables.

With a buffered table, you can examine the current value of a field, the value it had when you started working with it (OldVal()—actually, the last time you committed it) and the value it has now on the network (CurVal()). You can also get information (GetFldState()) about the status of any field: Have you changed it? Have you deleted the record it belongs to? Is this a new record? And so forth. You can also find all the records that have changed using GetNextModified().

Using the various functions that control all this, you can write code that makes intelligent choices when conflicts arise, and only bothers the user if it doesn't know what to do.

In addition to buffering, Visual FoxPro lets you wrap updates (both local and remote) in transactions. With a transaction, everything you do is tentative. If one part of an update can't be completed, you can roll the whole thing back. No more worrying about adding the invoice header without the details. Wrap it in a transaction, and you save header and details or nothing at all.

hth
MK
 
One thing about pessimistic buffering: as the Hacker's Guide says, this is rather about the locking than about the buffering behaviour. If you use pessimistic modes, this can only lock records in DBFs; remote backends often don't offer row locking, just table locks or page locks, and those are maintained by the RDBMS.

It's therefore a good choice to use optimistic table buffering, meaning a multi-row buffer and no locks, as that works with any backend. No locks means concurrent changes are possible. If users change different fields of the same record, you don't get any update conflict even without any lock, provided you make the conflict detection within changed fields only. That's the fine thing: one user might see an error in the last name, the other in the first name, and both can change them at the same time without any conflict.
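For views, that field-level conflict detection can be requested explicitly via the WhereType property; a sketch (the view alias is illustrative):

```foxpro
* Compare only the key plus the fields this user actually changed
SELECT MyView
CURSORSETPROP("WhereType", 3)   && 3 = key and modified fields

* User A edits LastName, user B edits FirstName of the same record;
* with this setting both TABLEUPDATE() calls succeed without conflict.
IF NOT TABLEUPDATE(.T., .F., "MyView")
   AERROR(laError)              && inspect the update conflict details
   MESSAGEBOX(laError[1, 2])
ENDIF
```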

Bye, Olaf.
 