I am being sucked into the murky world of concurrency and locking for a system change.
I have nobody to discuss ideas with so hopefully you guys can help.
I want to confirm my understanding of the top level strategies for dealing with data that can be updated
by multiple users at the same time using the same application. Let's say it is a customer master file maintenance program that maintains customer data, and two different users could try to update the same customer at the same time.
I believe that to handle the problems of more than one user trying to update the same data rows at the same time
I have two basic strategies to choose from.
1) When 'User A' selects a customer in a transaction and tries to update it, a pessimistic locking transaction
will lock out 'User B' from reading/updating the selected rows until 'User A' finishes the update.
If the selected rows are already locked by another user with a pessimistic lock, 'User A' waits until that lock is released before the select succeeds.
2) When 'User A' selects a customer in a transaction and tries to update it, an optimistic locking transaction
will not lock out 'User B' from reading/updating the selected rows until 'User A' finishes the update.
The application must therefore check for concurrency errors after 'User A' attempts to apply the update, so that
if the selected row has been updated by 'User B' before 'User A' finishes the update, the application screams
'the original data has changed. The application has reloaded the data and you need to try the update again.'
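The optimistic check in 2) is often coded with a version (or timestamp) column: the update only succeeds if the version is still the one you read. Here is a minimal sketch in Python with SQLite; the table, column names and customer data are made up for the example.

```python
import sqlite3

# Sketch of optimistic concurrency using a version column (names illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO customer VALUES (1, 'Acme Ltd', 1)")
conn.commit()

def update_name(conn, cust_id, new_name, version_read):
    """Apply the update only if the row still has the version we read.
    Returns True on success, False when another user got there first."""
    cur = conn.execute(
        "UPDATE customer SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, cust_id, version_read))
    conn.commit()
    return cur.rowcount == 1   # 0 rows touched means the version moved on

# User A and User B both read the customer at version 1.
version_seen = conn.execute("SELECT version FROM customer WHERE id = 1").fetchone()[0]

ok_b = update_name(conn, 1, "Acme Pty Ltd", version_seen)  # User B gets in first
ok_a = update_name(conn, 1, "Acme Limited", version_seen)  # User A's update is rejected
print(ok_b, ok_a)  # True False
```

When `update_name` returns False, that is the point where the application reloads the row and asks the user to try again.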
I recognise there could be other applications updating the data at the same time with pessimistic or optimistic locking, and within
each of these two top-level categories there are variations (the isolation levels Read Committed, Repeatable Read, Serializable, Read Uncommitted
and Snapshot), but in terms of coding the application you have to choose the Pessimistic or Optimistic approach.
In the case of a disconnected dataset of customers I would be forced into using optimistic concurrency, because once the data is read into the dataset and the connection is closed, any lock on the database is lost.
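The disconnected case can also be sketched without a version column, in the style ADO.NET's DataSet adapters use: remember the column values as originally read, then write back only WHERE every original value still matches. This is a minimal illustration, not the real DataSet machinery, and the table, columns and data are invented.

```python
import os
import sqlite3
import tempfile

# Sketch of disconnected optimistic updates: compare-against-original-values.
path = os.path.join(tempfile.mkdtemp(), "customers.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Acme Ltd', '555-0100')")
conn.commit()
conn.close()

def read_disconnected(db_path, cust_id):
    """Read a customer, then drop the connection (and with it any lock)."""
    conn = sqlite3.connect(db_path)
    row = conn.execute("SELECT id, name, phone FROM customer WHERE id = ?",
                       (cust_id,)).fetchone()
    conn.close()
    return {"id": row[0], "name": row[1], "phone": row[2]}

def write_back(db_path, original, changed):
    """Update only if the row still holds the values originally read."""
    conn = sqlite3.connect(db_path)
    cur = conn.execute(
        "UPDATE customer SET name = ?, phone = ? "
        "WHERE id = ? AND name = ? AND phone = ?",
        (changed["name"], changed["phone"],
         original["id"], original["name"], original["phone"]))
    conn.commit()
    conn.close()
    return cur.rowcount == 1   # False -> 'the original data has changed'

# User A and User B both read the customer, then disconnect.
a_orig = read_disconnected(path, 1)
b_orig = read_disconnected(path, 1)

# User B writes back first, so User A's original values are now stale.
ok_b = write_back(path, b_orig, {"name": "Acme Pty Ltd", "phone": "555-0100"})
ok_a = write_back(path, a_orig, {"name": "Acme Ltd", "phone": "555-0199"})
print(ok_b, ok_a)  # True False
```

Either flavour (version column or compare-all-original-values) works; the point is the same — the conflict is detected at write time, not prevented at read time.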
Have I got it right?
Dazed and confused.
Remember.. 'Depression is just anger without enthusiasm'.