Sir Arthur Conan Doyle, The Naval Treaty: The Memoirs of Sherlock Holmes
What we are dealing with here is protection against conflicting updates that might break the business rules and corrupt the data. Of course, this is not a new problem, and it has often been faced in computing before. Clearly even Shakespeare faced this problem, and was awakened by nightmares of inappropriate data-locking strategies.
Some data changes are all-or-nothing replacement changes: I look at a customer record and I change the surname. Some are not: I look at a balance figure on an account record and change it with a financial transaction. This time the change is incremental; it uses the existing data to help make the change. If the existing data is the wrong data, the resulting change will be wrong. So if another ordering clerk committed an order for that customer faster than I did, my transaction cannot be allowed to complete.
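To see why the incremental case matters, here is a minimal sketch in Python (the account record is just a hypothetical in-memory dictionary, standing in for a row in the database). Two clerks both read the balance before either writes back, and the second write silently destroys the first: the classic lost update.

```python
account = {"balance": 100}

# Clerk A and clerk B both read the balance before either has written back.
read_by_a = account["balance"]        # 100
read_by_b = account["balance"]        # 100

# Each clerk applies an incremental change based on the stale value they read.
account["balance"] = read_by_a + 50   # clerk A commits: balance is now 150
account["balance"] = read_by_b + 30   # clerk B commits: balance is now 130

print(account["balance"])             # 130 -- clerk A's order has been lost
```

With no protection at all, the outcome depends purely on who writes last, which is exactly the situation both of the strategies below are trying to avoid.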
A pessimistic scenario
Either I or the other user should be stopped from starting a transaction while the other one is (potentially) already in a transaction. Good: only one of us is working on the data at a time. Bad: only one of us can work on the data at a time. Imagine a system with three major customer accounts. Each customer has many different people who can phone in orders at the same time, and we have many ordering clerks to allow them to do this. Now imagine ten orders for one of our large corporate customers being taken concurrently by ten order clerks. The first one in can lock the data; the other nine must wait. The result? Frustration.
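As a rough in-process analogue of this (a real system would lock the row in the database, not a Python object), here is a sketch using a single lock per customer: the ten clerks all arrive at the right answer, but only because they queue up one behind another.

```python
import threading
import time

# One lock per customer account: whoever holds it may read and update the data.
customer_lock = threading.Lock()
customer = {"balance": 100}

def take_order(amount):
    with customer_lock:                    # the other nine clerks block right here
        balance = customer["balance"]      # safe read: nobody else can change it now
        time.sleep(0.01)                   # stand-in for the time an order takes
        customer["balance"] = balance + amount

clerks = [threading.Thread(target=take_order, args=(10,)) for _ in range(10)]
for clerk in clerks:
    clerk.start()
for clerk in clerks:
    clerk.join()

print(customer["balance"])  # always 200, but the ten orders ran strictly one at a time
```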
An optimistic scenario
Let's take the same task: ten orders are being taken concurrently. For each order, we read the data and get some sort of indicator of where we are in the sequence. All our order clerks go for it: hell-for-leather Online Transaction Processing (OLTP). The first process to complete an order checks to see if the sequence token it is holding allows it to commit the changes. The token says "Yes, you are in the correct sequence to change the data," so the transaction commits. The second order process checks its sequence token and finds that the data has changed. The second process then has to check the data again, to see whether it would break the business rules that apply now that the server data has changed. If no rules are broken, the process commits the order. If committing the order would break the rules, the process has to make a decision about which value should end up in the database (the value that is there now, the value that was there before the first order was committed, or the value after this second order is committed). Otherwise, it has to refer the decision back to the second ordering clerk. The choices include: making the change anyway; rolling back the change it was going to make and starting all over again (leaving the data as it was after the first transaction); or even forcing the first transaction to roll back too, so that the data is as it was before any ordering clerks got to work on it.
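One common way to implement the sequence token is a version (or timestamp) column that every successful update increments; the commit is only allowed if the version is still the one the process read. A minimal sketch, using Python's built-in sqlite3 module and a hypothetical account table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100, 0)")
conn.commit()

def read_account():
    # Read the data plus its sequence token (here, a simple version counter).
    return conn.execute("SELECT balance, version FROM account WHERE id = 1").fetchone()

def try_commit(new_balance, token):
    # The UPDATE succeeds only if nobody has bumped the version since we read it.
    cur = conn.execute(
        "UPDATE account SET balance = ?, version = version + 1 "
        "WHERE id = 1 AND version = ?",
        (new_balance, token),
    )
    conn.commit()
    return cur.rowcount == 1   # False means: stale token, re-read and re-check the rules

# Two order processes read the same state...
balance_a, token_a = read_account()
balance_b, token_b = read_account()

print(try_commit(balance_a + 50, token_a))  # True  -- the first order commits
print(try_commit(balance_b + 30, token_b))  # False -- the second must re-read and decide
```

When try_commit returns False, the process is at exactly the decision point described above: re-apply the change against the fresh data, abandon it, or escalate the conflict back to the clerk.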
Optimistic or pessimistic?
What sort of data is it? Some fields, or even whole records, might be the sort of data to which no business rules apply. In that case the last update always defines what data is now in the database. It might be that currently there are no rules, in which case there can be no incorrect sequence. It may also be that the data is not volatile, or is only rarely worked on concurrently. Usually, if you must lock, you should also strive to minimize the time you hold the lock, and avoid locking across user interactions if you can.
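As an illustration of that last point (a sketch only, reusing the in-process lock from the pessimistic example, with a hypothetical confirm_with_caller standing in for the user interaction), the difference is simply where the user interaction sits relative to the lock:

```python
import threading
import time

customer_lock = threading.Lock()
customer = {"balance": 100}

def confirm_with_caller():
    time.sleep(1)   # hypothetical stand-in for a slow user interaction

def take_order_badly(amount):
    with customer_lock:                        # lock acquired before talking to the user...
        balance = customer["balance"]
        confirm_with_caller()                  # ...so it is held for the whole phone call
        customer["balance"] = balance + amount

def take_order_better(amount):
    confirm_with_caller()                      # talk to the user first, with no lock held
    with customer_lock:                        # then lock only around the actual update
        customer["balance"] = customer["balance"] + amount
```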