Indeed, opportunistic locking should rather be called greedy caching; only single users profit from it. It has the nature you describe.
Simplified description: if a user on client A holds the only file handle on a file, he gets an opportunistic lock, which grants exclusive access under the condition that the lock can be broken by any second user who also needs access. So it's not a real lock; it will break when needed. Until that moment, all data is cached on the client computer of the opportunistically locking user. When the lock is finally broken by a request from client B, the file server is holding an old version of the file and has to ask client A for the changes. The server can't know which portions of the file have changed, so it becomes a relay to client A until all changed blocks are committed and the real file server is up to date again. That does not work perfectly. You can imagine this needs a very stable LAN, and in a star topology the path from a second client B to the file becomes twice as long, as it goes from client B to the server and on to client A. The file server is still responsible for the file, but when it gets a read or write request from a second client B, it depends on client A being responsive to the request to end the lock. If the client is unresponsive, that alone can take considerable time before the second client gets the first byte of the requested file. On top of that, switches and firewalls are not tuned for client A acting as a file server. All the bad properties of a peer-to-peer LAN are in effect now.
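To make that sequence concrete, here is a minimal sketch in plain Python (purely illustrative, not actual SMB code; all class and method names are made up) of the grant / break / flush relay described above:

```python
# Illustrative simulation of the oplock break sequence described above.
# This is NOT SMB code; the names are hypothetical and only model the flow:
# grant oplock -> client A caches -> client B opens -> break -> flush -> serve B.

class FileServer:
    def __init__(self, content: bytes):
        self.content = content          # the server's (possibly stale) copy
        self.oplock_holder = None       # client currently holding the oplock

    def open_file(self, client):
        if self.oplock_holder is None:
            # First handle on the file: grant an exclusive opportunistic lock.
            self.oplock_holder = client
            client.oplock_granted(self.content)
            return self.content
        if self.oplock_holder is not client:
            # A second client wants access: the server must break the oplock
            # and wait for client A to flush its cached changes first.
            dirty_blocks = self.oplock_holder.break_oplock()
            for offset, data in dirty_blocks:        # server acts as a relay
                self.content = (self.content[:offset] + data
                                + self.content[offset + len(data):])
            self.oplock_holder = None
        return self.content                          # now up to date for client B


class Client:
    def __init__(self, name: str):
        self.name = name
        self.cache = None
        self.dirty = []                  # locally cached, unflushed writes

    def oplock_granted(self, content: bytes):
        self.cache = content             # everything is cached locally from now on

    def write(self, offset: int, data: bytes):
        # With an oplock held, writes stay in the local cache; no network traffic.
        self.dirty.append((offset, data))

    def break_oplock(self):
        # Server asked to end the lock: hand over all cached changes.
        flushed, self.dirty = self.dirty, []
        return flushed


server = FileServer(b"0123456789")
a, b = Client("A"), Client("B")
server.open_file(a)                      # A gets the oplock and caches the file
a.write(0, b"XX")                        # cached on A only; the server is stale
print(server.open_file(b))               # break forces A to flush: b"XX23456789"
```

The point of the sketch is only the ordering: client B's open cannot be served until client A has answered the break and flushed its cached blocks through the server.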
Even if you have write-through caching turned on, client A doesn't write to the DBF file; all changes are cached, even those of END TRANSACTION, INSERT-SQL, UPDATE-SQL, or TABLEUPDATE(). The load on the network is zero as long as client A is the only one accessing the file. That also makes it quite a risky type of caching: if the network has an outage, data integrity is not ensured. It can pay off for caching documents, where you seldom expect a second user. I'm still waiting for the day we can configure it per file type.
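As a minimal sketch of why that is risky (again plain Python with made-up names, not VFP or SMB code): from the application's point of view the update is committed, but while the oplock is held the bytes only live in client A's cache, so an outage before the lock break loses them and the server keeps the old file state:

```python
# Illustrative only: a "committed" change that never left client A's cache.

server_copy = b"old record data"         # what is physically on the file server
client_cache = bytearray(server_copy)    # client A's oplock-backed local cache
dirty = []                               # unflushed changes held on client A

def committed_update(offset: int, data: bytes):
    """The app 'commits' (think END TRANSACTION / TABLEUPDATE), bytes stay cached."""
    client_cache[offset:offset + len(data)] = data
    dirty.append((offset, data))

committed_update(0, b"new")              # looks committed from client A's side
print("network traffic so far: none")    # zero load while A is the only user

# Simulate client A crashing or the LAN dropping before any oplock break:
client_cache, dirty = None, []
print(server_copy)                       # still b"old record data" - update lost
```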
Many things can go wrong if the file server and client A get out of sync and each has a different idea about the lock status. The file server might think it is up to date and serve the old file state. In that situation it's fast, but wrong!
The big difference in your setup, Dennis, seems to be that installations on a terminal server have EXE and DBFs on that machine only, and users have sessions on it; that's fine with enough RAM. That always takes the LAN component out of the picture, along with the SMB file protocol handling the opportunistic lock between EXE and DBFs. The only LAN access is from the clients to the terminal server, and that doesn't involve file protocols; it's about transferring graphics, or the GDI+ or Direct2D commands to redraw them on the client the same way they were drawn on the server side. That involves a whole different set of protocols, like RDP, which are not part of this caching/locking scheme at all.
Bye, Olaf.