Server - Client Side

Status
Not open for further replies.

skoko (Programmer) · Feb 11, 2002 · 188 posts · US
The situation is like this.

Users work with a local application and a local server database, so we have five copies of the same database (on five servers).

A standalone application runs on a sixth server with a 1-second timer that checks a local table for new updates. When a user makes a change, the application saves it and sends the standalone server a note of which record changed (timestamp, table, server path, and record number). The server then replicates that record to the other servers. It works perfectly: the client side is real time, and the server side takes about 2 seconds to replicate a record to the other servers.

The bad part is adding records, because I'm using RECNO() to locate the record.

Let's say user X adds a new record at XX:XX:XX and sends this information to the server. The record number is 17736.
A second later, another user adds a new record on a different server, and its record number is also 17736. Now we have a situation: the same record number, but two different records.

I hope somebody understands me :)))))
 
Hi Skoko,
For your approach, going by record number is not the right way. You should generate an ID based on the client servers (the five client-side servers) and use that as the record locator. That could be:
Server 1:
"01"+ALLT(STR(RECNO()))
Server 2:
"02"+ALLT(STR(RECNO()))

and so on. For corrections, you SEEK the record by that locator and update it.

:) ramani :)
(Subramanian.G),FoxAcc, ramani_g@yahoo.com
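 
Ramani's scheme can be sketched in a few lines. A minimal Python illustration of the idea (the thread itself is VFP; `make_locator` is a hypothetical helper name, not from the thread):

```python
# Sketch of ramani's suggestion: each of the five client servers prepends
# its own two-digit code to the record number, so the same RECNO() on two
# different servers still yields distinct locators.

def make_locator(server_code: str, recno: int) -> str:
    """Rough equivalent of "01"+ALLT(STR(RECNO())) in VFP."""
    return server_code + str(recno)

# The colliding case from the thread: record 17736 added on two servers.
assert make_locator("01", 17736) != make_locator("02", 17736)
```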
 
Yep, but implementing SEEK is going to slow things down.
 
Ramani, even that won't work, because after a DELETE and PACK you can end up with the same RECNO() again.

Maybe something like:
DATETIME()+server+SECONDS()
 
I'm accessing the table directly:

USE \\client_I\table.dbf
GO XXX
SCATTER NAME some

USE \\client_II\table.dbf
GO XXX
GATHER NAME some

...
 
To expand on what Ramani said (because I think a few misunderstood): when a new record is entered on any of the 5 "clients", a field on the record (called "ID" or "PersonID" or "OrderID" or whatever is appropriate) should be filled with a string made up of the "client" number ("01", "02", "03", "04" or "05") and STR() of the RECNO().

Now, packing a client after some records get deleted COULD result in duplicates from that client. So instead of each client using RECNO(), the program session on each client should start by finding the highest number that client has used, and increment from there for each new record.

The other way to do it (if the local apps use file access to the local servers, instead of a single program on each local server doing all the data access) is to maintain a NextID table on the local servers. These tables only guarantee a new unique ID on the local server, but you then prepend the server number (as Ramani suggested) to make it unique among ALL the servers. There are many examples around of a PROCEDURE NextID, including all the multi-user locking needed to avoid contention and make sure each local app gets a unique number.
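
The NextID pattern above can be sketched outside VFP too. In the VFP versions being described, the counter lives in a one-row NextID table and record locking (RLOCK) serializes the increment; in this hedged Python sketch a `threading.Lock` stands in for that, and `NextID`/`next_id` are illustrative names, not from the thread:

```python
# Sketch of a per-server NextID counter with the server code prepended.
import threading

class NextID:
    def __init__(self, server_code: str, start: int = 0):
        self.server_code = server_code  # "01" .. "05", as Ramani suggested
        self._value = start
        self._lock = threading.Lock()   # stand-in for RLOCK() on the table

    def next_id(self) -> str:
        # Only one caller at a time may read-increment-write the counter,
        # so concurrent callers always receive distinct IDs.
        with self._lock:
            self._value += 1
            return self.server_code + str(self._value)
```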

BTW: Skoko, you have my sympathy... it sounds like a challenging system! ;)
 
fdinel, the new record is flushed immediately, but the server needs some time to replicate, and we have 200 users, so we can't depend on that.

wgcs, I created the standalone application to do the replication for a reason: client-side speed, because the tables are very big.

Now I think it is very easy to build a unique ID without any searching to check it. Like I said: TTOC(DATETIME(),1)+STR(SECONDS(),10,4) or something like that.

My question is: can we do that without SEEK, LOCATE, or any other kind of searching? Maybe Winsock can help.

BTW, yes, I'm very excited about the project, because everything is made in VFP and it has every chance of being real time for all 5 cities.
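
The timestamp-style ID above can be sketched as follows. In VFP, TTOC(DATETIME(), 1) yields a YYYYMMDDHHMMSS string; this Python sketch mimics that (`timestamp_id` is an illustrative name, not from the thread). Note a server code is still needed, because two servers can add records within the same second:

```python
# Sketch of a timestamp-based ID with the originating server's code appended.
from datetime import datetime
from typing import Optional

def timestamp_id(server_code: str, now: Optional[datetime] = None) -> str:
    # YYYYMMDDHHMMSS stamp (like TTOC(DATETIME(),1)) plus the server code.
    now = now or datetime.now()
    return now.strftime("%Y%m%d%H%M%S") + server_code
```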
 
I don't understand why you don't want to use SEEK. Why do you think it's slow? If you have an index on that field, a SEEK is nearly instantaneous. It's only noticeably slower than GOTO if you're adding thousands of records per second, and with only 200 users I doubt you're coming anywhere near that.

Ian
 
With GOTO, replication is done in 2.5 seconds; with SEEK, 43 seconds.
 
Then your SEEK is definitely not optimized. As far as I know, SEEK, used properly, is the fastest way to locate a record in a DBF table: with the right index, the binary search will take you to your record in far less than 43 seconds. That is my real-world experience, even with heterogeneous tables. Can you explain your client-server architecture in more detail? What is the communication medium among all these systems: LAN, WAN, Internet, etc.?

Why do all the updates have to be done locally and then synced later? Did you choose this design on purpose, for some reason?

Walid Magd
Engwam@Hotmail.com
 
SEEK is fast, you're right there, but: if I open the table without the index, opening takes 2.5 seconds; with the index key it takes 45 seconds. Sure, after that each search takes under a second, but we still have the 45-second opening time.
 
Wait...OPENING the table takes 45 seconds with an index, 2.5 seconds without? That sounds really odd.

The only thing I know of that causes that is if you have a lot of deleted records, and SET DELETED ON. When the table is opened, the system will search for the first undeleted record, halting your program until it finds one. If this is the case, try SET DELETED OFF. That should fix you. Opening a table should have no measurable delay at all.

Ian
 
Even with SET DELETED OFF it takes too much time.

P.S. I'm talking about USE on a remote network drive (100 Mbps).
 
So it is the USE command that is taking so long?

Are you using the table directly with a mapped network drive? Or are you using a Remote View?

Are you opening the table, writing the information, and closing the table again afterward? If so, why? You would eliminate the overhead if you just opened the table when the app starts and left it open until the app is shut down.

Ian
 
On another note, here is a suggestion that stays within your original line of thought.

To fix the problem of multiple users submitting the same record number at the same time, have the master server (#6) keep track of the true record number. Whenever it receives a new record, it appends it and writes in its ACTUAL record number. So, even though 2 different servers submitted 17736, the second one would actually be numbered 17737. It then propagates the changes to the other servers, and the inconsistency is corrected within 2 seconds.

Would that work?

Ian
 
Hmm, if two servers submitted 17736, that means both servers have already appended record 17736 locally. How would the second one become 17737?
 
Both servers would submit 17736 to the master server. The master server would receive them, renumber them according to their REAL record numbers (so the second one would become 17737) and then broadcast to the rest of the servers the new records. The sub-servers would then all overwrite 17736 and 17737 with the correct information.

Ian
 
Cool, that would be ideal, but how do I renumber?
 
Your master server, upon receiving the new records, should insert them into the master table, then REPLACE nRecno WITH RECNO().

You would need an actual field for the record number to be able to do this. In my example above, "nRecno" is the record number field.

Ian
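 
The renumbering step Ian describes can be sketched like this (a Python illustration; `master_append` and the list standing in for the master table are hypothetical names, and the real VFP step is REPLACE nRecno WITH RECNO()):

```python
# Sketch of the master renumbering: append each incoming record, then
# store the position it actually landed in as its true record number.

def master_append(master_table: list, record: dict) -> dict:
    master_table.append(record)
    record["nRecno"] = len(master_table)  # the record's REAL position
    return record

# Two servers both submit what they think is record 17736; the master
# assigns consecutive numbers instead.
table = [{}] * 17735  # pretend the master already holds 17735 records
r1 = master_append(table, {"src": "server1"})
r2 = master_append(table, {"src": "server2"})
assert (r1["nRecno"], r2["nRecno"]) == (17736, 17737)
```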
 