Replicating VFP data to SQL through triggers

SGLong (Programmer)

We want to start taking advantage of SSRS (SQL Server Reporting Services), but the bulk of our data is in VFP tables. I was thinking that if we created SQL counterparts to all of our tables, we could use VFP triggers to keep each SQL table synchronized with its VFP counterpart. This would allow us to start using SSRS right away and gradually repoint all of our VFP code away from the DBFs to the SQL tables. Many of the small tables have already been transferred over to SQL and are being accessed via remote views, but the really big tables will need to have all the programs touched to maintain efficiency.

Unfortunately the documentation on triggers in VFP is not very helpful. Can someone give me some code samples of how we would insert, update and delete a record in the SQL table from the DBF?

Thanks

Steve
 
First, I think it would make sense, if you reimplement your application, not to simply rebuild the same database structure in SQL. You may also want to change your primary key strategy from integer IDs to GUIDs and take advantage of SQL Server specifics.

That warning aside, you could of course start using SSRS with duplicated data; it's not the worst idea to start there.

Your VFP tables need to be part of a DBC database container to be able to have insert/update/delete triggers at all. If you have free tables so far, the first step would be to create a DBC and add the free tables with ADD TABLE.
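
Just as a sketch, assuming a database named mydata and a free table customers (both placeholder names), that step looks like this:
Code:
* Sketch only: create a new DBC and move an existing free table into it
CREATE DATABASE mydata    && also opens it and makes it the current database
ADD TABLE customers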

If you already have a DBC and have defined referential integrity rules, most tables will already have definitions for some triggers, depending on the rules applied.

If you open a table in the table designer you can see the assigned trigger calls on the Table tab in the Triggers section. (By the way, you also see that tab and section for free tables, but it's disabled and read-only.)

If trigger calls are already defined you can either modify the stored proc and call a subroutine from within the currently called trigger function, or simply make the new trigger expression [oldcall() .AND. yourtriggerfunction()].
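
For example, as a sketch (ExistingRICall stands for whatever call is already in the trigger expression, SQLUpdate for your own sync function):
Code:
* Combined trigger expression: the existing call runs first, then your function
ExistingRICall() .AND. SQLUpdate()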

You can add trigger calls with the CREATE TRIGGER command, e.g.
Code:
CREATE TRIGGER ON yourtable FOR INSERT AS SQLInsert()

No matter which trigger fires, the common assertions that can be made are: the current ALIAS() is the table being inserted into, updated or deleted from, and the current record is the inserted, updated or deleted record, with all the latest values already in its fields.

So you could really create a generic function for all tables to sync with SQL Server.
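
Purely as a sketch of that idea (gnConn is assumed to be a connection handle from SQLCONNECT(), and the SQL Server table is assumed to have the same name and column names as the DBF), a generic insert replication could look roughly like this:
Code:
* Sketch: generic insert replication built from the structure of the current alias.
* Assumes gnConn holds a valid SQL pass-through connection handle and the
* SQL Server table matches the DBF in name and column names.
FUNCTION SQLGenericInsert()
   LOCAL lcTable, lcFields, lcValues, lnI, lnCount
   LOCAL ARRAY laFields[1]
   lcTable = ALIAS()
   lnCount = AFIELDS(laFields)
   SCATTER MEMVAR MEMO           && field values become m.<fieldname> variables
   lcFields = ""
   lcValues = ""
   FOR lnI = 1 TO lnCount
      lcFields = lcFields + IIF(lnI > 1, ", ", "") + laFields[lnI, 1]
      lcValues = lcValues + IIF(lnI > 1, ", ", "") + "?m." + laFields[lnI, 1]
   ENDFOR
   * returning .F. (SQLEXEC failed) makes the originating VFP insert fail, too
   RETURN SQLEXEC(gnConn, "INSERT INTO " + lcTable + ;
      " (" + lcFields + ") VALUES (" + lcValues + ")") > 0
ENDFUNC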

What's not that easy is mapping some field types like GENERAL or BLOB fields, so it is nevertheless easier to create specific trigger functions for each table/operation combination.
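
Roughly, such specific functions could look like this for a hypothetical customers table with an id key column and two data columns (gnConn and all names are placeholders, not a finished implementation):
Code:
* Sketch: specific trigger functions for a hypothetical customers table,
* registered e.g. with CREATE TRIGGER ON customers FOR INSERT AS SQLInsert_Customers()
FUNCTION SQLInsert_Customers()
   SCATTER MEMVAR        && the trigger fires on the new record, take its values
   RETURN SQLEXEC(gnConn, "INSERT INTO customers (id, name, city) " + ;
      "VALUES (?m.id, ?m.name, ?m.city)") > 0
ENDFUNC

FUNCTION SQLUpdate_Customers()
   SCATTER MEMVAR        && the current record already holds the updated values
   RETURN SQLEXEC(gnConn, "UPDATE customers " + ;
      "SET name = ?m.name, city = ?m.city WHERE id = ?m.id") > 0
ENDFUNC

FUNCTION SQLDelete_Customers()
   SCATTER MEMVAR        && the current record is the one being deleted
   RETURN SQLEXEC(gnConn, "DELETE FROM customers WHERE id = ?m.id") > 0
ENDFUNC
Returning .F. when SQLEXEC() fails makes the originating VFP operation fail as well; whether you want that or would rather log the failure and return .T. is a design decision per table.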

Bye, Olaf.
 
Thanks, Olaf. The tables are already part of the DBC so that's not a problem. I'm concerned about two issues - implementation and performance.

Regarding implementation, if there are 15 DBFs to be replicated, will I be better off creating 45 stored procedures (an Insert, Delete & Update for each table), or should I just create 3 and parameter-drive them? Either way I believe I have to enumerate all of the fields in the SP.

Regarding performance, in several of our tables it's not unusual for one of our procedures to update 300+ records at a time. With native VFP tables and indexes that happens very quickly. Will I notice a performance hit by using triggers to update the SQL counterparts?

Thanks again,

Steve
 
I'd opt for the dedicated stored procs instead of 3 generic parameter-driven ones.

Just as with referential integrity, even a SQL UPDATE affecting several hundred records or an APPEND adding hundreds of records will fire the update/insert trigger once for each single record.

Yes, it's a performance hit. But centralized replication via triggers is surely the easiest approach, and unlike doing it in the application logic it doesn't overlook any case.

If it's not that important that changes to tables are replicated on the fly, you could also split the task. With the trigger procedures, just create a job log of TABLE, RECNO or ID, and OPERATION (Update, Insert or Delete), which is enough info to later find the record and its field values to replicate.
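
A minimal sketch of such a logging trigger function, assuming a local log table joblog.dbf with fields tablenam, recid and operation (all placeholder names):
Code:
* Sketch: don't replicate at once, only log which record changed and how.
* joblog.dbf (tablenam C(30), recid I, operation C(1)) is a placeholder design,
* registered e.g. as: CREATE TRIGGER ON customers FOR UPDATE AS LogChange("U")
FUNCTION LogChange(tcOperation)
   * tcOperation: "I" = insert, "U" = update, "D" = delete
   LOCAL lcAlias, lnRecno
   lcAlias = ALIAS()
   lnRecno = RECNO()
   INSERT INTO joblog (tablenam, recid, operation) ;
      VALUES (lcAlias, lnRecno, tcOperation)
   SELECT (lcAlias)      && keep the triggering table selected
   RETURN .T.            && never block the original VFP operation
ENDFUNC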


You can process this job log every 5 minutes or whatever suits you, instead of doing each operation at once. That way you can optimize: e.g. an insert plus an update becomes a single insert with the final values of the record, instead of two operations on the SQL Server. For each record finally being deleted it's sufficient to delete it, and if a record is both inserted and deleted you need to do nothing at all on the server.

As you can process the job log with a separate exe, it will perhaps mean less of a performance hit for your application, even if there isn't much optimization of successive changes to the same records.

Bye, Olaf.
 
Plus, you can group changes to several records of the same table into one round trip to SQL Server by using an updatable SPT cursor: fetch all the affected records from SQL, replay the needed inserts/updates/deletes on that cursor, and commit them with a single TABLEUPDATE(). That might be the best performance gain.
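
As a rough sketch of that idea (gnConn and all table/field names again are just placeholders):
Code:
* Sketch: fetch the affected rows into an updatable SPT cursor, apply all
* changes locally, then commit them in one batch with a single TABLEUPDATE().
SET MULTILOCKS ON
SQLEXEC(gnConn, "SELECT id, name, city FROM customers WHERE id IN (1,2,3)", "crsCust")

* make the SPT result cursor updatable and batch-buffered
CURSORSETPROP("Tables", "customers", "crsCust")
CURSORSETPROP("KeyFieldList", "id", "crsCust")
CURSORSETPROP("UpdatableFieldList", "name, city", "crsCust")
CURSORSETPROP("UpdateNameList", ;
   "id customers.id, name customers.name, city customers.city", "crsCust")
CURSORSETPROP("SendUpdates", .T., "crsCust")
CURSORSETPROP("Buffering", 5, "crsCust")    && optimistic table buffering

* ... replay the logged inserts/updates/deletes on crsCust here ...

IF NOT TABLEUPDATE(.T., .T., "crsCust")     && one round trip commits everything
   * handle conflicts, e.g. inspect AERROR() and TABLEREVERT() if needed
ENDIF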

Bye, Olaf.
 