
Using VFP apps with databases migrated to SQL Server


cmonroy (Programmer) - Mar 1, 2021
I have this multiuser VFP 9 application that has grown uncultivated for the last decade, and the idea for a long time has been to migrate all the data to SQL Server. After some time spent normalizing the old database, the migration is complete, but there is this one tiny thing that is resisting (and it is probably another giant can of worms waiting to be opened): how to make the application look at the remote database instead of the local one.

I have a local.dbc, which is where all the local tables are stored. The tables are now successfully migrated to SQL Server. I created a connection that gets an okay response from the SQL Server. Now, I would assume that where it says "OPEN DATABASE <LOCAL_VFP_DATABASE>", one has to *somehow* instruct FoxPro to look into the MSSQL database and, if so, most of the code would remain unchanged. The table names are the same, as are the names of the fields... So the question is: once I have a connection handle to MSSQL, how can I convert this to something akin to OPEN DATABASE <REMOTE_MSSQL_DATABASE>, so I can use the tables on the server?
 

The simplest thing to do would be to replace the local tables with "remote views" into their SQL Server counterparts; but performance will be horrible and, for multi-user, reckless.

You will probably end up redesigning things to bring locally only the records and columns you need immediately... and then update the server as soon as you are done.

CursorAdapters are highly regarded hereabouts. Others recommend SQL Pass-Through (SPT) with your own wrapper class. I think it was this book that had a chapter by Andy Kramek on the latter.

Recent conversations here will also be of interest.

n

 
I'm afraid it's not as simple as just pointing to the database on the server and expecting everything to work as before. There are many differences between VFP and SQL Server, both in the language and the data structures. It is certainly possible to transfer your data to SQL Server and access it from VFP - it's something that many of us do - but it is by no means trivial.

Basically, you have three options:

1. Remote views. This is the easiest way, but also the most limited.

2. SQL Pass-Through (using SQLEXEC(), etc.). This is the most flexible, but involves a closer knowledge of the workings of SQL Server and its language.

3. CursorAdapters. This seems to be the most popular. It combines some, but not all, of the benefits of the other two approaches.

It is not a case of choosing one of the above over the others. Most of us use a combination of two or more.

It is well beyond the scope of a forum post to describe these options in detail. I suggest you read up on them in the Help and elsewhere, and come back when you have specific questions.
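That said, to give a flavour of the three options, here is a minimal sketch of each. The connection string and the server, database, and table names are placeholders, not anything from the original poster's system.

Code:
* --- 1. Remote view, defined once in a DBC ---
CREATE DATABASE myremote
CREATE CONNECTION cnSql CONNSTRING ;
   "Driver={SQL Server};Server=myserver;Database=mydb;Trusted_Connection=yes"
CREATE SQL VIEW v_customers REMOTE CONNECTION cnSql AS SELECT * FROM customers
USE v_customers   && opens much like a local table

* --- 2. SQL Pass-Through ---
lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=myserver;Database=mydb;Trusted_Connection=yes")
lnId = 4711
IF lnHandle > 0
   SQLEXEC(lnHandle, "SELECT * FROM customers WHERE custid = ?lnId", "crsCustomer")
ENDIF

* --- 3. CursorAdapter ---
loCA = CREATEOBJECT("CursorAdapter")
loCA.DataSourceType = "ODBC"
loCA.DataSource = lnHandle        && can reuse the SPT handle
loCA.SelectCmd = "SELECT * FROM customers WHERE custid = ?lnId"
loCA.Alias = "crsCustomer"
loCA.CursorFill()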

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Hello,

As mentioned in another thread, we converted some apps from DBF to MS SQL.

Basically:
Create a database.
Create a connection with the wizard, perhaps after creating a DSN with odbcad32.exe in Windows.
Create a remote view and set fields and mappings, e.g. varchar(max) -> memo.
Set "send SQL updates" and set the key field in the wizard.
Now open the database and USE the remote view; see how long it takes (on a real server!).

If you get speed issues:
Add a filter to your RV and set it to a non-existent value before use, so on startup an empty view is returned.
Try again: open the database with the remote view, then USE the remote view.
In the program, set the filters and requery the view.
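A sketch of that empty-on-startup trick, assuming a parameterized remote view (the view and parameter names are invented):

Code:
* Define the view with a parameter so it can open empty
CREATE SQL VIEW v_orders REMOTE CONNECTION cnSql AS ;
   SELECT * FROM orders WHERE custid = ?vp_custid

* At startup the parameter points at a non-existent key: no rows travel
vp_custid = -1
USE v_orders

* When the user picks a customer, fetch just that slice
vp_custid = 4711
REQUERY("v_orders")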


We do something similar :
A connstring is much more flexible than a DSN and does not have to be set up on each PC; something like
oapp.cconn1 = "Driver={SQL Server Native Client 11.0};Server=ohmyserver\sql2019;Database=ohmydb;Trusted_Connection=yes;app=ohmyapp"

On startup we do "use remoteview connstring(oapp.cConn1) nodata", then we build the indexes the old DBF had, in code. These RVs only contain the fields to be presented to the user and are used for showing data only (no send updates); in the program we do a requery() after the user enters a filter, for example a date range, and the data is fetched and the indexes are updated automatically.
We also use a remote view per table, filtered on the primary key only, for update/insert/delete; that view can hold at most one record.

In addition we do oapp.sqlhandle = sqlstringconnect(oapp.cConn1) to store a handle for sqlexec(), i.e. SPT.
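A compressed sketch of that startup sequence; oapp, the view name, and the index are illustrative, and the display view is assumed to be parameterized on a date range as described above:

Code:
* Connect once and keep the handle for SQL Pass-Through
oapp.sqlhandle = SQLSTRINGCONNECT(oapp.cConn1)

* Open the display view without fetching rows, then index it like the old DBF
USE v_customers CONNSTRING (oapp.cConn1) NODATA
INDEX ON UPPER(lastname) TAG lastname

* Later, after the user enters a date range, fetch only that slice
vp_from = DATE() - 30
vp_to   = DATE()
REQUERY("v_customers")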

Other things
You may need a tableupdate(...) to make sure updates are sent (we do one after each gather).
Be careful with varchar()/varbinary: you may have to map them manually to Memo (Bin) in the view designer.
Take care with dates: in MSSQL an empty date is 1900/01/01.
Take care with NULL handling: we chose to set a default in SQL such as '' or 0.

HTH

regards
tom

P.S.: some details
For dates and other conversions we have objects which, for a "get", do a SCATTER MEMO from the RV with the PK filter (see above) and convert, for example, 1900-01-01 to ctod(" . . "); for a "save" they do a GATHER MEMO and a tableupdate().
For primary keys we use a stored procedure on MSSQL which returns an integer per table and adds 1 (used in the "save" method), but GUIDs or other approaches work too. For me this is easier for later use than getting an automatically generated value back from SQL, and easier to debug than a long GUID.
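A rough sketch of such a get/save pair; all the names are invented, and the empty-date mapping follows the description above:

Code:
* "Get": point the single-record update view at one PK, copy the row to an object
FUNCTION GetRecord(tnId)
   vp_pk = tnId
   REQUERY("v_customer_upd")         && PK-filtered view, at most one row
   SELECT v_customer_upd
   SCATTER MEMO NAME loRec
   IF loRec.lastorder = {^1900-01-01}
      loRec.lastorder = {}           && map SQL Server's "empty" date back to a VFP empty date
   ENDIF
   RETURN loRec
ENDFUNC

* "Save": write the object back and commit to the server
FUNCTION SaveRecord(toRec)
   SELECT v_customer_upd
   GATHER NAME toRec MEMO
   RETURN TABLEUPDATE(1, .T., "v_customer_upd")
ENDFUNC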


 
Just to add another point ...

We've talked here about the mechanics of connecting to SQL Server. But another very important point is your application's architecture.

If you simply move your data to a back-end server and then access it as if it was a VFP database, you would not get the benefit of a client-server system. That benefit comes about when you limit the data that travels across the network.

To give an example, suppose your application lets the user search for customer data and then browse and/or edit that data in a form. You could move the customer table to the back end, create a remote view to access it, and bind the controls on the form to the fields in the view.

That's fairly simple, and it would work. But it wouldn't give you the benefit of a client-server system. The problem is that, when you open the view, the entire customer table would be transferred from the server to the client app, even if the user is only interested in one or two records.

In a client-server system, you only transfer the data that is actually needed. So the search would be done on the server, and the server would return the single record or records that meet the search criteria; and it would only return the fields that the client is interested in. With much less data being transferred, there is less network traffic and the whole thing runs much faster.
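For instance, a search along those lines via SQL Pass-Through might look like this (the handle, table, and field names are illustrative):

Code:
* Fetch only the matching customers, and only the columns the form shows
lcSearch = "SMITH%"
SQLEXEC(oapp.sqlhandle, ;
   "SELECT custid, company, city FROM customers WHERE name LIKE ?lcSearch", ;
   "crsHits")
* Bind the form controls to crsHits; only a handful of rows crossed the wire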

My point is that it is not enough for you simply to move your application to a back-end server. You really need to re-think the way that the data is accessed (and updated). I don't want to discourage you: it is all very do-able. But it's not quite as easy as many people think.

Mike



__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Mike (or anyone): do I have an exception to the small-chunk "rule"? I'll explain.

Before learning about small chunks, I developed an app with a time-consuming (15-30 secs) download of 30,000 records, 60 fields.

Instead of this massive download, I could, with much effort, select smaller pieces of data each time the user selects a different customer. The problem I fear is that I would probably need to leave the connection open continuously, and I have found this causes unwanted disconnects.

Is this normal or is it just my system?

Steve
 
I agree 100% with what Mike said. Now here is the bad news:

There are a lot of VFP-to-SQL Server migration projects that fail in production. The reasons are:

1) Not doing a major rewrite. In addition to designing a SQL Server database and choosing among CursorAdapters, SQL Pass-Through, and Remote Views (not recommended), a major rewrite of the application will be required. The complexity of the rewrite depends on the complexity of the original application. Thorough testing will also be required.

2) There is no SQL Server DBA and the developer has no DBA experience. The DBA is responsible for implementing Disaster Recovery plans, which include (in a "Full Recovery Model"):
a) Scheduling and implementing periodic database backups - required.
b) Scheduling and implementing periodic differential backups - optional.
c) Scheduling and implementing periodic transaction log backups - required. This is particularly important for reasons that are beyond the scope of this discussion.
d) The ability to perform "Point-In-Time" recovery, which involves restoring database backups (required), differential backups (if taken), and transaction log backups (required).

In a "Simple Recovery Model":
a) Scheduling and implementing periodic database backups - required.
b) The ability to restore database backups. There is no "Point-In-Time" recovery, which means that data will be lost since the last database backup.

A DBA has other important responsibilities which are beyond the scope of this discussion.

Note: Any argument that minimizes the above is dangerous thinking.
 
Hello,

@Steve
Loss of connection should not occur in a stable network.
You get the data from the connection into a cursor (all or part).
In client/server, less is better, but you still need a connection to requery() or update, I think.
We keep the connection (a shared connection) open all the time and it works fine; with about 50 users and powerful servers on the client's site, resource use does not matter much.
This could be a whole book of its own: connection pooling, shared connections, licences, ...

@vernpace
I agree with the need for backups, of course.
In MS SQL this can be set up with a wizard that schedules a full backup - shrink data - shrink log within minutes. You need someone to set this up.
In systems with replication, or with backups during the day, this is a little more complicated, but then it's a job for a "real DBA".

Regards
tom


 
Steve,

Leaving the connection open throughout the session vs. opening and closing it for each query - that's a trade-off.

Constantly opening and closing the connection is time-consuming. It could add a significant overhead to each query. On the other hand, many back-end databases, including SQL Server, impose a restriction on the number of simultaneous open connections. If you keep your connection open the whole time, there is a risk that other clients won't be able to get a connection.

My own approach is always to keep the same connection open as much as possible. The limit on the number of connections is configurable (and involves other trade-offs, which I won't go into here), but I have not known it to be a limiting factor.

It's true that there is the risk of dropping a connection, although I have never known this to be a serious problem. My solution would be to trap any errors caused by a missing connection handle, and at that point to get a new connection and then to repeat the command that caused the error. The cost of doing that is small compared to the cost of always opening and closing the connection on each access.

So my advice would be to go for the "small chunk" architecture with the same connection handle the whole time. At least, that's my preferred approach. I accept that your situation might be different.
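A minimal sketch of that trap-and-retry idea; the wrapper name and the single-retry policy are my own invention, not Mike's actual code:

Code:
* Run an SPT command; if the handle has gone stale, reconnect once and retry
FUNCTION TrySqlExec(tcCommand, tcCursor)
   LOCAL lnResult
   lnResult = SQLEXEC(oapp.sqlhandle, tcCommand, tcCursor)
   IF lnResult < 0                     && failed - possibly a dropped connection
      TRY
         SQLDISCONNECT(oapp.sqlhandle) && discard the dead handle
      CATCH
      ENDTRY
      oapp.sqlhandle = SQLSTRINGCONNECT(oapp.cConn1)
      IF oapp.sqlhandle > 0
         lnResult = SQLEXEC(oapp.sqlhandle, tcCommand, tcCursor)
      ENDIF
   ENDIF
   RETURN lnResult
ENDFUNC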

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
tomk3,

With a "Full Recovery Model" database, you do not want to use DBCC SHRINKFILE for a log (LDF) file. This defeats this whole purpose of point-in-time recovery. If this is the case, then change the database to "Simple Recovery Model".

We perform disaster recovery testing for our SQL Server projects and recommend others do the same. The question is: how much is management willing to stomach with respect to data loss, downtime, unhappy customers, etc. when disaster strikes? How many jobs will be on the line if it is not handled appropriately? Unfortunately, there are those who don't worry about those scenarios and are unprepared - and when disaster strikes, well ....
 
Thank you very much, Mike. From the responses it seems ok to leave the connection open, especially since I don't expect many users in this app. I will test this again.

To be honest, I have not noticed any delay by constantly opening and closing the connection. If I keep it open, I will use your suggestion to trap connection errors. Good idea.

Mike said:
I accept that your situation might be different.
Not really. I can redesign this app to chunk small.

FWIW, regarding backups: I'm a believer! Many problems can be cured with a backup. If I have to ask my problem client if they have a backup, I am usually wasting my breath because I know what the answer will probably be (or they wouldn't be calling me).



 
Mike Lewis said:
If you simply move your data to a back-end server and then access it as if it was a VFP database, you would not get the benefit of a client-server system. That benefit comes about when you limit the data that travels across the network.

I second this, and want to stress how often I have seen people trying to recycle all the legacy code of an app as it worked on DBFs, using SET FILTER and LOCATE. To recycle that code, they often just build views or SPT queries which fetch all the data by

Code:
SELECT * FROM table

Used to replace a USE, or a DBF in the DE of forms, this works quite well for small sets of data, but it's like borrowing a whole library just to make sure you find the one book you actually want to read. This is often based on a misunderstanding, from not thinking about how the VFP way actually works. If you USE and then BROWSE a table, no matter whether it has just 100 records or is close to 2GB, you get a grid almost immediately showing the top of the records (no matter whether in physical or index sort order). And you can even use the scrollbar to go to the bottom, or anywhere in the DBF, and still get very responsive behavior from the grid, as if all the data were local. That's obviously not the case; even allowing for caching, you don't get 2GB of a DBF within milliseconds. You mainly profit from the grid behavior (BROWSE is a grid) of only fetching the rows it displays. Network traffic recording software can show you this.

To get the same from a remote database, you can't rely on the grid to only fetch as many rows as it displays: if you SELECT * FROM table, you actually load all that data through the network. And then you have the data locally, but not indexed. Indexing a query or view result is possible, but it has to be redone every time you query the data.

So overall, in short: you can't recycle the code as-is. A SET FILTER should become a WHERE clause, so the query only fetches the data the filter allows. When you do a LOCATE you're usually only interested in one record, so query only that one. All this thinking has to be done in reverse, i.e. in advance of the query, not as an afterthought.
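Concretely, that kind of rewrite might look like this (table, field, and handle names are illustrative):

Code:
* Old DBF style: fetch everything, then narrow it down locally
*   USE customers
*   SET FILTER TO city = "EDINBURGH"
*   LOCATE FOR custid = 4711

* Client/server style: let the server narrow it down first
lcCity = "EDINBURGH"
SQLEXEC(oapp.sqlhandle, ;
   "SELECT custid, company, city FROM customers WHERE city = ?lcCity", "crsCity")

* A LOCATE for one record becomes a single-row query
lnId = 4711
SQLEXEC(oapp.sqlhandle, ;
   "SELECT * FROM customers WHERE custid = ?lnId", "crsOne")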

Chriss
 
Another point: a misunderstood compatibility feature of CursorAdapters is the idea that you can work on the cursor as if it were the DBF. That's not it; but the CursorAdapter does offer more than SPT for reusable code, provided you already used the recommended 3-tier architecture with buffering and TABLEUPDATE(), REQUERY(), etc.

A REQUERY() on the alias a CursorAdapter maintains triggers its corresponding CursorFill or CursorRefresh events, so you can keep working "procedurally" with these functions and have a CursorAdapter object react to them. But that doesn't make the CursorAdapter "the table". If you didn't program in a 3-tier architecture before, you won't profit much from this kind of code compatibility of CursorAdapters.
The compatibility feature is not that you get a cursor - local/remote views and SPT queries also give you that; all data fetching ends up in a VFP work area. The main feature is that the procedural functions trigger object methods, so you can move to OOP usage of CA instances without changing the calling code.
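A small sketch of that event hookup; the class, table, and parameter names are invented:

Code:
* Subclass whose events fire when procedural code touches its alias
DEFINE CLASS CustomerCA AS CursorAdapter
   DataSourceType = "ODBC"
   Alias = "crsCustomer"
   SelectCmd = "SELECT * FROM customers WHERE custid = ?vp_custid"

   PROCEDURE BeforeCursorRefresh
      * runs whenever old-style code calls REQUERY("crsCustomer")
      WAIT WINDOW "Refreshing from SQL Server..." NOWAIT
   ENDPROC
ENDDEFINE

loCA = CREATEOBJECT("CustomerCA")
loCA.DataSource = oapp.sqlhandle    && reuse the SPT handle from above
vp_custid = 4711
loCA.CursorFill()                   && fills crsCustomer

vp_custid = 4712
REQUERY("crsCustomer")              && procedural call, routed to the CA's refresh events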

Chriss
 