
VFP migration path, MS SQL anyone?

Status
Not open for further replies.

Steve Yu

Programmer
Nov 6, 2021
Hello,

We have a 30-year-old character-based package coded in xBase and ported through various VFP versions to 9.0 currently, with few coding changes;
no forms, no popup menus, just plain old @ SAY GET.
It has been heavily modified with a myriad of features and enhancements, and it performs very well.
Then Microsoft dropped support for VFP (I know that was years ago, but time flies).
Who cares, you might ask; I just saw Windows XP running the X-ray machine in my doctor's office the other day.
The package is too valuable, and we are too dependent on it, to risk it being discontinued and suddenly stopping to work one day.
So we have started searching.
We have evaluated many 'modern' packages for a possible migration replacement, including NetSuite and a close cousin, AccountMate, which has been ported from character mode to VFP forms. But none of them are suitable.
We are considering two factors: the back-end database and the front-end user interface, whereas our current system is tightly coupled, two in one.
We have concluded the evaluation and have begun the migration process, planning to deploy MS SQL as the back end AND keep the character-based interface by replacing VFP SEEK/LOCATE/REPLACE/etc. commands with VFP SQL pass-through commands.
It's going to be tons of rewriting and testing, but we simply have too much customization to start over with a canned package with a modern interface and all the bells and whistles that we might not want or need.
I'd like to ask for comments and suggestions.
Thanks in advance.
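For a sense of the scale of such a rewrite, a single lookup might change roughly like this (the connection string, table and column names here are hypothetical; in a real application the connection would be opened once at startup and reused):

```foxpro
* Before: classic xBase lookup against a DBF
* USE orders ORDER sono
* SEEK m.lcSoNo
* lFound = FOUND()

* After: VFP SQL pass-through against SQL Server
lnHandle = SQLSTRINGCONNECT("Driver={ODBC Driver 17 for SQL Server};" + ;
    "Server=MYSERVER;Database=MyErp;Trusted_Connection=Yes")
IF lnHandle > 0
    * ?m.lcSoNo passes the VFP variable as a query parameter
    IF SQLEXEC(lnHandle, "SELECT * FROM orders WHERE sono = ?m.lcSoNo", "crsOrder") > 0
        lFound = (RECCOUNT("crsOrder") > 0)   && replaces FOUND()
    ENDIF
    SQLDISCONNECT(lnHandle)
ENDIF
```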

Steve Yu
 
Steve,

You asked for comments and suggestions. My two main comments would be:

- Moving your data to Microsoft SQL Server (to give it its correct name) - or to any other back-end database system - will bring several benefits.

- But the exercise will be far from trivial, with a long learning curve and many pitfalls.

Also, don't make the mistake of thinking that moving the data to a back-end will solve the problems that you are facing with your old dBASE-style code. A better strategy might be to move the front-end to a modern Visual FoxPro environment first, and to move the data later. In particular, the issues you have mentioned, such as the use of ON KEY LABEL, etc., will not themselves be solved by changing where you store the data.

Among the benefits of moving to SQL Server:

- Much better security than you can hope to get with DBFs.

- Better performance (but only if you redesign the application to take advantage of the client-server model).

- Ability to centralise many admin tasks.

- Easier to share the data across programming languages, applications and environments.

Among the issues you will face:

- Differences in the commands and functions you use to access the data (VFP SQL vs. T-SQL).

- Differences in data types (including NULL support).

- The need to re-design your data access to take advantage of client-server.

- The (not inconsiderable) cost of SQL Server licences.
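To make the first two issues concrete, here is a rough sketch of how a couple of familiar VFP idioms change (table and column names are illustrative):

```foxpro
* Empty vs. NULL: a blank DBF date tests true with EMPTY(),
* but in SQL Server the column is either NULL or a real date.
SQLEXEC(lnHandle, "SELECT * FROM orders WHERE shipdate IS NULL", "crsUnshipped")

* Date literals: VFP's {^2021-11-06} syntax means nothing to T-SQL;
* pass dates as parameters instead of embedding them in the string.
ldCutoff = DATE(2021, 11, 6)
SQLEXEC(lnHandle, "SELECT * FROM orders WHERE orderdate >= ?m.ldCutoff", "crsRecent")
```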

Those are just a few quick comments off the top of my head. No doubt other forum members will give you useful advice, which we can discuss in more detail.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Hello,

we did it like this:
Create an SQL DB and a FoxPro DBC, and add remote views (RVs) with the same names as the DBFs.
(We created a tool that does this for a directory of DBF/CDX files; it also creates the startup program with open/index code and gives you a general handle for SPT (SQL pass-through, which returns a cursor).)

In the startup program, open the DBC and the RVs, and create the same indexes as on the DBFs.
Now your program should work as before (let's not mention speed for now), except for dates:
you have to implement some logic for empty dates; in MS SQL an empty date is 01.01.1900.
(In our "data layer" a data object does that on SCATTER.)

Maybe you have to add a TABLEUPDATE() after each GATHER/APPEND, just to be sure.
Maybe you add error handling; there is no locking like with DBFs.

To optimize, set filters on the remote views (parameterized views) for bigger amounts of data.
We use the RVs for small data (like address types, which only have a few hundred records) for view/select/edit (new, delete).
For bigger ones we ask the user for a filter (from/to, group, ...) and create a cursor via SPT to present the data.
For saving/append/delete we use an RV with a filter on the primary key (so max = 1 record).

I know RVs are often called old-fashioned, but I never found an advantage in using cursor adapters (for us).
By opening a view like "USE rv_name NODATA CONNSTRING (oapp.csqlconn)", simply changing the connection string lets you easily change server, DB, ...
(REQUERY() when necessary, e.g. when filters change.)
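A minimal sketch of this approach, with made-up names (the DBC, view, index tag and connection-string property are all assumptions for illustration):

```foxpro
* Startup: open the DBC and the remote views, no data fetched yet
OPEN DATABASE myerp.dbc
USE rv_order NODATA CONNSTRING (oApp.cSqlConn) ALIAS orders IN 0
SELECT orders
INDEX ON sono TAG sono    && local index so existing SEEKs still work

* Later, when data is actually needed:
REQUERY("orders")

* Empty-date fix-up on the way in (SQL Server has no empty date;
* 1900-01-01 is the conventional stand-in)
SCATTER MEMVAR
IF m.shipdate = {^1900-01-01}
    m.shipdate = {}
ENDIF
```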

Regards
tom

 
Mike Lewis,
Thanks for the info.
I am, and have been, a lifelong xBase programmer, and I am of the opinion that VFP provides the best platform/environment/tools for a programmer.
It's heaven compared to my first programming project, when I had to punch my code onto cards, hand the deck over to the operator to load into the mainframe in the back room, and come back in the afternoon to get my printout. And VFP is also better than all the other platforms I worked with throughout my career: IBM's AIX (vi as editor), HP 3000 COBOL (ISAM access), WordStar with dBase II/III. Not sure how many people can relate to that anymore.
That being said, VFP is heading into the sunset and this project of ours is to find the next 'modern' system for replacement.
MS SQL Server will accomplish half of our goals by supplanting the DBFs and NDX/CDX files; it is supposed to be more secure, have more capacity, etc., so there should be no debate about this move. And it opens up the possibility of expanding our web presence; we will no longer be limited by our current VFP-based web coding (Westwind Web Connection).
The front end is a totally different story.
Our reasons for not considering a newer GUI-based system, built in house or otherwise (our legacy system is character based, 80x25, @ SAY GET all the way):
1. Everyone likes it (for 30+ years); it is faster than point-and-click, and it gets the job done very efficiently.
2. We have tons of customizations that would have to be reprogrammed, should we change the front end or move to another package such as NetSuite, for example (that is, if NetSuite allows custom changes at all).
3. The MS SQL Server license cost is OK, but not the cost of new packages on the market; we are talking about a quarter of a million bucks, easy, when it's all said and done; and that's just the initial investment.
It is a tough choice to stick with our current xBase code.

Steve Yu
 
Tomk3,

We are concerned about the speed of SPT.
Our largest files may have a couple of million records.
As everybody is aware, you don't want to use SET FILTER on files of this size (if that's not true, can someone tell me what I did wrong?).
Most of these large files have permanent indexes, which are refreshed once a month, so we use SEEK as a rule for access, which is very fast.
How fast or slow would SPT be compared to SEEK on large files?
We have literally hundreds, if not thousands, of SEEK commands in our xBase code; would SPT work as fast where only one record, or none, will be returned, or will I have to fine-tune the dbo.table settings?
Second question: is your current system VFP form based?
I'm not sure I can add a remote view and then have all my existing character-based code work without much change on the new MS SQL tables.
We don't have dbc or RV, just plain dbf and ndx/cdx.
Regards,

Steve Yu

 
Steve,

I mentioned that SQL Server will give you better performance only if you redesign the application to take advantage of the client-server model. Allow me to elaborate on that.

A typical dBASE / Foxpro application often has "VCR-style" forms. The user opens, say, a Customer form. This immediately displays all the relevant details of the first customer in the table. The user can then use a set of buttons (or perhaps keystrokes) to move forward or backward through the customers one record at a time, to jump to the first or last customer, or to search for a customer.

A slightly different model is where the form contains a grid. The grid shows the basic details of all the customers (e.g. customer ID, customer name). As the user steps through the grid, the other controls on the form show the full details for the selected customer.

Both those systems work fine in a traditional DBF-based application. But if you were to port that application unchanged to a back-end database, you would lose much of the benefit of a client-server architecture.

In fact, as Tom suggested (above), this is not really an issue with small tables, but when you have many tens of thousands of customers (or whatever), the client-server approach will give you a big reduction in network traffic, which not only speeds up your own application but also the network in general.

Client-server works best when the client application retrieves from the server only the minimum information that it needs at any given time - not a byte more. With the type of forms I described above, you would have to retrieve the entire customer table, even though the user might only be interested in one customer at that moment. In a client-server system, it would be better to let the user search for a customer before opening the form - perhaps from a separate search form, or a toolbar, or even - at a pinch - a popup invoked by ON KEY LABEL. Either way, the system would then retrieve only the required data for that one customer, and display it in the Customer form. If the user then wants to see a different customer, they would open another instance of the form.

The point I'm trying to make is that by simply moving your data from DBF tables to tables in, say, SQL Server, you won't get the full benefit of the migration. You will need to re-think many aspects of your application. (Of course, there are other benefits of SQL Server besides performance that migrating your data will bring about.)
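As a sketch of that minimal-retrieval pattern (the search control, connection handle and column names are all hypothetical):

```foxpro
* The user picks a customer in a separate search step first;
* only then does the form fetch that one row from the server.
lcCustId = ThisForm.txtSearchId.Value   && hypothetical search control
IF SQLEXEC(oApp.nHandle, ;
        "SELECT custid, name, address1, city, balance " + ;
        "FROM customers WHERE custid = ?m.lcCustId", "crsCust") > 0 ;
        AND RECCOUNT("crsCust") = 1
    * Bind crsCust to the form's controls: exactly one row crossed
    * the network, instead of the whole customer table.
ENDIF
```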

Mike


__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Mike Yearwood,
Thanks for the comment.
Our situation is a little different.
The 80x25 user interface works fine for users; the system has evolved every step of the way over the years; we can now talk in real time with trading partners such as Amazon, Wayfair, Walmart, etc., as well as UPS, FedEx and many more; no manual import of Excel order sheets or typing, all automated. And we move over 200 containers of goods in a single week across three US locations, all supported by the core modules of this xBase code.
The issue is, first, that we are afraid VFP will one day stop working; and second, that it is very hard to find VFP programmers nowadays. Very few younger people in IT even know about VFP.
It is sad but it is also true.

Steve Yu
 
Mike Yearwood,

When you said 'you have a big difference coming between SPT and seeks' did you mean SPT is faster or slower, and why ?

Steve Yu
 
Steve,

Thousands of FoxPro applications have seen their demise over the years because of a failure to upgrade to new language features and related technologies. Some big companies too. CountryWide Financial developed one of the world's largest FoxPro applications; starting with 2.6 and moving to VFP9, they kept their @ SAYs and GETs, like you. They also neglected to migrate to SQL Server, like you. Bank of America bought CountryWide during the financial crisis in 2008. I was hired to assist in the migration of the codebase. I have one word to say about the project: NIGHTMARE.

What was most startling was that most of the FoxPro programmers had no knowledge of the new features of VFP9, VFP8, or even VFP7. One time I suggested the use of BINDEVENT(); BINDWHA?, they said. It is our responsibility as programmers to keep up with the new features of a programming language and other technologies - it takes time and sacrifice. In 2016, Bank of America replaced the VFP application with one written in C# with a SQL Server backend. It took 3-4 years and millions of dollars to accomplish.

More often than not, the best solution is to tear down the old application and redesign and code anew. If you and your programmers lack the talent or don't have the resources to hire the right talent, then be prepared for the demise of your application.

As far as frameworks go, I looked at some of them many years ago: YAG's CodeBook, Feltman's Visual FoxExpress, and Speedie's Visual MaxFrame. While I learned some things, they were ALL unduly complex and inflexible. As a result, I designed and coded my own proprietary framework some twenty years ago. That said, if you do proceed with your endeavor, you will have to design some reusable components - particularly for SQL Server: a CRUD API. You may find that you will eventually create the beginnings of your own framework.
 
Thousands of FoxPro applications have seen their demise over the years ... They also neglected to migrate to SQL Server like you.

Vernspace, sorry but I have to take issue with that comment. There are certainly good reasons to migrate your data to a "real" back-end system like SQL Server, but - in many cases - it is not necessarily essential for survival. I know of many applications that were originally developed in dBASE, Foxbase or pre-Visual FoxPro that are still using DBFs and are still going strong. These are applications where a high level of data security isn't needed, and where performance is adequate for the needs of the users.

In my experience, a more common cause of the demise of old applications is their failure to be updated to a modern user interface. Older applications tend to have a modal interface, allowing only one form to be open or one task to be performed at a time. Users today expect better than that.

And why do you specify SQL Server as the target for a migration? There are many other client-server databases on the market which work just as well as a back-end to VFP. They have their pros and cons, and SQL Server is one of the better ones, but it is not the only show in town.

Steve, I go back to what I said before. Don't assume that a migration to a client-server system will solve your problems. From what you have told us of your application, I suggest you focus instead on bringing it more into line with the Visual FoxPro way of doing things.

Mike


__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Hello Steve,

by "filter" I mean the WHERE clause in an SQL statement / remote view. That is not like a filter on a DBF; in SQL it reduces the amount of data to fetch (which is "THE" goal in client/server).

The speed of SPT for search and retrieval depends on many things, for example the indexes on the SQL Server, precompilation, the fields in the result set, network speed, SQL Server hardware, ...
Anyway, for seeking and retrieving one record to be edited (see below), the speed is so fast that in the real world we/our users never noticed a difference (if you have the right index in SQL, ...); usually it cannot even be measured, it's fractions of a second; see Mike Y.'s example.

In the architecture I tried to explain, it works like this:
The form has some fields for entering filter criteria (which will go into the WHERE clause), like date from/to, customer, order number, ...
When the user leaves the last filter input or presses Update, we build the filter (WHERE clauses) and send SPT, which creates a cursor; then we show the matching records in a grid on that form ("safe select").
(Instead of SPT, an RV with the same filter criteria in its definition may do, too. For us, SPT is easier and can be tested easily in SQL Studio.)

Then the user clicks one record (or changes the filter values); an RV with the primary key as a parameter (filter, WHERE) fetches that one record, the user edits/deletes it, and with TABLEUPDATE() on the RV the changes are applied to the SQL DB. This action (get one record / save / delete one record) is usually too fast to measure correctly.

It does not matter whether you use a form or @ SAY (oh, I remember Clipper 5.3 and sometimes miss it); as mentioned, in C/S you should reduce the amount of data to fetch/send. Maybe you even have something similar to let the user enter the values for the SEEK.
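The flow outlined above might look something like this in code (the filter fields, table and connection handle are invented for the sketch):

```foxpro
* Build the WHERE clause from whatever filter inputs the user filled in
lcSql = "SELECT orderno, orderdate, custid, total FROM orders " + ;
        "WHERE orderdate BETWEEN ?m.ldFrom AND ?m.ldTo"
IF NOT EMPTY(m.lcCustomer)
    lcSql = lcSql + " AND custid = ?m.lcCustomer"
ENDIF

* "Safe select": only the matching rows come back, into a grid cursor
SQLEXEC(oApp.nHandle, lcSql, "crsOrders")

* When the user picks a row, a remote view keyed on the primary key
* fetches that single record for editing; TABLEUPDATE() writes it back.
```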

Concerning frameworks:
We tested some many years ago and then created our own "set of classes" for the GUI (especially search and edit forms), the data layer and business logic, and output.
So I cannot give a recommendation.


Regards
tom

Maybe have a look at Xbase++. I never used it myself, but they claim to offer an easy migration to SQL. Hmm, and there is a demo.
 
Vernspace and Mike Lewis,

Excellent argument and comments.
Vernspace,
Can you tell us what approach BofA took initially to migrate the VFP code?
Did they try to convert '@ say get ...' to VFP forms on the front end? What was the back-end design?
Yes, I'm one of those guys who would say WHAT? to BINDEVENT(), and still do.

Mike Lewis,

I'm too old to learn to work with the GUI, to be honest.
I did create one form module to run on Apple iPad minis, with touch-screen input, to track container movements and provide real-time updates to staff in the office. We push over 200 containers in a single week through our 3 warehouses.

I also beg to differ on the reasons why old VFP systems like ours will fail if not kept up to date.
1. The modal approach (UI: one task at a time, sequential processing) actually works better for us than the point-and-click-anywhere model: each department's staff is dedicated to one task; orders in sales have to be typed in a set sequence, same in A/P and purchasing. You enter the customer/vendor code first before you proceed to line items, etc. To multi-task, people fire up multiple window sessions, each with a different function, and toggle back and forth. The F1 inventory popup is another essential multi-tasking tool. We have evaluated many 'modern' packages and never found one better.
2. Without any object-oriented coding, we are now capable of talking in real time with Amazon, Walmart, and hosts of other trading partners, sending and receiving SOs/POs via various protocols (FTP, HTTP, JSON, Java, XML, etc.). This is all done in simple .prg's running as scheduled tasks 24 hours a day.
Sales staff can pull scanned copies of checks and packing lists from LaserFiche, a document management package, with ease. We have real-time links to pull shipping labels from UPS and FedEx to match orders from Amazon, etc. There is no limit to what the xBase code can do.
Our system performs well under a fairly heavy usage load.

As Mike mentioned, we don't have security concerns, and the performance is good.
So why should we consider migration?
Even as we move toward that goal, I'm still wondering if we should do it.

Regards,

Steve Yu

 
TOMK3,

Thanks for the insight.
My concern about RVs and TABLEUPDATE() is concurrent updates.
As I understand it, there is no record locking in MS SQL the way there is with DBFs.
How do you handle updating, for example, the next sequential SO number when dozens of salespeople are entering orders and getting new SO numbers at the same time?

Steve Yu
 
Mike,

Steve indicated that he wants to migrate to SQL Server: "The MS SQL Server will accomplish half of our goals by supplanting the DBF and Ndx/Cdx; and it is supposed to be more secure, have more capacity, etc so there should not be a debate on this move. And it opens up possibility of expanding our web presence - we will no longer be limited by our current VFP based web coding (Westwind Web Connection)." And the fact that he mentioned "millions of records" ... DBFs are not the way to go here. So I have to disagree with your disagreement. My post suggested a total rewrite which would include a modern frontend - I agree with you here.

Steve,

If you are set on using SQL Server, there are other things to consider that VFP programmers don't talk about much. Are you planning on hiring a DBA? SQL Server requires maintenance plans for backup and restore operations. The decision on which recovery model to use is also important. I will not go into all the details here, but I suggest you do more homework. If you don't want to use DBAs, then you will have to code your own maintenance plans and rigorously test them. This is not trivial.

 
Steve,

At BofA we did not migrate the application to SQL server with a new interface. The project was too daunting and management wanted to dump VFP. Instead, we made modifications to meet new government regulations passed by Congress (the Dodd-Frank Act) as a response to the financial crisis. Forgot to mention: The application originated and fulfilled all home loans for BofA. It was huge. We had over 30 VFP programmers.
 
Steve,

I have come on a similar journey (albeit starting with FPW 2.6) and didn't have the time or money to freeze the FPW 2.6 application while I worked on a rewrite into something newer; instead I converted @ SAY .SPRs to VFP forms one or two at a time, as needed or as I saw fit at the time. It was surprisingly easy to flip between an .SPR's foundation READ and a VFP form's READ EVENTS as required within the same .exe. After a few years everything had been converted to VFP forms. This approach precluded any frameworks (I assumed), but I soon had my own set of classes to make things easier.

Then about 5 years ago I started the move to MariaDB back-end servers (because of no MS licensing costs, and MariaDB rather than MySQL because it doesn't have as many reserved words to clash with my existing column names).

If you are adding new forms and new tables, they could potentially be server based while you keep the existing .SPRs/forms using LAN DBFs; but in my case it was a fork in the code and wasn't ready till I had changed all the code to use my own SPT handler class and server data. As has been said... it's not a case of SELECT * the entire table from the server into local cursors and then keeping the old SEEKs etc. But I was able in the first instance to replace like for like within each .prg or method, and then refactor and improve performance within the new paradigm (I'd kept the table structures identical between DBF and server).

When converting to forms I used my own data classes to add-new/fetch/save/delete and bind the required data. A data layer with a business-logic layer on top, and it was then fairly easy to change those classes to use the MariaDB server instead of a local DBF without changing the forms or business logic. That SPT handler class is back-end agnostic, and in fact early testing was done using MS SQL Server till the switch to MariaDB was decided.

I don't disagree with what others have said/warned, but this, er, "middle path" worked for me.


hth

n
 
Hello,

Primary keys
We use a table systable (tablename C(50), lastkey INT) for primary keys; a stored procedure returns a new INT value to VFP and saves it back to systable, separately for each table. We tested with 100,000 simultaneous requests from 55 PCs, no problem; MS SQL handles it fine.
We do this because it makes merging offices easy; on delivery we set a different offset in each office.
And yes, there is an fk_office field now in newer programs, but anyway it's easier this way, just an insert from the other server. And we are lazy :). We have done it this way for ages, it's extremely fast, and "never touch a running system".
So concerning locking, there is no problem for PKs (see above).
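A sketch of such a stored procedure (the column names follow the systable description above; the rest is assumed). The compound UPDATE assigns and increments in one statement, so the row lock SQL Server takes for the UPDATE serializes concurrent callers:

```sql
CREATE PROCEDURE dbo.GetNextKey
    @tablename VARCHAR(50),
    @newkey    INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    -- assign and increment atomically under the UPDATE's own row lock
    UPDATE dbo.systable
       SET @newkey = lastkey = lastkey + 1
     WHERE tablename = @tablename;
END
```

From VFP it could be called with something like SQLEXEC(lnHandle, "DECLARE @k INT; EXEC dbo.GetNextKey 'orders', @k OUTPUT; SELECT @k AS newkey", "crsKey").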

Locking
For other tables you can do that with transactions, but we chose another way, because the only place where users can edit the same record is on base data (customer address, ...) or when activating an order, and we want to inform users WHO locks a record, even on RDP systems.
For that (very rare) case we have a table lb ("lockedby") (pk_lb, locktable C(50), fk_table I, userid I, lstart DATETIME (UTC)). If a user enters a protected record, we check whether there is an entry for that FK; if yes, the user is informed who "locked" it and when; if no, an entry is created.
On leaving the record the entry is deleted; in appquit(), in startup() and in the error handling, all entries of a user are deleted. And there is an emergency tool for the admin, plus code to delete old entries (leaving the app would be enough, too).
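The "lockedby" table might be declared roughly like this (column names and types taken from the description above, everything else assumed):

```sql
CREATE TABLE dbo.lb (
    pk_lb     INT IDENTITY(1,1) PRIMARY KEY,
    locktable VARCHAR(50) NOT NULL,  -- which table the lock is for
    fk_table  INT         NOT NULL,  -- PK of the locked record
    userid    INT         NOT NULL,  -- who holds the lock
    lstart    DATETIME    NOT NULL   -- when it was taken (UTC)
);

-- On entering a protected record: is someone already in it?
SELECT userid, lstart
  FROM dbo.lb
 WHERE locktable = 'customer' AND fk_table = 4711;
-- A row back -> tell the user who holds it; none -> INSERT your own row.
```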


Regards
tom

Conversion / start
If you like, I can upload our convert2sql. It takes a directory of DBF/CDX files, drops and creates an SQL DB, creates all tables and indexes, optionally transfers all data, creates a VFP DBC with basic RVs, and generates code to open them / create indexes for them (so you can USE and SEEK, see my first post) and to create a public handle for SPT.
But it's an internal tool, no GUI; values must be set in code, like the server name, how to identify PKs, which tables to ignore (like foxuser), ...

I prepared a sample to help a user during VFEST 2021. Many people helped me (especially on Unicode and REST with VFP), so I want to give something back. It's old, no OOP, no GUI, but it's small and works fine. Of course you have to optimize the views with parameters (see other posts), but it may be a start.
 
TomK3 and Mike Yearwood,

Thanks for the examples and elaboration.
However, I'm still in the dark as to how locking, or the lack of it, is handled in MS SQL (via VFP SPT).
In a client-server setting, it seems to me, everyone gets a copy of the table and works on the content locally; the live table on the server could very well be outdated by the time a user is ready to update the server. How does the server resolve the conflicts when, say, one hundred people are accessing the same table at the same time? (Online concert ticket sales, for example.)
Concurrent access happens almost everywhere in our application: assigning the next SO#, updating the allocated qty when orders are placed for a given item, the customer balance when an invoice is paid, and myriad other instances throughout the day by dozens of busy staff.
The legacy xBase code handles these cases easily, for example:

To get the next sequential SO#:

use control
locate for sysid = 'SO01'    && very small dbf file, a couple dozen records
rlock()                      && lock the record before updating
replace nextso with nextso + 1
unlock
msono = str(nextso, 8)

There you have it: the next order number, ready to be inserted into the order table.
No one has to wait a second or risk being rejected while trying to save a new order.
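For comparison, the same control-table pattern expressed against SQL Server can be a single atomic statement: the UPDATE locks the row, increments it, and hands the new value back in one round trip, so again nobody waits or gets rejected (a sketch reusing the DBF names above; the connection handle is assumed):

```foxpro
IF SQLEXEC(lnHandle, ;
        "UPDATE control SET nextso = nextso + 1 " + ;
        "OUTPUT INSERTED.nextso WHERE sysid = 'SO01'", "crsNext") > 0
    msono = STR(crsNext.nextso, 8)
ENDIF
```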

Regards,

Steve Yu


 
Steve,

SQL Server has a technology called "Snapshot Isolation Levels". Very powerful stuff. See here:
Note: assuming you have certain DB permissions, you can execute any T-SQL via SPT. Hint: the best way to implement SPT is by using the VFP TEXT...ENDTEXT command to construct statements. If you decide to go the SQL Server route, there are a lot of things to learn. You will have to learn (and experiment) a lot on your own. There is NO WAY you will learn it all piecemeal here - forget about it.
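A small sketch of the TEXT...ENDTEXT technique mentioned above (the query itself and the names in it are invented):

```foxpro
* TEXT...ENDTEXT keeps multi-line T-SQL readable inside a .prg
TEXT TO lcSql NOSHOW
    SELECT c.custid, c.name, SUM(i.amount) AS balance
      FROM customers c
      JOIN invoices  i ON i.custid = c.custid
     WHERE c.custid = ?m.lcCustId
     GROUP BY c.custid, c.name
ENDTEXT
SQLEXEC(lnHandle, lcSql, "crsBalance")
```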
 