Tek-Tips is the largest IT community on the Internet today!


State of VFP: is DBF a viable platform going forward, or is SQL Server better?

Status
Not open for further replies.

Steve Yu

Programmer
Nov 6, 2021
Hello,
Very glad I found this forum where VFP is still actively discussed, and it seems people are still actively working with the technology.
Having delayed our migration plan for many years (don't fix what isn't broken), we finally decided it's time to move forward.
Questions and concerns:
1. SQL Server could be a better data repository; however, DBF has served us well. Has anyone here had issues with DBF, mainly corrupted structures?
We've had corrupted ndx and cdx files once in a while, but those can easily be repaired without loss of data; and we stay away from memo and/or binary fields and SET FILTER as much as possible. DBF used to crash often when the power went off on MS-DOS PCs back in the day, but under VFP 9 it's much more stable now.
2. Could VFP and/or DBF stop working some day when the next new generation of servers comes online?
Case in point: when we upgraded all PCs to Windows 10 (all DBFs still reside on a separate server) a few years ago, crashes started to happen in a number of applications; it took us a while to find out it was a bug in Windows 10, and the crashes went away once the patch was applied.
A non-trivial feature, Excel import (APPEND FROM ...), is an important tool we use on a regular basis, and it only supports the Excel 5.0/95 format.
It would be a major inconvenience if it stopped working, even though alternate formats such as CSV and SDF are likely still available.
Last but not least, it's very hard to find a VFP programmer nowadays to continue the support effort here.
On popular work sites like COWORK, you very rarely see talent with a VFP background; it's all dotnet, Shopify, full stack, etc.
Granted, good talent is hard to find in any field; but for another platform like dotnet, you can at least find people who know the language.
We have been lucky to have retained and retrained one experienced dotnet programmer to do VFP; but that's the exception rather than the rule.
Regards,

Steve Yu
 
Steve,

I've given you my opinions on this issue in your other thread. But just to answer your specific concerns:

has anyone here had issues with DBF, mainly corrupted structures ?

Mostly, no. Or, at least, not in the last 15 years or so. I've been involved in at least a dozen major VFP projects, which between them have hundreds of users. I can honestly say that data corruption is rarely a problem. Of course, other folk might have a different experience.

There are other issues surrounding DBFs, of which perhaps the most important is the relative lack of security. But data corruption is not one of them.

Could VFP and/or DBF stop working some day when the next new generation of servers comes online ?

Well, nobody can say for sure, but I can't imagine a situation where a new generation of servers would cause VFP to stop working, or DBFs to somehow become inaccessible. I don't think you need worry about that. A bigger worry would be the emergence of entirely new user interfaces, which VFP would be unable to support. But I don't see that as being an immediate issue.

it's very hard to find VFP programmer nowadays to continue the support effort here.

That is your biggest problem. For several years now, I've come up against the difficulty of finding good VFP talent, and the problem can only get worse as times go by.

But to go back to your question. You ask about moving to "SQL" or to "SQL server" (with a small "s"). SQL Server is not an alternative to VFP. It is an alternative way of storing data, not of developing applications. Also, "SQL Server" (with a capital "S") is a specific product, produced by Microsoft. It is not a synonym for back-end data servers in general. Don't confine your expectations to one particular server.

The point is that it might or might not be a good idea to replace your DBFs with a back-end database (and I mentioned the pros and cons in your other thread). But that is a totally different decision from moving from VFP to a different development environment.

I hope this makes sense.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Mike Yearwood and Mike Lewis,

Thank you both very much for the valuable advice, as always.
Many areas of concern that all need to be addressed.
Data security, encryption, and UI are all important factors, but they are not high on our priority list, as our current system has been doing a good job.
Like Mike Lewis mentioned, finding VFP talent will be our biggest challenge going forward.
We thought about and evaluated a number of new packages; as Mike Yearwood mentioned, they are not cheap, and the popular pricing model, SaaS, is like writing a blank check to someone, which is not very enticing. Plus, all our customization features would have to be redone on the new system, if they are allowed at all.
Rewriting our current code, as Mike Yearwood suggested, with a new set of rapid-development tools sounds logical, but it does not get us out of VFP, and all our customizations would have to be redone as well. We'd have to fit all our existing code into the new framework before we could enjoy its benefits. That in itself is a challenge.
I don't see a perfect solution yet.
But I sincerely appreciate everyone's time and effort in helping me understand the current VFP landscape (I've felt like I'm working alone on an island).

Steve Yu
 
I am actually a bit relieved that even Mike Yearwood said "SQL Server is absolutely the way to go".

The major concern about DBFs functioning in today's networks is SMB; you can be sure it still rears its ugly head now and then. That you say you've managed to keep a stable database with the help of locks is a fine point against migration, at first sight; but the problem of not finding developers may make that a dead end in the longer run.

And there are the less standardized ways of securing data. There is file system security, and the most commonly used encryption option is Cryptor. But Xitech did worse than MS: they not only dropped support, they simply don't exist anymore.

The big advantage of a database server is not only that it is specialized for its job; it's also still an actively improved product, and it enables so many other things to connect to the same centralized data: web, apps, desktop, whatever. You can go in several directions with HPC and HA/DR.

Every vendor of NoSQL databases makes management doubt the future of SQL servers, but NoSQL features and techniques have already become integrated parts of SQL Server and other database servers. And NoSQL is not a solution for everything, and perhaps not at all for ERP.

From your last post in the other thread:

Steve Yu said:
Our current plan is convert backend to MS SQL and keep all the .prg logic as is, including the locking logic that has worked well; and we have built a small library of SPT commands that performs basic operations that would replace and/or support the current SEEK,APPEND BLANK,REPLACE,etc.

I wouldn't recommend that. As hard as it is to isolate anything from an ERP where all data is integrated, if you want to make the best use of your new backend data, you have to work in other ways than you currently do. If you implement a way for your VFP logic to keep working as-is, I bet you won't end up happy with it.

Say you implemented VFP core language features like SEEK: what does your replacement code do? Return the one record a SEEK would move to? Then what about SCAN REST or other things? You can only redesign whole processes the SQL Server way, not single features. That's the crux of choosing xBase programming: the atoms you have in the xBase language don't map to the atoms of SQL. You can do something that resembles a SEEK, but most of the time the SEEK is just part of a piece of logic that is about a whole group of data, and that asks for turning xBase constructs into SQL, not single xBase commands. And how else would you manage to keep the PRGs as they are now? You'd need DBF replacements. And the simple way of retrieving all data with a plain SELECT * FROM sqlservertable doesn't work out well: whatever changes one client makes will not only need to go to the server so others see them; you would also need all other clients that have the same data in their local cursors to update, even in cases where they aren't working in those records, or even in those tables.
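As a minimal sketch of that difference (the connection handle gnHandle and the table name arcust here are assumed placeholders, not taken from Steve's system):

Code:
* 1:1 emulation: fetch the one record a SEEK would land on.
* Subsequent SCAN REST / SKIP logic over the full table is lost.
mcust = "1001"
SQLEXEC(gnHandle, "SELECT * FROM arcust WHERE custno = ?mcust", "c_one")

* Set-based rewrite: fetch the whole group of rows the surrounding
* process is really about, then let the legacy logic walk that cursor.
mlike = "10%"
SQLEXEC(gnHandle, "SELECT * FROM arcust WHERE custno LIKE ?mlike ORDER BY custno", "c_a")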

I have seen such simple-choice migrations, and they work really badly, even when they are only for 4 users on 3 computers. It's not a low-cost solution; it's wasted time.

Chriss
 
Steve Yu said:
Questions and concerns:
1. SQL could be a better data depository, however DBF has served us well; has anyone here had issues with DBF, mainly corrupted structures ?

We have several hundred customers who use our apps daily. We find issues at particular sites, for brief periods of time at various sites, but generally speaking everything is stable. We've worked around caching issues and have had relatively good reliability given the large number of transactions recorded each day.

Steve Yu said:
we've had corrupted ndx and cdx once for a while, but those can be easily repaired without loss of data; and we stay away from memo and/or binary fields and set filter as much as possible. DBF used to crash often when power goes off on MS DOS PCs back in the days, but under VFP 9 it's much much more stable now.

Windows 8 and 10 brought with them some changes to caching. These caused us headaches for a while. Since we worked around those issues, each new release has been pain-free.

Steve Yu said:
2. Could VFP and/or DBF stop working some day when the next new generation of servers comes online ?

Yes, it's always possible. This is the concern from our company's owners, and they are migrating everything away from VFP over the next five years.

Steve Yu said:
Case in point: when we upgraded all PCs to Windows 10 (all DBF still reside on separate server) a few years ago, crashes started to happen on a number of applications; it took us a while to find out it's a bug in Windows 10; the crashes went away once the patch is applied.
A non-trivial feature of Excel import (Append From...) is an important tool we use on a regular basis; and it only supports Excel 5.0/95.

There's a free product by Greg Green. It allows you to create xlsx files without having Excel installed. We've used that tool for many years. It's been good.

Steve Yu said:
It'd be a major inconvenience if it stopped working, even though alternate formats such as CSV SDF are likely still available.

You can always access anything you need in Excel through Automation. If you go into Excel and begin recording a macro, and then stop after you do the thing you want to do, and go into Edit Macros, the source code that's there almost translates 1:1 into VFP code. And if you press F8 to step into the macro, it engages and the xl*Constants can be hovered over to determine their values. Those xl constant header files are also available online.
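For instance, a minimal Automation sketch along those lines (the file path and cell are made-up examples, not from Rick's apps):

Code:
* Drive Excel through COM Automation instead of APPEND FROM.
loExcel = CREATEOBJECT("Excel.Application")
loExcel.Visible = .F.
loBook = loExcel.Workbooks.Open("C:\temp\import.xlsx")
lcName = loBook.Sheets(1).Cells(1, 1).Value    && read cell A1
loBook.Close(.F.)                              && close without saving
loExcel.Quit()
RELEASE loExcel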

Steve Yu said:
Last and not the least, it's very hard to find VFP programmer nowadays to continue the support effort here.

True. There are many, but they're older and have moved on.

Steve Yu said:
On popular work site like COWORK, it's very rarely you'd see talent with VFP background; it's all dotnet, shopify, full deck, etc.
Granted, good talent is hard to find in any field; but for another platform like dotnet, you can at least find people who knows the language.
We have been lucky to have retained and retrained one experienced dotnet programmer to do VFP; but it's an exception rather than the rule.

They're predicting we'll be using VFP for 10 more years because, even after we offer new products, companies will be slow to change. I'm expecting about a 30% migration initially, and then about 15% per year after that, somewhere around there. We stopped supporting some of our older apps back in the 2000s, and we still have a handful of customers who will not migrate to our newer tools. It won't cost them more, and the newer tools are more secure, etc. But many people at various entities like what they're used to.

-- 
Rick C. Hodgin
 
Rick,

Really appreciate your insight and sharing of your experience.

quote:
Windows 8 and 10 brought with them some changes to caching. These caused us headaches for a while. Since we worked around those issues, each new release has been pain-free.
end quote (sorry, I don't know how to mimic your quote blocks)

Can you elaborate on ways to work around this issue?
We ran into an 'insufficient memory ...' issue on a regular basis (always in certain applications, on a few PCs only) when we first upgraded to Windows 10.
And we know the new PCs have tons of RAM; the problem mostly went away after we found a Windows 10 patch, without changing any of our code.
Just wondering if this is related to the caching problem, and what you did on the VFP side to alleviate this headache?

Your observation of the VFP landscape is kind of what I've been guessing: time to get out while you can.
I heard from someone at another VFP shop (they still sell and support VFP products nationwide) that there may be someone, or some entity, trying to pick up where Microsoft left off in a community support role.
Are there still any DevCon-type activities going on nowadays?
It is such a fine development product.

Steve Yu
 
Mike Yearwood and Chriss,

I will post a sample of my conversion code for your critique when I get a chance.

Thanks again for your input and sharing.
It really helps.

Steve Yu
 
Steve Yu said:
Can you elaborate on ways to work around this issue?

We had to change the cache settings on each machine and on the server primarily. But, to get around the performance loss there we also created a system that uses a secondary method of writing adds, updates, deletes to the server, which is to run a custom developed app on the server which communicates with each workstation. The workstations send their data operations to that app, which then verifies that the records requested through the normal table and index operations on the file server were committed as requested. If not, it does the operation for them. It's kind of like running around behind Windows and VFP and any intervening network infrastructure to make sure that every transaction is guaranteed.

In a sentence: We basically wrote a SQL Server type system to run in parallel with the normal file server system VFP uses natively.

Steve Yu said:
We ran into 'insufficient memory ...' issue on a regular basis (always on certain applications on a few PC only) when we first upgraded to Windows 10.
And we know the new PC's have tons of RAM; problem went away mostly after we found a Windows 10 patch, without changing any of our coding scheme.
Just wondering if this is related to the caching problem and what you guys did from VFP side that helped alleviate this headache ?

We have just recently started running into that issue on some operations, and we're not sure why yet. It happens sporadically, but has happened four times on one particular site in the last three weeks.

Steve Yu said:
Your observation of the VFP landscape is kind of what I've been guessing: time to get out while you can.

I disagree with our management. I think VFP is one of the most powerful tools ever created. We have a huge, working software library that's already debugged, and we're discovering ways to skin our apps to make them look more modern, etc. I don't want to see that incredible development platform die. I've told them they can always run a version of Windows that still works with VFP in VirtualBox in Seamless Mode, and it will look like a regular native desktop app.

Instead, they're literally investing millions in another platform that, from the migration training we've had so far, is going to take 5x as long to develop in as our existing VFP code base, for new code and for changes / bug fixes / enhancements.

I think it's a completely wrong decision to leave VFP.

--
Rick C. Hodgin
 
Steve Yu said:
I will post a sample of my conversion code for your critique when I get a chance

I'll definitely take a look at that. There is one programmer, whose name I don't remember, who did some stored procs and functions that emulate the VFP functions often used in queries (LEFT, RIGHT, ALLTRIM, and other string functions, and some more), so you can run a query written for VFP on SQL Server. That I don't remember his name has, I think, to do with the fact that the library he sold didn't get much traction. That's also a reason I think an approach of bringing xBase-like features over to an SQL Server database is not going to work.

I mean, it's easy to see that a SEEK of some value in a single-field index translates to such an index on the SQL Server side plus a query SELECT * FROM table WHERE field=value that also makes use of that index. But as said, when you use that, you can't SCAN REST or do similar things on that result alone; I doubt this works. So thinking it through logically, you haven't implemented a SEEK function, but rather SQL functions that matter for the several types of forms you have. The main distinction will be search forms and data maintenance forms.

I have to say that SQL Server has read locks. I don't know if you've looked into what SQL Server itself offers in that regard with hints: you can query FROM SOMETABLE WITH (XLOCK, ROWLOCK), for example. That resembles a row lock, which is released when the transaction that put the lock on the row ends. Have you reached out to an SQL professional to help you find out whether the features you need already exist?
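A rough sketch of using such a hint from VFP via SPT (gnHandle and the orders table are assumed names, not from Steve's system); the row stays locked until the transaction ends:

Code:
* Take an exclusive row lock; competing writers wait until COMMIT.
SQLEXEC(gnHandle, "BEGIN TRANSACTION")
SQLEXEC(gnHandle, "SELECT * FROM orders WITH (XLOCK, ROWLOCK) WHERE orderid = ?mOrdId", "c_lock")
* ... let the user edit, then UPDATE the row ...
SQLEXEC(gnHandle, "COMMIT")   && releases the row lock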

Chriss

Edit: PS: I actually never needed such hints, because you can act without them, too: either with a simple last-change-wins strategy, or by separating the data of each user so there is the least overlap, i.e. don't let clerks or storemen or whatever role your users have pick any free "todo" item next, but assign items to them. That way, two employees never accidentally take on the same item.

 
Mike Yearwood and Chriss,

Attached is a short video clip of the customer file maintenance program in action.
Below is the part of the xBase code that relates to the clip, using SQL tables only, after all DBF files have been ported to SQL Server.
The syslist.prg is the pick list program in the video clip.
The cursor, aliased c_a, is the stand-in for the original DBF that makes it possible for us to reuse the xBase code.
Take a look and feel free to comment.
Regards,

Steve Yu


************************* partial .prg

Do Case
Case mselcust

select a

*before SQL
*!* Use &m0custf
*!* If .Not. m_combo
*!* Set Index To &m0custf..Ndx &&
*!* Else
*!* Set Index To &m0custCf..Ndx &&
*!* Endif
*!* @ 21,1 Say '***** Searching File *****'
*!* Set Exact On
*!* Seek mcust
*!* Set Exact Off

*cursor alias is mapped verbatim to the xBase work area, in this case 'c_a' to area a, so subsequent xBase logic can carry on as before

dc_sqlexec("select * from &m0custf where CHARINDEX( acct, ?m0acct ) > 0 and custno = ?mCust ", "c_a" )
IF EOF() && build a partial or full cursor
mCustLike = TRIM( mCust ) + "%"
dc_sqlexec("select * from &m0custf where CHARINDEX( acct, ?m0acct ) > 0 and custno like ?mCustLike order by custno ", "c_a")
If Len(Trim(mcust)) <= 2 && must be longer than two letters for ADD option
maddok = .F.
Endif
Store Trim(mcust) To m0key
Store 'custno' To m0field
Store 'ARCUST' To m0file
Do syslist With m0field, m0key, m0file, 4, "AND CHARINDEX( acct, ?m0acct ) > 0 " && pick list

*******************************
*edit customer info, after full screen READ
*
*original xBase syntax now applies to cursor c_a w/o change
*

@ 21,2 Say '***** Updating Customer File *****'
* SELECT a && pre-sql
SELECT c_a
Replace company With mcompany, contact With mcontact, Title With ;
mtitle, address1 With maddress1, address2 With maddress2, city ;
WITH mcity, state With mstate, zip With mzip, country With mcountry
Replace phone With mphone + ' ' + mextn, terr With mterr, indust ;
WITH mindust, salesmn With msalesmn, Source With msource, Code With ;
mcode, Type With mtype, pdisc With mpdisc
*
* more replace ......
*
* ' Update change marker, remove locking
*
Replace signature With Iif(signature < 99, signature + 1, 0)
@ 21,0
*
* now update SQL backend
*
SELECT c_a
SCATTER MEMVAR memo
dc_autoupdate( "c_a", m0custf, "", "custno=?mcust" )

******************************************************
 
 https://files.engineering.com/getfile.aspx?folder=6ccce997-3813-4e50-a1ea-7b9a20d58ba1&file=VFP-SQL.mp4
Additional notes on the customer maintenance logic:
The input prompt for customer number:
1. '?' gets the full customer list
2. a partial customer number gets a matching list
3. a valid customer number goes to full screen display of selected customer
4. a number not found gets an option to ADD.
We have considered ADO Recordset constructs to replace the SCAN/ENDSCAN/SKIP commands: ADO does seem (we haven't tested it ourselves) to have a more 'native' SQL update interface, but the syntax is just too drastically different from xBase to be considered (for example, instead of keeping the xBase SKIP command, you'd need to say something like objDataControl.objRecordset.MoveNext, and that is too much).
Also, Microsoft has flip-flopped on ADO support over the years, so we'd be back to the same end-of-support issue as with VFP.

Steve Yu
 
First off, I see you didn't just implement an analogy of SEEK but, in the context of how the SEEK is used in this case, you get data for all customers that start with something.
CHARINDEX is a bad choice, though, as it can't be optimized with a SQL Server index. The INDEX in CHARINDEX just refers to the position of a substring.

With a SEEK you find customer names starting with something. So it needs to be translated to
Code:
acct LIKE ?m0acct+'%'
So an index on acct can be used.

See the discussion here:
If you have good performance with CHARINDEX, then you have a short customer list (not necessarily short in business terms, but short enough technically that optimization plays a lesser role).
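Side by side, with made-up table and parameter names (arcust and @prefix are placeholders):

Code:
-- Not sargable: CHARINDEX forces a scan; an index on acct cannot be used
SELECT * FROM arcust WHERE CHARINDEX(@prefix, acct) > 0

-- Sargable: the optimizer can do an index seek on acct
SELECT * FROM arcust WHERE acct LIKE @prefix + '%'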

So you're doing better than you announced: you're making a contextual translation of the code. It will take manual inspection and won't simply translate. Overall, yes, once you have cursors, you should be able to make them the ControlSources of controls, or list them in a combobox or similar construct in legacy @..SAY code.

So you have to learn better ways of T-SQL.

Here are two examples:

Query Optimization basics:
Avoid SELECT *

Look out for advice from Brent Ozar and Pinal Dave wherever they post. Take a look at Redgate products related to SQL Server.

The positive feedback is that you don't actually try to convert code 1:1. Notice, though, that any code using c_a instead of a can't make use of SQL Server indexes; you'd have to index c_a for that to work. Therefore the strategy of getting all data matching an input like m0acct is a good choice, especially since you ask for at least two characters. Also notice that once you have the result for 2 characters, any further input just shrinks the result you already have. And then an index on acct on the c_a cursor can help: if you retrieved data for acct LIKE 'mi%', you can SEEK 'mil' when Miller is searched.

It can also make sense to rethink acct. If you store a full name, you now know you can only optimize when a field starts with something, just like SEEK does (even with EXACT ON) and, by the way, as LIKE 'something%' also does since VFP9. When you store full names and usually look up the last name, acct is then better stored as lastname, firstname.

Further optimization possibilities arise when using full-text indexes. But that's a topic for another time; there are still many basics to wrap your head around.

Chriss
 
I wonder about your code for dc_autoupdate.

You have to have a parameter mcust in that routine so the passed-in value of m0custf can be used in an SQL UPDATE with WHERE custno=?mcust.

That needs a very specific dc_autoupdate that always takes in a customer account number. What about other cases?

Here you have a good option of using private variables instead of local ones: they don't need to be passed into a routine but can still be used there. So a general dc_autoupdate would only require passing in the WHERE clause; it could act on the current row of the current work area as the source of the data to store with an UPDATE, and you can use the variable names you define in the calling routines. There's no need to have them as parameters of the routine, neither LPARAMETERS nor PARAMETERS.
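A loose sketch of that idea (dc_autoupdate's real internals are Steve's; gnHandle, the field list, and arcust are placeholders here):

Code:
* Caller: a PRIVATE variable is visible inside called routines,
* so ?mcust in the WHERE clause resolves without being a parameter.
PRIVATE mcust
mcust = "1001"
SELECT c_a
DO dc_autoupdate WITH "arcust", "custno = ?mcust"

PROCEDURE dc_autoupdate
LPARAMETERS tcTable, tcWhere
* acts on the current work area's row as the data source
SQLEXEC(gnHandle, "UPDATE " + tcTable + " SET company = ?c_a.company WHERE " + tcWhere)
ENDPROC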

Chriss
 
In general, use of SPT is perhaps easier to begin with, but when it comes to updatable SPT, you need to code a lot of CURSORSETPROP() calls to make the query result of an SQLEXEC() do an update on the SQL Server backend. It's possible, and it is the alternative to doing a dc_autoupdate.
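A sketch of what "a lot of CURSORSETPROP" means for one SPT cursor (the table and field names are examples, not Steve's schema):

Code:
SQLEXEC(gnHandle, "SELECT custno, company FROM arcust", "c_a")
SELECT c_a
CURSORSETPROP("Buffering", 5)                 && optimistic table buffering
CURSORSETPROP("Tables", "arcust")
CURSORSETPROP("KeyFieldList", "custno")
CURSORSETPROP("UpdatableFieldList", "company")
CURSORSETPROP("UpdateNameList", "custno arcust.custno, company arcust.company")
CURSORSETPROP("SendUpdates", .T.)
REPLACE company WITH "Acme Inc."
TABLEUPDATE(.T.)                              && pushes an UPDATE to the backend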

It becomes easier with the CursorAdapter, as you define the necessary information for updatability in the CursorAdapter class, usually using a builder. Then code that uses TABLEUPDATE() can be reused. It helps less with your use of REPLACE, though. Which means the main idea of reusable code the CursorAdapter was made for still requires "legacy" code that made use of buffering and TABLEUPDATE(), perhaps also after a GATHER, but not code using REPLACE directly against the DBF files.

So using a CursorAdapter is for those who at least upgraded to the recommended modern style of shared data access with buffering. That's a goal you'd need to achieve first before you could make better use of the CursorAdapter, so it's at least a sensible choice to only use SPT.

Chriss
 
Mike Yearwood,

Mike Yearwood said:
During order entry do you reuse the customer search feature?

Yes, the search feature, provided by syslist.prg, is called throughout our system.
With the proper input parameters, syslist.prg, a black box as far as I'm concerned, does the heavy lifting whenever a picklist is required.
It can be easily adapted for new tables with customized column headings.

Steve Yu
 
Mike Yearwood,
Thanks for the help.
Programming a new picklist in our black box, syslist.prg, only takes me 10-15 minutes, so it is really efficient.
I believe the MaxFrame 'blackbox' works in a similar way; however, I don't think your version will work in character mode on an 80x25 screen.
Ours is an ancient design, but it works for us, and the boss man does not want to change.

Steve Yu
 
Chriss,

Our dc_autoupdate() function only needs the name of the table and the key to identify the record (row) to be updated; it parses all fields in the table and replaces each and every field from the scattered array, which comes from the query cursor with the user-edited values. So the function should and will work with any table.

Steve Yu
 
Chriss,

On SEEK and 'any code using c_a (alias) instead of a (work area) can't make use of SQL Server indexes, you'd have to index c_a for that to work':
Actually, in some cases we do build an index after the cursor c_a is generated, so we can continue to use SEEK in subsequent logic.
How much of a performance penalty that incurs, we are not sure.
We need to experiment with creating c_a without building any index and using LOCATE instead; since the cursor c_a is already ordered in key sequence, wouldn't the first LOCATE go fast, and then the subsequent SKIPs perform as desired?
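Something like this is what we'd compare (untested sketch; the custno field is as in our earlier code):

Code:
* Option 1: index the local cursor so the legacy SEEK logic keeps working
SELECT c_a
INDEX ON custno TAG custno
SEEK mcust

* Option 2: no index; LOCATE scans the (already key-ordered) cursor,
* and subsequent SKIPs walk it in its fetched order
SELECT c_a
LOCATE FOR custno = mcust
IF FOUND()
    SKIP
ENDIF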
Yes, we are manually inspecting the .prg code and not, and can't, just translating it.
It is going to be a long journey, but we believe it will be worthwhile.

Steve Yu

 
Mike Yearwood,

Thanks for the advice, as always.
I do think we have a well-performing system, and we can force our current business rules onto the new SQL backend with proper rewrites.
The sequential, one-record-at-a-time processing logic of our current software (vs. remote views and TABLEUPDATE()) kind of matches the SPT single-command structure, so there is synergy there; maybe an odd way of looking at it, but that's my opinion.
Take our order entry program, for example: the logic is extremely complex, and the picklist is the least of my worries; I'd feel much more comfortable trying to convert it command for command than trying to use MaxFrame or another development tool to replicate the logic. I hate to admit it, but this program is like a bowl of spaghetti after 30+ years of revisions and enhancements. There are few reusable parts within the module that can be broken out or organized in any way. And we have dozens of major modules like this.
Of course you could try to rebuild on a better design foundation, etc. But then you'd be better off checking out NetSuite or the like.

Steve Yu
 
Rick C. Hodgin,

The 'insufficient memory ...' issue on Windows 10 PCs:

quote:
We have just recently started running into that issue on some operations, and we're not sure why yet. It happens sporadically, but has happened four times on one particular site in the last three weeks.
/quote

We applied the following fix to the Windows 10 registry, and the error went away.
Worth a try:


Code:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters

New -> DWORD (32-bit) Value -> DirectoryCacheLifetime -> Value data = 0, Base: Hexadecimal


Steve Yu
 