
State of VFP: is DBF a viable platform going forward, or is SQL Server better?


Steve Yu
Programmer · Nov 6, 2021
Hello,
Very glad I found this forum where VFP is still actively discussed and where people are still actively working with the technology.
Having delayed our migration plan for many years (don't fix what isn't broken), we finally decided it's time to move forward.
Questions and concerns:
1. SQL Server could be a better data repository, but DBF has served us well; has anyone here had issues with DBF, mainly corrupted structures?
We've had corrupted NDX and CDX files once in a while, but those can easily be repaired without loss of data; and we stay away from memo and/or binary fields and use SET FILTER as much as possible. DBF used to crash often when the power went off on MS-DOS PCs back in the day, but under VFP 9 it's much more stable now.
2. Could VFP and/or DBF stop working some day when the next generation of servers comes online?
Case in point: when we upgraded all PCs to Windows 10 a few years ago (all DBFs still reside on a separate server), crashes started to happen in a number of applications; it took us a while to find out it was a bug in Windows 10, and the crashes went away once the patch was applied.
3. The Excel import feature (APPEND FROM ...) is an important tool we use on a regular basis, and it only supports the Excel 5.0/95 format (see the sketch after this list).
It would be a major inconvenience if it stopped working, even though alternate formats such as CSV and SDF would likely still be available.
4. Last but not least, it's very hard to find VFP programmers nowadays to continue the support effort here.
On popular job sites like COWORK, you very rarely see talent with a VFP background; it's all dotnet, Shopify, full stack, etc.
Granted, good talent is hard to find in any field; but for another platform like dotnet, you can at least find people who know the language.
We have been lucky to have retained and retrained one experienced dotnet programmer to do VFP; but that's the exception rather than the rule.
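
As a reference for point 3, a minimal sketch of that import (file and table names are hypothetical; XL5 is the Excel 5.0/95 type per the VFP help):

Code:
USE customer                        && hypothetical target table
APPEND FROM orders.xls TYPE XL5     && the Excel 5.0/95 import mentioned above
APPEND FROM orders.csv TYPE CSV     && comma-separated fallback
APPEND FROM orders.txt TYPE SDF     && fixed-width (SDF) fallback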
Regards,

Steve Yu
 
I stopped using DBFs completely in 2004 and switched to MySQL. This has been very good for us, with much better performance, maximum stability, and easy backups.

Regards, Gerrit
 
We have set these registry values (under HKLM\SYSTEM\CurrentControlSet\Services, to disable SMB client caching and opportunistic locking):

Code:
lanmanserver\parameters\smb1 = 0
lanmanserver\parameters\smb2 = 1
lanmanworkstation\parameters\directorycachelifetime = 0
lanmanworkstation\parameters\fileinfocachelifetime = 0
lanmanworkstation\parameters\filenotfoundcachelifetime = 0
lanmanworkstation\parameters\useopportunisticlocking = 0
mrxsmb\parameters\oplocksdisabled = 1
lanmanserver\parameters\enableoplocks = 0

We still have the memory issue periodically at a couple of sites. We are thinking their antivirus software might be locking down the running processes in some way.

--
Rick C. Hodgin
 
Gerrit Broekhuis said:
I stopped using DBFs completely in 2004 and switched to MySQL.

I have used various DB back-ends over the years. There are none so fast as VFP's native engine. And in cases where you have network issues, you can often make a local copy of the data and run it locally, and then just push your changes back up one-by-one.

The apps I work on presently do not do this, but I did this for a system that used IBM's DB2 for its back-end. For various reporting, we would retrieve a copy to local files, process it, and then do updates. It wasn't a real-time system so it worked.
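
For what it's worth, a minimal sketch of that local-copy pattern, assuming an open connection handle h and a custno key column (the names are illustrative, not the actual DB2 system's):

Code:
* Pull a working copy of the remote data into a local cursor.
SQLEXEC(h, "SELECT custno, fontsize FROM customers", "c_local")
SELECT c_local
* ... process / edit the local copy here ...
* Then push the changes back up one-by-one, parameterized on the cursor fields.
SCAN
   SQLEXEC(h, "UPDATE customers SET fontsize = ?c_local.fontsize " + ;
              "WHERE custno = ?c_local.custno")
ENDSCAN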

I still prefer VFP's engine, but it is showing its age. We will be moving to SQL Server and away from VFP completely over the next 5 to 10 years.

--
Rick C. Hodgin
 
Hi Rick,

Sorry I can't help more on the 'insufficient memory ...' error; our problem went away as soon as we applied the registry cache fix.
I totally agree with you on the VFP native engine; it's a perfect fit for real-time apps at small to medium sized companies.
I wrote an app for a retail business years ago that operated 7 days a week, 10 hours a day, with about 150 point-of-sale PCs located in three cities, sharing real-time data via RDP. That was before broadband Internet, and yet it performed very efficiently. Of course, that lasted until an investment banking entity bought them out and put everything under their own network.

Steve Yu
 
Steve Yu said:
Our dc_autoupdate() function only needs the name of the table and the key to identify the record (row) to be updated; it will parse all fields in the table and replace each and every field from the scattered array, which comes from the query cursor with user-edited values. So the function should and will work with any table.

I see, but that's not exactly what your code shows:

Code:
SELECT c_a
SCATTER MEMVAR MEMO
dc_autoupdate( "c_a", m0custf, "", "custno=?mcust" )

You pass in the source cursor of the edited record to store; you pass in one value, m0custf, which I don't know what that's about; but then you pass the clause "custno=?mcust". To me it looks like the clause that identifies the record.

c_a should have the custno field from the SQL Server table, so it would be okay to make use of this clause parameter. You don't have to pass in the value for mcust, though, because when you have such a variable as a private variable in the caller, that private variable can also be used from within dc_autoupdate.
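
A tiny sketch of what that scoping means, reusing the names from your call (the key value and the commented query inside dc_autoupdate are assumptions):

Code:
* In the caller:
PRIVATE mcust
mcust = "C1001"                 && hypothetical key value
dc_autoupdate("c_a", m0custf, "", "custno=?mcust")

* Inside dc_autoupdate, a parameterized query can resolve ?mcust
* directly against the caller's private variable, e.g.:
* SQLEXEC(h, "UPDATE " + tcTable + " SET ... WHERE custno=?mcust")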

Before I dive deep into the wrong details, tell me: what are the parameters of dc_autoupdate and their meaning?
It's quite clear c_a is the cursor that provides the record to update or insert in dc_autoupdate. m0custf? No idea. ""? Even less telling. And "custno=?mcust" I guessed is the where clause that identifies the record.

The other thing that irritates me is why you scatter the current record of c_a into variables. dc_autoupdate will still need to get the field names from c_a directly and could at that time also use the fields directly instead of the field values scattered to variables. Besides, you scatter to variables in the caller code, where you don't need or use these variables; that's failing encapsulation. Passing the c_a cursor name already gives dc_autoupdate all it needs to get the field names and values of the current record of c_a.

Another side note: VFP9 offers a much better scatter option: scatter the fields to a simple object that has all the field names as members, set to the field values. You could SCATTER NAME oRecord, and that can also be done within the dc_autoupdate function; or you don't scatter at all and use ?c_a.fieldname in the query.
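
For illustration, a minimal sketch of both options (the sysport/fontsize names are taken from the example discussed further down; h is an open connection handle):

Code:
SELECT c_a
SCATTER NAME oRecord MEMO       && every field becomes a property
? oRecord.fontsize              && no m.-prefix ambiguity anymore

* Or skip scattering entirely and parameterize on the cursor fields:
SQLEXEC(h, "update sysport set fontsize = ?c_a.fontsize" + ;
           " where custno = ?c_a.custno")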

SPT query results you make updatable are a much better way: then you don't need dc_autoupdate at all and can just use TABLEUPDATE(). As Mike Yearwood observed here, you're replacing bandages with other bandages. SCATTER MEMVAR should never ever be used again. When a cursor record is the source of parameter values of a parameterized query, be it the data you store or values used in a join condition or where clause, you can directly reference the cursor field names, not variables.

And the other general thing I wanted to get through to you: using private variables so they can be used in a query text means you don't need to pass them in. So say you stick with SCATTER MEMVAR: that produces private variables with the field names (unless local variables of the same name already exist!), and you then have access to them in dc_autoupdate, too, without passing them in.

I wouldn't use SCATTER without the NAME clause, but if you do, you can't know which variables came from c_a; you can't assume all private variables are field values, so you need "c_a" to find out the field names with FIELD(n). Well, the SQL query I think you build up in that routine can use these field names as field names instead of using them as variable names, which makes the scatter completely unnecessary. But you're reprogramming TABLEUPDATE() with this routine, which means you're reinventing the wheel.

Chriss
 
Chriss,

Thank you very much for the critique.
1. "parameters of dc_autoupdate and their meaning"
You guessed correctly on c_a and custno=?mcust. The second one, m0custf, holds the name of the table; in this case, it is the customer table for one of the subsidiaries the user is operating under (code is shared by all companies, with separate sets of tables). The third parameter, "", is obsolete now; it was designed to hold a list of fields to be excluded, such as the auto-increment field.
2. I'm not familiar with TABLEUPDATE() at all; do you not need to set all those isolation levels, etc., to sync the updates? In our case it is almost always one record updated per call, even in batch mode, where the line items on a given order need to be updated. No remote views or such.
3. SCATTER NAME oRecord is a very good suggestion; however, how would I traverse the object to get the field names/values as I'm doing now using FCOUNT() with c_a?
Also, I tested the SPT command 'update m0custf set fontsize= ?c_a.fontsize' as you suggested and it does not work (it prompts me for the value of c_a.fontsize);
my old syntax 'update sysport set fontsize= ?m.fontsize' works with the SCATTER MEMVAR construct. I do agree I might and should move the SCATTER command inside the called routine dc_autoupdate instead, to avoid variable/field name ambiguity.

Regards,

Steve Yu
 
AMEMBERS() gives you the field names of oRecord. You can also still use FIELD() and FCOUNT(); what's the problem? The advantage is not needing to fear the name ambiguity.
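
A short sketch of that traversal, assuming oRecord came from SCATTER NAME on c_a:

Code:
LOCAL laProps[1], lnCount, lnI
lnCount = AMEMBERS(laProps, oRecord)            && property (= field) names
FOR lnI = 1 TO lnCount
   ? laProps[lnI], GETPEM(oRecord, laProps[lnI])
ENDFOR

* FIELD()/FCOUNT() against the source cursor still works, too:
FOR lnI = 1 TO FCOUNT("c_a")
   ? FIELD(lnI, "c_a")
ENDFOR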

'update m0custf set fontsize= ?c_a.fontsize' didn't find the table m0custf, right? Always get the error message from AERROR() when a query doesn't work.

Yes, it does work; use a simpler example like this to see it working (h is an open connection handle):
Code:
create cursor foo (bar c(4))
insert into foo values ('data')

Text To cSQL NOSHOW
Create Table #foo (bar char(4))

Insert into #foo values (?foo.bar)
Endtext
SQLExec(h, cSQL)

Chriss
 
Regarding TABLEUPDATE() and how it relates to isolation levels: I think you have a wrong conception. Isolation levels already have an effect on what you read with a SELECT. If you're thinking of transactions: they make sense anyway, and if you don't do transactions manually, each update will be a single transaction. SQL Server logs every action in its transaction log, so there is nothing that works without a transaction. In VFP you can do without transactions, but one thing is also done in VFP anyway: locks. If you don't lock manually, operations on data take an automatic row or table lock; see the help on automatic locking.
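
To make the automatic-locking point concrete, a small sketch per the VFP help topic on automatic locking (table and field names are hypothetical):

Code:
USE customer SHARED
LOCATE FOR custno = "C1001"
* REPLACE on the current record auto-locks the record and
* releases the lock when done; no RLOCK() call is needed:
REPLACE fontsize WITH 10

* The manual equivalent:
IF RLOCK()
   REPLACE fontsize WITH 10
   UNLOCK
ENDIF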

You have to think about isolation levels and such things anyway, but they are a totally separate topic from TABLEUPDATE(). You also don't need remote (updatable) views to use TABLEUPDATE(); I already said this:
myself said:
but when it comes to updatable SPT, you need to code a lot of CURSORSETPROP() to make the query result of a SQLEXEC() do an update on the SQL Server backend.

Updatable SPT means you make the result of a SQLEXEC() query updatable. And that's possible. You can even do that just before TABLEUPDATE(); it's only necessary to buffer the changes, because TABLEUPDATE() processes the buffered changes. If you like, all of them in one TABLEUPDATE().

The point of TABLEUPDATE() itself is processing buffered changes, and the buffers are on the VFP side of things; TABLEUPDATE() also works with DBFs and actually replaced the whole SCATTER/GATHER concept. What you miss when you do SPT instead of views or cursoradapters is the possibility to use REQUERY(). What you set as UpdatableFieldList, KeyFieldList, WhereType (and surely more I forgot) is all about how to build up the SQL for the changes. You programmed something that's already natively built into VFP.
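
To show what that CURSORSETPROP() coding looks like, a hedged sketch reusing the sysport example from above (property names per the VFP help; h is an open connection handle):

Code:
SQLEXEC(h, "SELECT custno, fontsize FROM sysport", "c_a")
SELECT c_a
CURSORSETPROP("Tables", "sysport")
CURSORSETPROP("KeyFieldList", "custno")
CURSORSETPROP("UpdatableFieldList", "fontsize")
CURSORSETPROP("UpdateNameList", "custno sysport.custno, fontsize sysport.fontsize")
CURSORSETPROP("SendUpdates", .T.)
CURSORSETPROP("Buffering", 5)     && optimistic table buffering (needs SET MULTILOCKS ON)
* ... edit c_a ...
IF !TABLEUPDATE(.T., .F., "c_a")
   AERROR(laErr)
   ? laErr[2]                     && see why the update failed
ENDIF

With this in place, the UPDATE statement dc_autoupdate builds by hand is generated by VFP itself.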

So, to take in Mike Yearwood's advice to use transactions, you can execute:
Code:
SQLEXEC(h, 'BEGIN TRANSACTION')
lRollback = .T.
If Tableupdate(.T., .F., 'Orders')
   If Tableupdate(.T., .F., 'Orderitems')
      If SQLEXEC(h, 'COMMIT TRANSACTION') > 0
         lRollback = .F.
      Endif
   Endif
Endif
If lRollback
   SQLEXEC(h, 'ROLLBACK TRANSACTION')
Endif

Chriss
 
Chriss,

After some more testing, I've confirmed that:
1. You are right about using the cursor alias ?c_a.fontsize directly in the UPDATE instead of ?m.fontsize after SCATTER MEMVAR.
2. Good suggestion on using SCATTER NAME oRecord instead of SCATTER MEMVAR to avoid field-name/variable mix-ups; it does still require traversing FCOUNT() of cursor c_a to expose the field names in oRecord, but that's reasonable overhead.
3. Still not sold on TABLEUPDATE() or transaction processing (in general terms). Maybe it's philosophical, but here we are trying to adapt SPT to our xBase coding, not the other way around. I know very well how VFP RLOCK() and FLOCK() work and how VFP auto-locks without the RLOCK() or FLOCK() commands being issued (rarely done, since we always check before updates). The customer, item, order, etc. tables are constantly updated in real time by nearly 100 online users in three states; some days thousands of orders are generated and processed. No bottleneck, no degradation in performance.
4. Transaction/Rollback. Under VFP, this feature is a no-go from the start: all tables need to be opened exclusively. My consultant abandoned a SQL transaction/rollback project years ago for a Fortune 500 company because of performance issues. Regardless, I don't think it fits our xBase model.

Regards,

Steve Yu

 
Steve Yu said:
Transaction/Rollback. Under VFP, this feature is a no-go from the start: all tables need to be opened exclusively.

Wrong; you only disable writes to the DBFs involved in a transaction until it is rolled back or committed. And that's the reason for the strict rule to keep transactions short. Notice you only start the transaction once you already know which changes to save (from the buffer). So a transaction isn't at all meant to replace locking. It exists to protect the integrity of the data, to not introduce half-baked changes that are incomplete.

You can use transactions while other users have the table open shared, that's not a problem.
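
As a concrete illustration, a native-VFP counterpart of the SQL pass-through example above, assuming buffered tables in a database container (free tables can't participate in native transactions):

Code:
BEGIN TRANSACTION
IF TABLEUPDATE(.T., .F., "orders") AND TABLEUPDATE(.T., .F., "orderitems")
   END TRANSACTION      && both buffers are committed together
ELSE
   ROLLBACK             && neither table keeps half-baked changes
ENDIF

Other users can keep the tables open shared the whole time; only writes to the affected records are blocked briefly.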

As long as you don't want transactions, you haven't understood them. Once you understand them, you'll want them at any price. They don't degrade anything like processing speed; they assure the quality of the data.

Chriss
 