
VFP9 app runs in VFP7


foxmuldr3

Programmer
Jul 19, 2012
I have an app.exe created in VFP9. If I go to VFP7's command window, change to that directory, and type "do app.exe", it runs. There is no "app.prg"; instead there is a "start.prg" that is compiled into "app.exe".

I get a few error messages here and there for properties that didn't exist in 7, such as "frmName.Themes", but it runs.
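
For anyone trying the same experiment, here's a minimal sketch of what I use to keep it running while collecting those messages. The names are just what I'd try, and the error number is an assumption: 1734 should be the "Property X is not found" error, but verify that in the version you test under.

Code:
* Hedged sketch: trap the "property is not found" errors an older version
* raises for VFP9-only properties, log them, and keep running.
* Error 1734 is assumed to be "Property X is not found" - check the number.
ON ERROR DO CompatError WITH ERROR(), MESSAGE(), PROGRAM(), LINENO()
DO app.exe
ON ERROR

PROCEDURE CompatError
    LPARAMETERS tnError, tcMessage, tcProgram, tnLine
    IF tnError = 1734
        ? "Skipped:", tcProgram, tnLine, tcMessage
    ELSE
        ERROR tnError      && let anything unexpected surface normally
    ENDIF
ENDPROC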

Does anybody have experience with this?

Best regards,
Rick C. Hodgin
 
I was also able to run it in VFP6.

Best regards,
Rick C. Hodgin
 
Well, the object code is quite downward compatible; you can run an app that uses no VFP9-specific properties, commands, or functions with older runtimes, too.

I use this all the time, not with DO, but with NEWOBJECT(). I'm using apps as a kind of DLL with FoxPro classes as interfaces for data access, used by a variety of applications written in VFP7-VFP9 at a customer. This way I don't need to hand out new VCX libraries to every developer; they can simply swap the app file (granted it's been tested and passed the downward compatibility test).
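
A minimal sketch of that pattern, with invented names for the class, the library, and the app file; the third NEWOBJECT() parameter is the APP (or EXE) that contains the class library.

Code:
* All names here are made up for illustration.
loData = NEWOBJECT("dataservice", "interfaces.vcx", "dataaccess.app")
IF VARTYPE(loData) = "O"
    ? loData.Name      && from here on it behaves like any other object
ENDIF

Swapping the APP file then updates every consumer at once, as long as the classes keep their interface.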

You can use VERSION(), of course, to branch into version-specific code, if some command runs much more efficiently in newer versions and has to be "emulated" with old code for earlier versions.

VFP is not compiled to assembler or C; it's compiled to opcode/bytecode, and these bytecode values don't change just because the language is extended.
Again, Christof Wollenhaupt shows a bit about that here; not the full-blown specification, but quite enough to get a grip on it:
Bye, Olaf.
 
And an EXE is an APP with a runtime loader that loads the specific runtime it was compiled with and for. So an APP is just an "archive" of the FXPs, VCXs, etc., and the article on the FXP format is therefore also valid for the inner contents of an APP file.

Bye, Olaf.
 
When accessing prior-version *.??X files (forms, menus, classes) in a newer version of VFP, it prompts for the conversion or auto-converts. I am surprised older code would run the newly converted formats, because once you've converted you can't go back and edit the *.??X without manually manipulating the file as a table.

It seems VFPn.EXE is merely a vehicle for handling OS-specific code (such as rendering a window, receiving mouse and keyboard input, and Windows messages), whereas the VFPn*.DLL files handle the actual computing of embedded/stored data.

Best regards,
Rick C. Hodgin
 
>When accessing prior-version *.??X files (forms, menus, classes) in a newer version of VFP, it prompts for the conversion or auto-converts

That's a different issue. E.g., Anchor properties may be added if you "convert" an SCX. You can't then go back and edit this in VFP7 or earlier, true. But then you're already using new features. There are some automatisms which add defaults, etc.

Since you can't "edit" an APP, that's all different. You don't go back; you can simply execute it as long as you don't stumble upon things unknown to the older runtime.

> I am surprised older code would run the newly converted formats
What do you mean by this, as an example? Does "older code" just mean the older VFPn.EXE or VFPnR.DLL?

The difference between VFPn.EXE and VFPnR.DLL is mainly that VFPn.EXE contains the IDE itself. Some things like BUILD are not in the VFPnR.DLL, only in the VFPn.EXE. And when you run the IDE you can even rename, move or delete the VFPnR.DLL; it isn't used by the IDE, the IDE EXE is self-contained. So there is an overlap of code in VFP9.EXE and VFP9R.DLL. The VFP9.EXE is not among the files you're allowed to redistribute, as it has all the exclusives to BUILD APPs, DLLs or EXEs, while the runtime has the FoxPro bytecode interpreter shrunk down to the runtime features.

So besides the IDE-exclusive features, both are the same: one is the IDE, the other the runtime. When executing an FXP or SCX, both load object code, parse it as shown by Christof, and then make the appropriate calls to the internal implementation of the FoxPro commands and functions corresponding to the COMMAND or FUNCTION byte codes. This is just what an interpreter does, in contrast to a compiler. Both the runtime and the IDE maintain a stack and a program pointer and process the bytecode, much like a Java virtual machine processes Java bytecode. That was also the mechanism by which the Mac version of Fox worked with the same FXPs: you only needed to swap the runtime, because the FXP contains no CPU- or OS-specific code; that's all in either VFPn.EXE or VFPnR.DLL.
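
Not the real byte codes, of course, but even a toy stack machine written in FoxPro itself shows the mechanics: fetch an opcode, branch to its built-in implementation, move the program pointer on. All the opcode numbers below are invented purely for illustration.

Code:
#DEFINE OP_PUSH   1   && push the literal value that follows
#DEFINE OP_ADD    2   && pop two values, push their sum
#DEFINE OP_PRINT  3   && pop the top of the stack and display it
#DEFINE OP_HALT   4   && stop execution

LOCAL laCode[7], laStack[16]
LOCAL lnPC, lnTop

* the "compiled" program: push 2, push 3, add, print, halt
laCode[1] = OP_PUSH
laCode[2] = 2
laCode[3] = OP_PUSH
laCode[4] = 3
laCode[5] = OP_ADD
laCode[6] = OP_PRINT
laCode[7] = OP_HALT

lnPC  = 1
lnTop = 0
DO WHILE .T.
    DO CASE
    CASE laCode[lnPC] = OP_PUSH
        lnTop = lnTop + 1
        laStack[lnTop] = laCode[lnPC + 1]
        lnPC = lnPC + 2
    CASE laCode[lnPC] = OP_ADD
        laStack[lnTop - 1] = laStack[lnTop - 1] + laStack[lnTop]
        lnTop = lnTop - 1
        lnPC = lnPC + 1
    CASE laCode[lnPC] = OP_PRINT
        ? laStack[lnTop]
        lnTop = lnTop - 1
        lnPC = lnPC + 1
    OTHERWISE          && OP_HALT, or a byte code this "runtime" doesn't know
        EXIT
    ENDCASE
ENDDO

The OTHERWISE branch is exactly where an older runtime ends up when it meets a byte code only introduced in a later version: it simply has no implementation to branch to.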

Of course, as you also know, many parts and tools of the IDE are not inside VFPn.EXE but are Fox code themselves, e.g. everything you find in XSource.ZIP and the reporting engine in REPORTBEHAVIOR 90 mode.

Bye, Olaf.
 
Very scary. I can't see how this behavior is desirable as it will only introduce errors during runtime unless you happen to ONLY use those features which are entirely encased within the runtime DLLs, and even then ... what are the variables? Data corruption seems possible. Memory corruption seems possible.

Olaf said:
That's a different issue. E.g., Anchor properties may be added if you "convert" an SCX. You can't then go back and edit this in VFP7 or earlier, true. But then you're already using new features. There are some automatisms which add defaults, etc.

I understand. But as VFP makes an (arguably unnecessary) distinction between 2.x (and earlier) tables and 3.0 and later tables on first use, and an (arguably unnecessary) distinction between SCX and other format files in N.0 and (N+k).0 and later versions, it seems odd that such a product as the literal, final, compiled executable, which is the actual app itself, should be able to run outside of the same version of FoxPro.

I say again: scary.

Best regards,
Rick C. Hodgin
 
I see that downside, too, yes. But the late detection of errors is also a characteristic of VFP, due to its weakly typed nature. The further downside of such bytecode is that it does not compile optimized for a specific CPU; you always stay at the optimization level of the runtime.

Also, if you do like most VFP developers and compile an EXE, the EXE restricts you from using a wrong runtime: it specifically loads the runtime version that was embedded at compile time and errors if it doesn't find that version. But you can indeed end up with the wrong SP of the correct main version; that's not detected by the EXE runtime loader.
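
If that worries you, here's a hedged sketch of a startup check you could add yourself. The build number is only a placeholder; use the one VERSION() reports for the runtime you actually tested against.

Code:
* Placeholder build number - replace with the build you compiled/tested with.
#DEFINE TESTED_BUILD "09.00.0000.7423"
IF ATC(TESTED_BUILD, VERSION()) = 0
    MESSAGEBOX("Built and tested against build " + TESTED_BUILD + "," + CHR(13) + ;
        "but running on: " + VERSION(), 48, "Runtime mismatch")
ENDIF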

And the VERSION() function wouldn't make any sense whatsoever if VFP didn't allow newer runtimes to run older code, or older runtimes to run newer code. What could be done to improve reliability is to add a VERSION(6) returning the compile version, and perhaps also the ability to declare an overall upward compatibility, e.g. "runs with the VFP X runtime", which of course would not be a reliable fact but a developer's promise about code compatibility.

Otherwise the VERSION() function should only be valid at design time, for version-dependent compilation of code (#IF VERSION...), which indeed also works right now.

But for the moment you can compile both this:
Code:
#IF Version(5)>=700
    ? "VFP 7 or higher"
#ELSE
    ? "VFP 6 or lower"
#ENDIF

and this:

Code:
IF Version(5)>=700
    ? "VFP 7 or higher"
ELSE
    ? "VFP 6 or lower"
ENDIF

The difference is that the first code, using the preprocessor #IF, compiles to code containing only the first or the second ? statement (compiled in VFP9, only the "VFP 7 or higher" line survives), while the second sample contains both and decides what to output at runtime, under whichever runtime is actually executing the bytecode.

It may or may not be desirable; like anything, this is up to the developer. At least we have that freedom; it's up to us whether to let something like that out into the wild or not. I actually take the second approach, but it's for experts, that's for sure. And it requires more testing. It pays off for me not to need to maintain and/or even just compile 3 or 4 versions, and I only need VFP9 installed for myself. I don't even know exactly how many apps use my APP; that's up to our customer's IT department.

Bye, Olaf.
 
Olaf said:
At least we have that freedom; it's up to us whether to let something like that out into the wild or not. I actually take the second approach, but it's for experts, that's for sure.

I've been developing in Fox software (Multi-User FoxBASE+ 2.1) since 1987 ... I never knew of this ability until a co-worker accidentally discovered it the other day. It doesn't seem well publicized, or if it is, I've been living in a shoebox. :)

Still ... nice to have the freedom. Scary to have only the few details I've been able to find on this issue.

Best regards,
Rick C. Hodgin

PS -
Olaf said:
But the late detection of errors is also a characteristic of VFP, due to its weakly typed nature. The further downside of such bytecode is that it does not compile optimized for a specific CPU; you always stay at the optimization level of the runtime.

The benefits of such a design greatly outweigh the downsides in my view (though they fall short with VFP, as it is only coded for Windows ... it would be much better cross-platform).
 
The benefit of bytecode for cross-platform compatibility is also a design used by Java, sure. But it's also the reason for so many Java VM updates, and for the trouble coming from incompatible VMs and Java apps that only run with a certain VM version.

The VFP team paid much more attention to the downward compatibility of VFP and its runtime. Everything that is deprecated in VFP9 is deprecated only in the documentation: several function and command help topics merely point to newer alternatives, but you can still execute @SAY / @GET in VFP9.

CLR code also goes that route, but at least it tries to JIT-compile into code specific to the actual CPU used: Intel/AMD, Atom or whatever. Even within the single Windows platform there are quite a few CPU variants, and even just the automatic optimizations for the CPU's instruction set should help make better use of the specific computer your code runs on.

The need to first parse bytecode and then branch into the corresponding runtime implementation is a performance downside compared to simply running CPU-specific code once it's compiled, isn't it? It's of course easier to develop a platform-specific runtime interpreter than a platform- and CPU-specific runtime compiler. I would rather prefer the latter, if I had the choice, but as the developer of the language I would of course prefer the interpreter runtime and just one compiler creating bytecode usable for all platforms.

Bye, Olaf.
 
It depends on what you're doing with your code. In most cases, the slowest part of the equation isn't executing bytecodes, but rather performing the operations those bytecodes direct, such as opening a table, doing a mass replace, creating an index, executing a query, etc.

However, for straightforward execution and logic, even the bytecode environment of VFP can be streamlined, because in most cases only about 1% of source code is responsible for 80% or more of the computational time involved. As such, with a few well-placed calls to C++ DLLs, the bulk of that slow-in-VFP workload can be replaced with something faster.
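
As a sketch of what I mean (GetTickCount is a real kernel32 export; mymath.dll and SumSquares are hypothetical stand-ins for your own C++ DLL holding the hot routine):

Code:
* GetTickCount is real; mymath.dll / SumSquares are invented for illustration.
DECLARE INTEGER GetTickCount IN kernel32
DECLARE DOUBLE  SumSquares   IN mymath.dll INTEGER nCount    && hypothetical

LOCAL lnStart, lnResult, lnI
lnStart  = GetTickCount()
lnResult = 0
FOR lnI = 1 TO 1000000               && the slow-in-VFP version of the work
    lnResult = lnResult + lnI * lnI
ENDFOR
? "VFP loop:", GetTickCount() - lnStart, "ms"

lnStart  = GetTickCount()
lnResult = SumSquares(1000000)       && same work done once inside compiled code
? "DLL call:", GetTickCount() - lnStart, "ms"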

But I do agree. Having an interpreter capable of executing bytecodes, coupled to a JIT compiler that writes optimal code for the target CPU upon repeated use, is all quite nice, though in probably 99% of cases it would be unnecessary, given the high ratio of time spent inside native VFP functions compared to time spent executing bytecode.

Best regards,
Rick C. Hodgin
 
OK, it's true bytecode execution time is low, but if you do some computations with variables in loops, and those lines can be accelerated to an equivalent execution in e.g. CPU registers, and the whole loop can be cached in on-chip memory near the CPU, this could make a major difference.

You're right, it is the job of the developer to find such bottlenecks and outsource such parts of the code to a DLL. But often enough a VFP developer can use a DLL, yet not write one. I too only have shallow C knowledge today.

Bye, Olaf.
 
It's also worth noting that modern CPUs will pull data that is used regularly into L1 cache, no matter where it sits in main memory. This significantly speeds up loops over small data sets, even without register allocation.

Register access typically costs 0 or 1 clock cycles, depending on whether or not the instruction can be paired with others. L1 cache access on modern Intel CPUs is about 4 clock cycles. On AMD it's about the same, though both architectures vary.

While registers are undoubtedly faster, depending on the computation there would still be spilling and filling of the available registers. In 32-bit compiled code there are only a few registers available for integer processing: eax, ebx, ecx, edx, esi, edi, and possibly ebp, though ebp is typically tied to the stack for parameters and local/temp variable storage.

If you could always dedicate ecx to your loop counter, eax to your accumulator, esi and edi to pointer references for source and destination data, and ebp to local data, then you'd only have ebx and edx left for general processing (such as performing some computation) before needing to spill and fill. And since you lose a few clock cycles here and there on spilling and filling (which require memory accesses themselves), you're not that much better off than if you had only used memory references throughout your code.

In the end, such instances might execute more slowly, but only until the data is pulled into L1 cache, which means on repeated loops it wouldn't be that much slower. Maybe 4x slower in total, but when you're talking about the speed of register-only loops ... even 4x slower is blazingly fast. At 3.0 GHz, with roughly 4 cycles per L1 access, that's still on the order of 750 million accesses per second per core, so several hundred million computations per second per core.

The bytecode solution is ingenious for general purpose code because it allows code to be written once, run anywhere. Where people make it hard is when they try to squeeze as much performance out of something as possible, or optimize for this, that, or the other thing, making the entire engine more complex than it needs to be for a marginal increase in performance.

We'll see though. In time. :)

Best regards,
Rick C. Hodgin
 
I was used to having just the accumulator and the X and Y registers on the 6502 (VC20/C64), and a few more on the Motorola 68000 (Atari ST), but I never programmed x86 assembler. At university I spent some weeks with an IEEE pseudo-assembler, but that was not much.

>modern CPUs will pull data that is used regularly into L1 cache, no matter where it sits in main memory.

Makes me wonder why this raytracing.prg takes so much time:
It's surely not what VFP is intended for, but I'm sure the same thing done in C++ would be much, much faster, due to being compiled.

Bye, Olaf.
 
For comparison download POV-Ray and render this scene:
Code:
  background { color <.5,.5,1> }
  camera {
    location <0, 0, -3>
    look_at  <0, 0, 0>
  }                   
  

  sphere{ <0,.667,.0> .5  finish { reflection {.9} ambient 0 diffuse 0 }}
  sphere{ <-.577,-.333,.0> .5 finish { reflection {.9} ambient 0 diffuse 0 }}
  sphere{ < .577,-.333,.0> .5 finish { reflection {.9} ambient 0 diffuse 0 }}
    
    
  plane { <0, 1, 0>, -2
    texture {
      pigment { checker color <0,0,0>, color <1,1,1> }
    }
  }
  light_source { <2, 4, -3> color <1,1,1>}

Bye, Olaf.
 
It's not just because it's compiled; it's the protocol used for parameter passing, returning results, and processing data. Native CPU-based implementations use the facilities of the silicon to handle data. A language like VFP must route its bytecode operations through the protocol and function calls its design requires for data access. These may also include synchronized, thread-level access via protocol.

My comments above about 4x slower related to the difference between a register-only algorithm and one that also uses memory. They were not specifically about VFP.

Best regards,
Rick C. Hodgin
 
>It's not just because it's compiled; it's the protocol used for parameter passing, returning results, and processing data.

Well, in the end, i=i+1 or other variable calculations are done much more efficiently in C than in VFP. It's more about the variable struct than the bytecode, that's true. Still, the bytecode for i=i+1 itself adds overhead on top of that, which slows such things down awfully. In C, this + operation on an int really just increments the 4 bytes of memory holding the int value. In VFP..., well, I've shown the general variable struct.
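
You can see that overhead with nothing more than SECONDS() and a bare loop; the absolute number depends entirely on the machine, of course:

Code:
LOCAL lnI, lnLoop, lnStart
lnI = 0
lnStart = SECONDS()
FOR lnLoop = 1 TO 10000000
    lnI = lnI + 1        && nothing but the increment - the cost is all interpretation
ENDFOR
? SECONDS() - lnStart, "seconds for ten million increments"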

It's not a big problem, as business apps seldom need to calculate much more than a sum of some order item prices ;). Well, well. Sometimes I'm soo bored...

Bye, Olaf.

 
Depends on what you're doing. If it's mostly data processing ... you'll never see the advantages of a faster core language execution engine. If it's lots of logic or memory variable processing, then yes, it would be much faster.

I think having the best of both worlds would be ideal. An xbase-like language able to also include blocks of C code for certain computations, or to call DLLs, would be desirable. In fact, it would be nice to create a framework which allowed xbase-like source code to be converted to C code, and have the requisite FLL and DLL calls automatically generated using templates, but with the variable processing done in C.

Shouldn't be too difficult. Would definitely be faster. In some applications, it might even be more desirable.

Such an implementation would make a nice "optimizing xbase compiler" language.

Best regards,
Rick C. Hodgin
 
Yes, that sounds good.

For a short while I had the idea of an FLL implementing its own memory object m, to hold C-style variables. But what I fail at is an easy way to make these new variables accessible from the VFP side.
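
What I had in mind, purely hypothetically (cvars.fll and all of its functions are invented; only SET LIBRARY is real syntax):

Code:
* Hypothetical FLL keeping C-style variables on its own heap; every name
* below except SET LIBRARY is invented for illustration.
SET LIBRARY TO cvars.fll ADDITIVE

LOCAL lnI
CVarSetInt("counter", 0)          && create a plain 4-byte int on the C side
FOR lnI = 1 TO 1000000
    CVarInc("counter")            && increment without touching a VFP variable struct
ENDFOR
? CVarGetInt("counter")

But as every access is a library call anyway, this is exactly where the fine granularity breaks down.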

Being able to "include blocks of C code for certain computations", or "a framework which allowed xbase-like source code to be converted to C code, and have the requisite FLL and DLL calls automatically generated using templates", would be more realistic options, as they are not as fine-grained as single variables.

A new xbase-like language could most certainly implement its variables in a more optimal way internally.

Bye, Olaf.
 