
SQL varchar(max) DataType property for field is invalid

Status
Not open for further replies.

Nro

Programmer
I have a very annoying problem. I'm using VFP 9 as a front end and, for the first time, SQL Server as the back end. I have been developing in Fox for 34 years now, and in .NET (with SQL Server).

The problem is with the SQL varchar(max). I can create a cursor with a CursorAdapter. I can read the SQL table and it will translate the SQL varchar(max) into a memo field in VFP.

But if I try to create a parameterized remote view (CREATE SQL VIEW (lcView) REMOTE CONNECTION (lcMainODBCDataSource) AS SELECT …) and then open it, I always receive "DataType property for field 'Aa_Mem01' is invalid".

I know I can use the CursorAdapter to do CRUD, but all my classes are built around remote views (TABLEUPDATE() etc.).

I’m using ODBC Driver 17 for SQL Server in both cases.

Thanks for your help.

Nro
 
You can map fields and their types in RVs too.


Borislav Borissov
VFP9 SP2, SQL Server
 
You know the problem and the fix. The CursorAdapter overcomes the problem of inferring the wrong data type C(0) for varchar(max) by overriding the result cursor structure with its own definition: you set UseCursorSchema = .T. and define the result cursor schema, telling ODBC what to convert to.
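A minimal sketch of that CursorAdapter approach. The table name "SomeTable" and the Aa_Id key field are placeholders, not from the original post; only Aa_Mem01 and lcMainODBCDataSource come from the thread:

Code:
lnHandle = SQLCONNECT(lcMainODBCDataSource)
loCA = CREATEOBJECT("CursorAdapter")
loCA.DataSourceType = "ODBC"
loCA.DataSource = lnHandle
loCA.Alias = "crsDemo"
loCA.SelectCmd = "SELECT Aa_Id, Aa_Mem01 FROM SomeTable"
loCA.UseCursorSchema = .T.
loCA.CursorSchema = "Aa_Id I, Aa_Mem01 M"  && force varchar(max) into a memo field
loCA.CursorFill()

The CursorSchema string is what prevents the wrong C(0) inference: the result cursor gets the types you declare, not the ones the driver reports.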

I think remote views offer the same by other means. When you look at the SQL mode of defining remote views, you see a list of [tt]DBSetProp(ThisView+".fieldname","Field","DataType",[highlight #FCE94F]"C(0)"[/highlight])[/tt] calls.

You'll find this wrong type inference here, too, and you can override it similarly: just change that to M and the view will work.
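Applied to the view from the original post, the override might look like this (the field list after SELECT is an assumption; the view definition itself is stored fine, it's only opening it that fails, so the DBSetProp() goes between creating and using the view):

Code:
CREATE SQL VIEW (lcView) REMOTE CONNECTION (lcMainODBCDataSource) ;
   AS SELECT Aa_Mem01 FROM SomeTable
DBSetProp(lcView + ".Aa_Mem01", "Field", "DataType", "M")  && override the inferred C(0)
USE (lcView)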
I also remember guidelines and recommendations from MS about ODBC driver choice for legacy applications like VFP applications, not only because of the 32-bitness of VFP. Some drivers are especially targeted at PHP, others at legacy clients. I can't find the guidelines or documentation right now, but there should be something along these lines somewhere from MS.

Try it. As far as I know, ODBC drivers built to the 3.x ODBC standard are also backward compatible with the way VFP's remote views, SQLEXEC(), etc. use ODBC drivers via the 2.x ODBC standard definitions. So despite certain recommendations by MS, you should be able to use any newest driver. I think there are further quirks lurking, but once this works, I think the worst problem is solved.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Ok, field mapping was the problem.

In my program, just before opening the remote view, I mapped the varchar(max) as C(255). I just changed it to be mapped to M:

lcViewField = lcView + ".Aa_Mem01"  && DBSetProp() expects "viewname.fieldname"
DBSetProp(lcViewField, 'Field', 'DataType', "M")

And it worked as expected.

Thanks for your input.

Nro
 
Olaf's solution should work well with existing RVs. I have used it myself. In addition, you can execute:

[tt]CURSORSETPROP("MapVarchar", .T., 0)[/tt]

This establishes a default for all new RVs created in the current session, which could save a bit of time if you are about to create many views. The setting is not persistent across sessions.
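For example, setting the session-wide default once before creating the views might look like this (a sketch; "SomeTable" is a placeholder, and the property name follows the VFP 9 CURSORSETPROP() documentation):

Code:
* Work area 0 makes this the default for all new cursors in the session
CURSORSETPROP("MapVarchar", .T., 0)
CREATE SQL VIEW (lcView) REMOTE CONNECTION (lcMainODBCDataSource) ;
   AS SELECT Aa_Mem01 FROM SomeTable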

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
C(255) simply exceeds the maximum width for char fields in VFP.
That doesn't even relate to how ODBC drivers wrongly infer varchar(MAX) as C(0) fields.

Now, don't ask me why the limit is 254 and not 256, as you would expect from knowing your powers of 2, or 255, if for some strange reason it needed to stay just below 256. Why 254?
I might know it ;), but the simple reason is, it's documented in the topic "Visual FoxPro System Capacities".

Bye, Olaf.

Olaf Doschke Software Engineering
 
Well, traditionally, if a field (or a string or a variable or whatever) is limited to 256 bytes, it suggests that the system stores a count of the actual number of bytes somewhere, and that count is itself held in eight bits, which gives you 2^8 (= 256) possible values for the length. But if you also allow for a length of zero, the maximum length becomes 2^8 - 1 (= 255). And if the count is itself part of the field, the effective length becomes 254.

But I don't think that's the answer. And why should a character field have a limit of 254 while open Browse windows or characters in a report expression have a limit of 255?

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
It's actually not at all certain. But everything points to storage block sizes.

The hints don't really start in the DBF file structure but in index file structures. When you look into them, you find structures with 512 bytes or 1024 bytes length:
For example, the IDX header has 512 bytes up to "key expression pool length" and then 512 more bytes for "key expression pool (uncompiled)". And in the same topic node records have 512 bytes length.

When you define a DBF having just a c(254) field, the RecSize is 255. Still room for 1 more byte, or actually 257 more bytes, with 512-byte blocks in mind.
The RecSize of different record structures is puzzling if you don't know the details; here are some hints:

Code:
Create Cursor crsTest (ctest c(254))
? Recsize()  && 255: 254 bytes plus 1 byte for the deleted flag
Create Cursor crsTest (ctest c(254) null)
? Recsize()  && 256: one extra byte holds the null flag bits
Create Cursor crsTest (ctest c(254) null, ctest2 c(254))
? Recsize()
Create Cursor crsTest (ctest c(254) null, ctest2 c(254) null)
? Recsize()  && still only one null-flag byte: it covers up to 8 nullable fields

If you ask me, the limit wasn't set to the best value for achieving record sizes that align with storage device block sizes. If you think of the simplest record, having only a single char field, with 512-byte blocks in mind, the limit should be 510. And indeed they could have made the limit much more open and left it to the developer to optimize record structures for storage block size alignment, since that strongly depends on the record structure, which will usually have at least one more id field.

And even in the 80s the usual storage wasn't drum memory. It's hard to reconstruct what was considered in the definition, also because much of it was inherited from dBase file structures.

And when you look at the details, the file structure definition is wasteful with a full byte for the deleted flag. Especially when you know that the record only grows by 1 more byte for every 8 nullable fields, so a nullable field only takes 1 bit of a null-flags byte, they could have used the deleted-flag byte to store the first 7 null-flag bits alongside one bit for the deleted state. So, question after question:

Why so economical with null flag bits but not the deleted flag?
Why not orient to 512-byte blocks and allow max length of 510 bytes per char field?
Why not orient to a record structure that also has a 4-byte integer id/key field and lower the limit to 506?
Or why not leave it to the developer to align RecSize to device block sizes and define limits, which look more intuitive to developers, like 1024 or even 64kb limit?

Anyway, it is what it is.

So I think the only final answer is because that's what's documented in system capacities. Sorry for wasting your time.

In the end, there is a point in saying that nobody needs more than 67 characters in a string. "Pippilotta Viktualia Rullgardina Krusmynta Efraimsdotter Långstrump" has only 67 characters. See?
I close my case.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Olaf, I was referring to the "Options / Remote Data / Use memo for fields >= 255" setting. In SQL Server I can have a varchar(500) and the remote view will map it as a memo. If I have a varchar(max) in my SQL Server table and I don't manually map it (DBSetProp) as a memo, VFP doesn't recognize it.

Thanks again for your input.

Nro

 
Nro,

It's totally clear SQL Server doesn't have the same restrictions, but you were saying you mapped the SQL Server varchar(max) to C(255), weren't you? You can't, because that breaks a VFP limit.
And the automatic mapping will be varchar(max) to C(0), erroneous type inference. Mike contributes that this wrong mapping can also be avoided with CURSORSETPROP("MapVarchar", .T., 0).

But C(255) isn't a possible type in VFP, no matter whether it's for mapping to an SQL Server column or not. It's simply beyond what VFP allows in a char field.
And therefore that has absolutely nothing to do with ODBC driver problems. Well, at most indirectly: when your manual override for the wrong C(0) is C(255), you assume C(255) is valid in VFP, but it isn't. And M is the much more natural manual override anyway, as it can also exceed a few thousand bytes if necessary. In theory you can have one record with a memo value of about 2 GB in size. But only one record, as you then hit the size limit of the FPT file.

Anyway, yes, it is an ODBC problem to map varchar(max) incorrectly, and C(0) is a strange choice, but it isn't an ODBC driver problem when your manual override isn't valid either.

It's totally forgivable to assume C(255) would work. Nobody would assume char fields are limited to a width of 254 bytes. If I didn't already know, I wouldn't assume it's limited by anything below at least a KB either. It's really something you can't know until you learn it.

You'd realize C(255) isn't a driver-related problem if you used that DBSetProp override of C(255) in any other remote view, with any ODBC driver that doesn't have the varchar(max) mapping problem at all, like the legacy "SQL Server" driver (sqlsrv32.dll). It's simply not related to ODBC; it's just exceeding VFP limits. You will also need to map any varchar(255) or longer to M, or to C(254) or V(254) if you don't care about value truncation and are more interested in keeping the aspect of a max length. But with varchar(MAX) it's just no good idea to map to a max-size field type of VFP anyway; you look for something that, like "MAX", has a maxed-out limitation, i.e. no practical limitation, and that's memo.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Let me put that separately:

Even aside from ODBC driver problems, you have a mapping problem of fields VFP won't be able to map because of its limitations.

And it's not a general type mapping problem, it's a length limitation problem coming from VFP.

If SQL Server has a char(1000) or varchar(1000) field, you can pick to map this to a shorter version of char/varchar in VFP, i.e. C(254) or V(254), or you map it to M.
Both mappings have one problem or another. The shorter VFP types will truncate values longer than 254 bytes coming from SQL Server; your VFP application won't be able to read the full values stored there. So it might already be a good idea to use M not only for varchar(MAX) but for any potentially longer character field type. The only problem of mapping this to M is that the VFP frontend now lets users write more than 1000 characters, which is then truncated on the SQL Server side with its max size definition of 1000 for the field.

That makes it a little more complex than just a totally missing data type. VFP does have char and varchar, just with other width limitations. So there is no 100% mapping possible for SQL Server fields exceeding VFP capacity limits, with the very helpful exception of the very good match of varchar(max) to memo.

So when you design a SQL Server backend for VFP, you can use varchar(max); there's not even an automatic mapping problem once you know CURSORSETPROP("MapVarchar", .T., 0) avoids mapping to the completely out-of-bounds and clearly invalid C(0). After all these years I have still not found anyone pointing out how that mapping is made: whether it's more a fault of the ODBC driver or of VFP's layer of talking to ODBC drivers, mistaking something ODBC decides for C(0), or whether it's VFP deciding on C(0) for some incomprehensible reason.

I mean, it's not unusual to use 0 to mean "unbound": look at how Textbox.MaxLength is defined, 0 is its default value and means no max length. VFP should just know by itself that C(0) makes less than no sense. Well, okay, it just makes no sense. "Less than no sense" should be reserved for C(-1) or less.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Olaf said:
When you define a dbf just having a c(254) field, the RecSize is 255.

I assume that's because you need one byte for the deletion flag. That said ...

file structure definition is wasteful with the full byte for the deleted flag.

The whole DBF file structure is wasteful - especially the pre-Visual parts of it. The Numeric data type uses a whole byte for each decimal digit, for the decimal point and for the sign. The Logical data type uses a whole byte for what could be held in a single bit. And so on.

I guess the reason was that the structure was designed to be easy to build a language around it. And both the structure and the language were the work of a lone programmer, working in his spare time, using an 8-bit microcomputer with 48K of RAM which he had assembled himself. We are living with the consequences still.

Mike



__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Mike, very good point about the N() field storing decimals as characters. I have no idea how that loads into memory variables. Today all VFP numeric variables are double floats; the only place VFP truly has integers is in integer fields. But I see merit in the N() field definition as theoretically keeping decimal precision.

Your observation about the RecSize is correct, yes, one byte always comes from the deleted flag. So when using the "full feature set" of what char fields allow, i.e. also being nullable, you end up at the "magic" 256 bytes for a C(254) record. I mean, there's one more thing you can define for the whole table: the codepage. But that doesn't change the byte length of the field and record; the codepage decision ends up in the DBF header anyway, neither in the actual field storage nor in the field definition.

That a table with a single nullable char(254) field has a RecSize of 256 bytes is no proof this was the reason for the limit, though, maybe not even an indication.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Okay, if nullable fields only came along later, this all just becomes a coincidence. Maybe it has no real reason; maybe it's just a joke.
But whatever the reason, you only get "DataType property for field is invalid" for C(255) because C(255) is invalid in itself, not through a failing conversion from varchar(max) to C(255) or anything along those lines.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Allo Olaf and Mike

I'm sorry I was not clear enough: my intent was not to use a character field of length 255 (or 254…). I try to avoid using the VFP character type because it pads the field with spaces, which is very annoying when you try to concatenate, find, seek, etc. ALLTRIM() is my friend, but it needs more coding.

Like I said in my first post, I am familiar with VFP and VFP tables, and with SQL Server and .NET, but I had never mixed them. So when I tried to create a remote view with the VFP QBE and it mapped the varchar(max) SQL field as C(0), I didn't understand. I think it is a QBE bug.

Still, you all were very helpful, and I'm always amazed that you guys try to get the best out of VFP, even as fewer and fewer people use it.

Thanks again and have a nice day

Nro
 
Okay, Nro,

that's now very clear. It leaves me wondering why you tried C(255) anyway:
Nro said:
In my program, just before opening the remote view, I mapped the varchar(max) as C(255)

But it was already clear you mapped it to M in the end, in the same post; I noticed that, no problem. I was still confused why you first tried C(255) if you didn't aim for it.
The rest was just chatter and tech talk about VFP.

And I don't think C(0) comes from query by example; this is a well-known ODBC bug happening especially with some newer ODBC driver versions. The legacy SQL Server ODBC driver (SQLSrv32.DLL) does map it correctly to M that way, but VFP can also be guided, as we showcased here, I think, in enough detail. That also suggests this isn't a VFP-internal bug, as VFP can map it correctly using a good driver.

But my personal suspicion is the "guilt" is split: ODBC itself changed, and VFP's ODBC usage may be outdated in some aspects. You find assertions that ODBC 3.x drivers are compatible, yet not ideal, when used through the ODBC 2.x interface. So there is some downward compatibility with drivers. And indeed we could investigate a bit deeper, as you can determine which ODBC functions VFP calls in a driver and see whether that points to issues newer drivers have with their downward compatibility being a bit imperfect. As we know VFP behaves differently when MapVarchar is set, there's a chance you'd find a difference in how the ODBC driver is addressed with this set to .T. or .F.
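One quick way to see what a given driver's mapping looks like from the VFP side, a diagnostic sketch (the connection string is a placeholder; adjust Driver=, Server=, and Database= to your environment):

Code:
lnH = SQLSTRINGCONNECT("Driver={ODBC Driver 17 for SQL Server};" + ;
      "Server=.;Database=tempdb;Trusted_Connection=Yes")
SQLEXEC(lnH, "SELECT CAST('abc' AS varchar(max)) AS m1", "crsProbe")
AFIELDS(laF, "crsProbe")
? laF[1,1], laF[1,2], laF[1,3]  && name, type, width: shows what VFP inferred
SQLDISCONNECT(lnH)

Repeating this with different drivers (and with MapVarchar toggled on and off) would show which combinations produce the bad C(0) inference and which map cleanly to M.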

Sorry for the broad discussion about C(255), but you brought it up, even though you said at the same time you had already changed it to M. I still wondered why you tried it.
Well, case closed.

Bye, Olaf.

Olaf Doschke Software Engineering
 