
SQL: Too many columns referenced (Error 1841)

Status
Not open for further replies.

montypython1

Technical User
Jan 12, 2005
187
US
Greetings,

I need to import some raw data (CSV formatted) into a VFP table, but my raw data contains about 280 fields (or columns). I am able to import the first 255 fields without any problem, but I receive the error message
SQL: Too many columns referenced
when I try to import more than 255 fields. I still need some of the data within the next group of data (fields 256 - 280).

Is there a way to append data from a CSV file, starting at a specific field position (ex: field #256)?
Then, I could have 2 separate tables for the raw data, and combine just the fields that I need into my new working table (which would contain only about 150 fields).

Thanks,
Dave
 
No, you can't do that; see thread184-1786753.

So either import the file to XLS as suggested there and get your data via OLE Automation (as you also won't be able to APPEND FROM an XLS), or query the XLS via the Excel OLE DB provider, or read the CSV into a SQL Server table and query that, or go for parsing the CSV file yourself.

The simplest option surely is using SQL Server: query field1-255 into one cursor and field256-280 into a second. (Or split at 254: when any field is nullable, the VFP field limit drops by one.)
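For the SQL Server route, the two queries could look roughly like this from the VFP side. This is a sketch only: the connection string and the staging table and field names (ImportRaw, field1..field280) are assumptions, and it presumes the CSV has already been bulk-loaded into that SQL Server table (e.g. with BULK INSERT):

Code:
* Sketch only: connection string, staging table name and field names
* are assumptions; the CSV is presumed already loaded into ImportRaw.
Local lnHandle
lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=.\SQLEXPRESS;" + ;
   "Database=Staging;Trusted_Connection=Yes")
If lnHandle > 0
   * Fields 1-255 into one cursor, 256-280 into a second
   SQLEXEC(lnHandle, "SELECT field1, field2 /* ..., */ field255 FROM ImportRaw", "crPart1")
   SQLEXEC(lnHandle, "SELECT field256 /* ..., */ field280 FROM ImportRaw", "crPart2")
   SQLDISCONNECT(lnHandle)
EndIf

Once both cursors are open, you can join or copy just the fields you need into your 150-field working table.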

Bye, Olaf.

Olaf Doschke Software Engineering
 
The error message indicates you are using SELECT - SQL to import the data. You could try using APPEND FROM instead. You will still be limited to 255 fields, but you can use the FIELDS clause to specify which fields you want to import into each of two separate tables.

In other words, something like this:

Code:
SELECT TargetTable1
APPEND FROM MyCSV.csv FIELDS F1, F2, F3, ...., F255 TYPE CSV
SELECT TargetTable2
APPEND FROM MyCSV.csv FIELDS F256, F257, ..., F280 TYPE CSV

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
I just tried that, and you do get data into the fields you specify, but reading of the CSV file still starts from the first column. That is, the FIELDS clause specifies target fields in the DBF or cursor you append to, not source fields in the CSV. Even though the CSV file is parsed with the first line interpreted as field names and data only taken from row 2 onwards, you don't match CSV fields to DBF fields that way.

Code:
Local lcFields, lnPos, lcFields2, i
lcFields = ''
For i = 1 To 510
   lcFields = lcFields + ',f' + Transform(i) + ' c(254)'
   If i = 255
      * First 255 field definitions -> cursor c1
      lcFields = Substr(lcFields, 2)   && strip the leading comma
      Create Cursor c1 (&lcFields)
      lnPos = Len(lcFields) + 2        && start of field definition f256
   EndIf
EndFor

* Field definitions 256..510 -> cursor c2
lcFields2 = Substr(lcFields, lnPos)
Create Cursor c2 (&lcFields2)

* Write a test CSV: header row f1..f510, then one data row with the same values
lcFields = Strtran(lcFields, " c(254)", "")
StrToFile(lcFields + Chr(13) + Chr(10), 'd:\temp\data.csv', 0)
StrToFile(lcFields, 'd:\temp\data.csv', 1)

lcFields2 = Strtran(lcFields2, " c(254)", "")
Select c1
Append From 'd:\temp\data.csv' Type csv
Select c2
Append From 'd:\temp\data.csv' Fields &lcFields2 Type csv
End result: value 'f1' of CSV field f1 is appended to DBF field f1 by the first append and to DBF field f256 by the second append.
APPEND always starts from the left in the CSV file. Even if it did match by name, it would need a file with field names in row 1, and often enough CSV files are not what VFP defines as CSV, just delimited pure data. Besides, VFP's CSV handling has a bad specification in regard to quote-delimited field values containing carriage returns, but that's a topic of its own.

I'd really go for CSV import into another DB or Excel because they handle CSV better than VFP anyway.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Hi Olaf and Mike,

Thank you both for your guidance.
I am very interested in utilizing the APPEND FROM xyz FIELDS (fieldlist) as you suggested.

Mike ...
If I understood correctly, since my data has column headers in row#1, I should be able to select the second table, and then append the remaining fields using FIELDS (fieldlist, ..., ... ). Does this require my data's column headers to match the field names within VFP?

Olaf ...
I agree with you that it would be easier to utilize another DB. Unfortunately, in this situation, I don't have access to another DB. My raw data does indeed contain field names, but some of these field names contain characters that VFP doesn't like. In the past, when using APPEND FROM, my raw data is simply added sequentially (as you mentioned).

Again, thank you both for your suggestions.
I will try them out when I get to my office in the morning.

Thanks,
Dave Higgins
 
Dave, it won't work out, even if the field names in the CSV are also valid in VFP. So if you neither have any other DB nor Excel you'll need to end up with file parsing.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Dave, keep in mind that the limit of 255 is the maximum number of fields in the table. It makes no difference which method you use to populate the table.

For a different approach, do you have any control over the process which creates the CSV? If so, that might be where the solution lies. Can you arrange for that process to either eliminate any unused fields or break its output into two separate files?

Or, is there some way of uniquely identifying any particular field within the CSV? For example, let's suppose that field 100 always contains a dollar sign and no other field does. In that case, you could use FOPEN() and FREAD() to read each line in turn into a memory variable. Then use STREXTRACT() to extract the first 99 fields; use ALINES() to get those fields into an array; then APPEND FROM ARRAY to get the fields into your table; then repeat for the remaining lines; and then repeat for fields 100 to the end (into a different table).

Of course, the chances are that you don't have exactly one field that contains a dollar sign. I was just giving that as an example. But there might be some other characteristic of the data that you could exploit in a similar way.

Mike



__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
On reflection, there's a better way, one that doesn't depend on the file having any special characteristics.

1. Use FOPEN() and FREAD() to read the first line of the file into a variable (call it x).

2. Use [tt]ALINES(ArrayName, x, 0, ",")[/tt] to copy the fields to an array. Each row of the array will then contain one field from the first line of the file. This is not subject to the limit of 255 fields.

3. Append a blank record to your table, then copy the first 255 elements of the array to that new record. You can use REPLACE for that.

4. Repeat step 3, but this time copy the remaining fields to a different target table.

5. Repeat for each line in the CSV.

Going further, if there are 25 or more fields that you know you don't want, simply refrain from copying them from the array to the table. That way, you will be able to eliminate the second table entirely.
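Putting the steps above together, a minimal sketch (using FGETS() for line-by-line reading rather than raw FREAD()). The file name, table names and field names here are all assumptions, and it assumes no quote-delimited values containing embedded commas or line breaks:

Code:
* Minimal sketch only: file name, table names and field names are assumptions.
* Assumes Table1 (character fields F1..F255) and Table2 (F256..F280) exist,
* and that no value contains embedded commas or line breaks.
Local lnHandle, lcLine, lnCount, i
Local laFields[1]
lnHandle = FOPEN("data.csv")
If lnHandle < 0
   ? "Could not open data.csv"
   Return
EndIf
= FGETS(lnHandle, 8192)   && skip row 1, which holds the column headers
Do While Not FEOF(lnHandle)
   lcLine = FGETS(lnHandle, 8192)              && one CSV line into a variable
   lnCount = ALINES(laFields, lcLine, 0, ",")  && split on commas; not limited to 255
   Select Table1
   Append Blank
   For i = 1 To Min(lnCount, 255)
      Replace (Field(i)) With laFields[i]      && elements 1..255 into Table1
   EndFor
   Select Table2
   Append Blank
   For i = 256 To lnCount
      Replace (Field(i - 255)) With laFields[i]  && the rest into Table2
   EndFor
EndDo
= FCLOSE(lnHandle)

Note that FGETS() reads at most 8,192 bytes per call, so a 280-field line wider than that would need a FREAD() loop instead.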

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
There's a good idea in trying to get this to run with a single cursor, if you know which fields you don't need, but it's still not a working idea with APPEND. The maximum 255 fields you specify will be populated from the CSV starting at the first column. Skipping field names in the APPEND FIELDS clause doesn't skip columns within the CSV line, so you still never get to the trailing fields of the CSV.

If you can't use another DB or Excel as a helper, you have to go for parsing. Yes, that means low-level file functions. It's easy as long as there are no multiline values; if there are, you have problems with APPEND too, anyway.

Bye, Olaf.

Olaf Doschke Software Engineering
 