
Why am I getting a Type Conversion Error when importing text into a text field?

Status
Not open for further replies.

keun (Technical User), Jul 15, 2005
This is making me a little bit batty. I am getting to the point where I am about to blame the computer.

I have a really simple table that I am trying to import by appending the data to an existing table. The existing table is set up with the correct fields. I have one field that is causing problems. It has strings that contain numbers and also strings that contain text. Things like 897069 and C0082632. The fields with the "C" in them are kicking up a Type Conversion Error.

I isolated two records with values that cause errors and when I import them on their own I have no problems. But when I try to import the whole file I get errors - including on the two records that import fine on their own.

The file is about one million records. Is there something about the volume that is causing the errors?
 
Have you checked to see if there are any hidden characters in those strings?
 
Also, I assume you can import other records that have alpha characters? And are you appending using a query or code (e.g. RS.AddNew)?
 
The file is a comma delimited text file that I am trying to bring into Access and append to an existing table.

What is crazy is that I can import the file into a new table by having the import process create the table, and that works fine. Once it is in Access I can use an append query to append that table's rows to the table I originally tried to import/append to.

Get this: If I import the text file into Access and create a new table in the process, I get zero errors. But if I then import that SAME DATA into the same table that was just created I get the validation errors on the field.
 
I received these files as CSV. I need to replace the first row with different values. To do this I open them in Notepad++, make the edit, and save as .txt. This .txt file is giving me problems. If I open the CSV in Excel, make the change, and save as CSV, I get a clean import.

Riddle me that. I do not want to use Excel.
 
When I use Notepad++ to edit the first line of the CSV and then save as TXT, I get type conversion errors.
When I use Excel to edit the first line of the CSV and then save as CSV, I get no errors.

So... type conversion error problems.
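For what it's worth, the Notepad++/Excel difference can be sidestepped by scripting the header replacement yourself and writing the file back out with every field explicitly quoted and with CRLF record endings. A minimal Python sketch; the sample data and the new header names are placeholders, not from the real file:

```python
import csv
import io

def rewrite_header(csv_text, new_header):
    """Replace the first row of some CSV text, then re-emit every
    field wrapped in double quotes with CRLF record terminators."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows[0] = new_header
    out = io.StringIO()
    writer = csv.writer(out, quoting=csv.QUOTE_ALL, lineterminator="\r\n")
    writer.writerows(rows)
    return out.getvalue()

sample = "old1,old2\r\n897069,foo\r\nC0082632,bar\r\n"
print(rewrite_header(sample, ["ItemNo", "Desc"]))
```

For real files you would read the whole file in, pass the text through, and write it back out; the point is that the quoting and line endings are then under your control rather than the editor's.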
 
Are you using an import specification that explicitly specifies the datatype of the problem field as text?
 
Or, if this is not code... are you specifying the datatype on import? That dialog is also where you would save an import specification to use with your code; note the name you give it.
 
I am not using code, but I am willing to go that route if there is a way to control this stuff.

I am using External Data -> Text File which opens the Get External Data dialog. I select the option to "Append a copy of the records" and I append to a table with the field datatypes specified.
 
Huh... never noticed that it doesn't give you the option to specify datatype... Then again I usually just import programmatically.
One last thought, the field in the target table is in fact text, right (I suspect it is)?

The issue I think you're experiencing is that Access only scans x rows to guess at the datatype; if you don't have text near the top rows, it guesses wrong. I'm also guessing your CSV data does not have a double-quote qualifier for text. So if you hit the Advanced button and change the text qualifier, it might work (when Excel saves a CSV it does use double quotes as a text qualifier, which would disambiguate it for Access).


Otherwise...
If you import to a NEW table and hit the advanced button you can specify the datatypes...
Then use the Save As button there to save your Specification (copy/note the name)

Then use DoCmd.TransferText to import your data using the specification defined above... The help topic should get you there.


Related to the guessing, there is some sort of scan-rows registry key for the Excel import (ISAM). I'm guessing it exists for the Text ISAM as well, but you'd have to change it on every machine, or every time you get a new computer, which is why I suggest the code solution.
 
Worth a mention...

Since this is about text file import issues: Access has a fit if each record does not end with a CRLF (carriage return + line feed). Obviously you could do the search and replace with Notepad++, but a quick and dirty way is to open and save with WordPad.
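If you'd rather guarantee the CRLF endings in the same script that edits the header, a quick sketch; this just brute-forces every line ending to CRLF regardless of what the file arrived with:

```python
def normalize_crlf(text):
    """Force every record terminator to CRLF, whatever mix of
    LF, CRLF, and bare CR the source file contains."""
    # Collapse CRLF and bare CR down to LF first, then expand uniformly,
    # so an existing \r\n never becomes \r\r\n.
    return text.replace("\r\n", "\n").replace("\r", "\n").replace("\n", "\r\n")

result = normalize_crlf("a\nb\r\nc\r")
print(repr(result))  # every record now ends in \r\n
```

The collapse-then-expand order matters; a naive `replace("\n", "\r\n")` would double up the CR on lines that were already correct.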
 
Just a question... which row has the first instance of the alpha characters? When Access imports CSV, it looks at the first 25 rows to determine the data type of the field.

See the following for some other pointers as well:
Data types
To avoid errors during importing, make sure each source field contains the same type of data in every line. Access scans the first 25 rows of a file to determine the data type of the fields in the table. We highly recommend that you make sure that the first 25 source rows do not mix values of different data types in any of the fields. Also make sure that non-text values that are to be treated as text values are enclosed in single or double quotation marks.

If the source file contains mixed values in the rows following the 25th row, the import operation might still skip them or convert them incorrectly. For troubleshooting information, see the section Troubleshoot missing or incorrect values, later in this article.
 
I understand that Access scans the rows to determine datatype on import, but if that logic also applies to import/append, then it means this (and in my problem file the first 25 rows of the text column are indeed all numbers):

My column, which has mixed numbers and text, is being loaded into a column that already contains mixed numbers and text. Access scans my data, identifies the source column as numbers, and proceeds as if the destination were number, even though, as stated, it is text and already contains a bunch of text. It writes the number data, but when it gets to the text it throws up an error. Not because the destination is number, but because the scan identified the source as numbers. That would be odd but would not surprise me. And it makes sense in light of the above link.
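To make that sampling behaviour concrete, here is a rough sketch of the guessing logic described above. This is purely illustrative of the "first N rows" idea, not Access's actual sniffer:

```python
def guess_column_type(values, scan_rows=25):
    """Naive type sniffing: look only at the first scan_rows values,
    the way Access samples a text file, and call the whole column
    numeric if every sampled value parses as a number."""
    try:
        for v in values[:scan_rows]:
            float(v)
        return "number"
    except ValueError:
        return "text"

# Text first appears at row 31, well past the 25-row sample window,
# so the sniffer never sees it and guesses wrong.
col = [str(n) for n in range(30)] + ["C0082632"]
print(guess_column_type(col))  # number

# Put an alpha value inside the sample window and the guess flips.
print(guess_column_type(["C0082632"] + [str(n) for n in range(30)]))  # text
```

Which is exactly why the dummy-record trick mentioned later in the thread works: it plants a text value inside the scan window.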

But I do not have to like it.
 
That's about what I said, though I didn't know the row count... I expect an import specification in code should import correctly regardless.
 
No comfort to you, but years ago we had a similar problem... I can't remember the exact solution, but possibly we placed a dummy record at the front to force the correct field type then after the import, added a little code that would delete the dummy record. Or.... maybe we created an 'Import/Export' specification? Age = Ugh!
 
BTW, when you OPEN a text file in Excel, DATA CAN CHANGE, because of assumptions that Excel makes relating to TEXT that may look like a date, a time or scientific notation, causing the TEXT to be converted to a NUMBER that in no way resembles the TEXT value.

For instance
[pre]
TEXT    RESULT
171E3   171000
1/2     41641
12:     0.5
[/pre]
The first is deemed to be scientific notation, the next January 2 of the current year, and lastly the time of 12:00 PM.
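Skip's examples are easy to reproduce outside Excel. "171E3" is a valid scientific-notation literal in most parsers, and Excel date serials count days from December 30, 1899; the 2014 date below is only an assumption to match the 41641 serial in the table:

```python
from datetime import date

# "171E3" survives a numeric parse, so Excel happily converts it.
print(float("171E3"))  # 171000.0

# Excel's date serial is days since 1899-12-30, so "1/2" read as
# January 2, 2014 becomes serial 41641.
serial = (date(2014, 1, 2) - date(1899, 12, 30)).days
print(serial)  # 41641

# "12:" is read as 12:00 PM, stored as a fraction of a day.
print(12 / 24)  # 0.5
```

The takeaway is the same as the import-spec advice earlier in the thread: never let a tool guess types for a column you know is text.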

Skip,
[sub]
[glasses]Just traded in my OLD subtlety...
for a NUance![tongue][/sub]
 