Tek-Tips is the largest IT community on the Internet today!

read *.TXT files into Dbase5

Status
Not open for further replies.

juergenk

Technical User
Jun 11, 2003
4
US
I try to import a *.TXT file delimited with ";" into a dBase *.DBF. It works fine with APPEND FROM file.txt DELIMITED WITH ", but the field delimiters in this *.TXT are set to ";". Who can help??
 
I've not used dBase 5, but earlier versions used the command SET DELIMITER TO to tell the APPEND command what the delimiter was. So, try something like this:

SET DELIMITER TO ";"
APPEND FROM <textfile> DELIMITED

Another way to "suck" a text file into a table is with APPEND FROM <filename> TYPE SDF. A text file is said to be in SDF (System Data Format) when there is no delimiter AND each line of text represents one table record.

I've found that SDF is soooooo much faster than any other method of getting a text file into a dbf file.

There's always a better way...
 
Thanks for the response! I tried it; it ran without a syntax error, but the result was not as expected (the data was not placed into the proper fields).
APPEND FROM xyz.txt TYPE SDF also does not work here, because the fields are not of fixed length.

The original program is listed below.
Any other ideas??

*** import with Delimiters
use men_5991y.dbf
zap
set delimiters to ";"
set delimiters on
append from C:\men5991y.txt delimited
set delimiters to default
 
What results are you getting?



There's always a better way...
 
Just a quick thought...

If the default delimiter of dBase 5 is " ", then you could do a search and replace on your text file -> replace ";" with " ".
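The search-and-replace could also be scripted outside dBase before importing. Here is a minimal sketch in Python; the sample line is hypothetical, and the replacement target (a comma, since plain DELIMITED separates fields by commas, as discussed later in the thread) is an assumption — swap in " " if that is what your dBase version expects. Note that a naive replace will mangle data that legitimately contains the old or new separator character.

```python
def convert_delimiters(line: str, old: str = ";", new: str = ",") -> str:
    """Replace the field separator in one line of the text file.

    Assumption: the data itself never contains `old` or `new`;
    otherwise fields would be split or merged incorrectly.
    """
    return line.replace(old, new)

# Hypothetical sample line from a semicolon-delimited file:
sample = "john;doe;123 some street;somecity"
converted = convert_delimiters(sample)
# converted == "john,doe,123 some street,somecity"
```

Run over the whole file line by line and write the result to a new file, then APPEND FROM that file.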

There's always a better way...
 
I don't think SET DELIMITERS TO/ON/OFF has anything to do with appending records in dBase 5. The book says that's for @ SAY/GET.

Samples I tried:

Code:
APPEND FROM x.txt DELIMITED
APPEND FROM x.txt DELIMITED WITH "
APPEND FROM x.txt DELIMITED WITH _

The first and second sample commands above both expect each incoming field to be surrounded by double quotation marks (or not) and separated by commas. The third sample command expects each incoming field to be surrounded by an underscore (or not) but still separated by commas. Using DELIMITED is simply a way to tell the program to ignore certain characters surrounding each field, but it still seems to force the programmer into separating fields with commas.
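That parsing model (comma-separated fields, optionally wrapped in a quote character) can be illustrated with Python's csv module, which follows the same convention. This is a sketch of the described behavior, not a guarantee of how every dBase version handles edge cases:

```python
import csv
import io

# Mirrors DELIMITED / DELIMITED WITH ":
# comma-separated fields, optionally wrapped in double quotes.
quoted = io.StringIO('"john","doe","12345"\n')
rows = list(csv.reader(quoted))  # default quotechar is "

# Mirrors DELIMITED WITH _:
# still comma-separated, but fields wrapped in underscores.
underscored = io.StringIO('_john_,_doe_,_12345_\n')
rows2 = list(csv.reader(underscored, quotechar="_"))

# Both parse to the same three field values: john, doe, 12345
```

In both cases the wrapper character is stripped and the comma does the actual field separation, which matches the observation above that DELIMITED never changes the separator itself.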

Do you know whether there is a maximum length for these lines in the text file? Knowing that would help determine how to write a workaround using APPEND...TYPE SDF (by appending the lines into a large field and then programmatically parsing them into their own fields after the append).
 
Hi friends, thanks for the replies. I was struggling around with that and realised that dBase did indeed ignore the separators. At least I think that I have to read the file in with "SDF" and then separate each field. A bad job with 35 fields. Anyway, thanks a lot.
Juergen
 
Juergen,

If all 35 fields have known lengths/sizes (fld 1 = 10 chars, fld 2 = 25 chars, etc.), then SDF is the way to go. Here's an example of what a line of the text file would look like (each field is padded with spaces to its full width):

john      doe            123 some street     somecity            somestate           12345

the fields are:

fname = john (width = 10)
lname = doe (width = 15)
address = 123 some street (width = 20)
city = somecity (width = 20)
state = somestate (width = 20)
zip = 12345 (width = 5)

Note the spaces - if you count each character (including the spaces), you'll see that they match with each field spec. This is the beauty, and curse, of SDF. SDF doesn't care what the field is (numeric, char, date, etc). SDF just starts putting characters into the fields of a record. When all the fields are full, it starts a new record.

If you look at your text file with a DOS text editor and each line of text is exactly the same length, then you have an ideal file for SDF!

There's always a better way...
 
To clarify what tviman said, "When all the fields are full, it starts a new record": this does not mean that if a line being appended is too long for the record, extra record(s) are created to finish processing the line. No. If you have 20 text lines, you will get 20 records. Any excess data on the text lines will be truncated and "lost".

If the lines contain fields that are always of equal length, then you can very easily use APPEND...TYPE SDF with minimal processing. Simply place small fields in between each real field so that they end up holding the "junk" semicolons. Then, when finished, you can copy the table to another without the "junk" fields.

Code:
FieldName Type Width
Name_1    Char 30
JUNK_1    Char  1
Date_2    Char 10
JUNK_2    Char  1
Addr_3    Char 40
JUNK_3    Char  1
City_4    Char 40
JUNK_4    Char  1
Area_5    Char 40
and so on...

On the other hand, if the lines or fields are of variable length, you can still use APPEND...TYPE SDF. One way is to have one or more huge "raw data" fields as the first fields in the records, followed by the "real" fields. Since the maximum field size is 254 characters, if any text lines could be longer than that, you would need more than one such raw data field. Then parse the raw data (the AT() function is very useful) and, one by one, programmatically place each field's data into the "real" fields. When finished, you can copy the table to another without the "raw data" fields. (The other way is to parse the raw data and write it into a separate table with the valid fields; that way you don't have to copy the table to another to get rid of the raw data fields.)
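The parsing step described above (repeatedly locating the separator with AT() and peeling off one field at a time) can be sketched in Python. The find() call below plays the role of dBase's AT(); the sample line is hypothetical:

```python
def parse_raw(raw: str, sep: str = ";"):
    """Split a raw semicolon-delimited line into field values,
    the way a loop of AT() calls would in dBase.
    Empty fields (two adjacent separators) come back as empty strings."""
    fields = []
    while True:
        pos = raw.find(sep)          # dBase equivalent: AT(sep, raw)
        if pos == -1:                # no separator left: last field
            fields.append(raw.strip())
            return fields
        fields.append(raw[:pos].strip())
        raw = raw[pos + len(sep):]   # continue after the separator

result = parse_raw("john;doe;;12345")
# -> ['john', 'doe', '', '12345']  (the empty third field is preserved)
```

Preserving empty fields matters here, since the original poster noted that blank fields in his file appear as nothing but the delimiter.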
 
dbMark...

I beg to differ... when using APPEND... TYPE SDF, if the total number of characters per text file line is greater than the total length of the fields per record, the next record will contain what wasn't placed in the previous record. SDF DOES NOT TRUNCATE!

Let's say the total line length is 100 characters and the total of the field widths is 95 characters; the next record will then start with the last five characters of the previous text line. Think of it as "pouring" the text file data into an empty database. SDF will fill ALL FIELDS COMPLETELY - there will never be a field whose length is, for example, 10 characters that only gets filled with 5 characters! (Unless it's the LAST record of a table where the field lengths were not properly sized.)
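To make the "pouring" picture concrete, here is a small Python simulation of that claimed behavior: characters from a continuous stream fill each record's fields completely, and any overflow starts the next record. Note this models tviman's description only; dbMark's tests later in the thread report truncation instead, so verify against your own dBase version. Widths are illustrative:

```python
WIDTHS = [10, 15]  # two fields per record, 25 characters per record

def pour(text: str):
    """Fill records field by field from a continuous character stream;
    leftover characters spill into the next record."""
    records, i = [], 0
    while i < len(text):
        rec = []
        for w in WIDTHS:
            rec.append(text[i:i + w])
            i += w
        records.append(rec)
    return records

stream = "A" * 30          # one 30-char line poured into 25-char records
recs = pour(stream)
# Two records result: the second starts with the 5 overflow characters
# and its remaining field is left short (the "last record" exception).
```

The simulation also shows why getting the field widths right is so fiddly: a single miscounted width shifts every subsequent record's contents.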

This can be the curse of using SDF. It can take many attempts at getting the field lengths correct before reaching success. This is true ESPECIALLY when the text file is derived from a table where some of the fields were null, or otherwise empty. For this reason fields of variable length cannot be used with SDF.

However, once you've correctly established the field lengths, appending a 100 megabyte text file will take just a few seconds - literally!

There's always a better way...
 
tviman,
I have dBase 5.0 for DOS and I tested it before I wrote my prior post. That said, maybe I have something in my dBase environment SET differently than yours or a different version, because I appended type SDF with long lines into a table with small records and it did not wrap any extra lines. What can I say?

juergenk, just make sure you test these behaviors for yourself, as these helpful hints are just that, hintful helps.
 
Great to read all the comments; it shows that you are all experts with dBase.
Just to clarify: I am working with version 5.15.
Reading text files with SDF works fine for several files I import; it's also no problem to import up to 2.5 million records, though sorting stops at 1 million.
In this particular case the TXT fields are delimited by ";". The field contents are variable, max 24 characters, min 0. If 0 (i.e. blank), just the delimiter is there.
If I import the file into Excel - delimited with ";", as a standard function - it works. I also tested changing characters in the file with the edit function and trying different delimiters; dBase ignores them. The delimiters sit in the middle of the fields, randomly distributed (depending on the contents). With SET DELIMITERS TO ";" I realised that the DBF file shows the delimiter on screen, so I suspect this function only affects display/output, not import.

 