Hi,
Sorry to get wordy, but I needed to in order to explain.
I have a flat file that needs to be parsed. The file comes from a source that cannot be changed. It is fixed-length, 80 bytes per line. The data wraps, so part of a record can be on one line and continue on the next. Also, I only need characters 1-70; 71-80 are unused, so my data actually wraps after character 70. I created a DTS package to pull the data into an Access table (ADP), split into 2 fields: one of 70 characters and one of 10. All relatively simple stuff.

Now the good part. I wrote a cursor using a ton of SUBSTRING calls to parse the data. There are multiple field lengths, record types, etc. It is now working, with the output being inserted into 2 tables. I basically fetch 1 record (of 70) and SUBSTRING either 8 or 13 characters depending on a few rules. If the data is an order, I basically have an item and a qty. If it is a retail, I have 3 pieces of info. Orders go into one table, retails into the other.
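For what it's worth, the shape of the cursor is roughly this. Table names, column names, the record-type flag, and the exact field widths inside each piece are placeholders here; the real rules are more involved:

```sql
-- Rough sketch of the cursor described above. Names and the field
-- splits within each 8- or 13-byte piece are made up for illustration.
DECLARE @rec     varchar(70)
DECLARE @rectype char(1)      -- 'O' = order, 'R' = retail (assumed flag)
DECLARE @pos     int
DECLARE @piece   varchar(13)

DECLARE cur CURSOR FAST_FORWARD FOR
    SELECT Data70, RecType FROM ImportTable ORDER BY RecID

OPEN cur
FETCH NEXT FROM cur INTO @rec, @rectype
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @pos = 1
    WHILE @pos <= 70
    BEGIN
        IF @rectype = 'O'
        BEGIN
            -- orders: 8-byte pieces, item + qty
            SET @piece = SUBSTRING(@rec, @pos, 8)
            INSERT INTO OrderItems (Item, Qty)
                VALUES (LEFT(@piece, 6), RIGHT(@piece, 2))
            SET @pos = @pos + 8
        END
        ELSE
        BEGIN
            -- retails: 13-byte pieces, three fields
            SET @piece = SUBSTRING(@rec, @pos, 13)
            INSERT INTO RetailItems (F1, F2, F3)
                VALUES (SUBSTRING(@piece, 1, 6),
                        SUBSTRING(@piece, 7, 4),
                        SUBSTRING(@piece, 11, 3))
            SET @pos = @pos + 13
        END
    END
    FETCH NEXT FROM cur INTO @rec, @rectype
END
CLOSE cur
DEALLOCATE cur
```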
HERE IS THE PROBLEM: SPEED!! It runs like a dog. 297 input records creating 2,227 item records takes 36 seconds.
No good. I can't imagine how long 80,000 item records would take. This is currently being done with a PC version of COBOL, which eats this stuff up and runs an 80,000-item file in about 3 seconds.
I was thinking of trying to put the 70-character records together using a few DTS packages and somehow parse the data using the delimiter (the + sign). Is this feasible? Is SUBSTRING a hog? I also thought of an array, but I'm not sure that would speed things up, because eventually each record has to be written somewhere, whether to a flat file or an Access table. I tried to get each delimited piece into an actual record, but DTS gets a little funky when trying to do this.
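To make the delimiter idea concrete, I mean something like this: once a prior step has glued the wrapped 70-character rows back into one string per logical record, walk it with CHARINDEX on the + sign instead of fixed SUBSTRING offsets. All names here are made up:

```sql
-- Sketch of the delimiter idea (made-up table/column names). Assumes a
-- prior step has concatenated the wrapped 70-char rows into a single
-- string per logical record, with pieces separated by '+'.
DECLARE @rec  varchar(8000)
DECLARE @pos  int
DECLARE @next int

SELECT @rec = FullRecord FROM JoinedImport WHERE RecID = 1

SET @pos = 1
WHILE @pos <= LEN(@rec)
BEGIN
    SET @next = CHARINDEX('+', @rec, @pos)
    IF @next = 0 SET @next = LEN(@rec) + 1  -- last piece has no trailing '+'
    INSERT INTO ParsedPieces (Piece)
        VALUES (SUBSTRING(@rec, @pos, @next - @pos))
    SET @pos = @next + 1
END
```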
I am open to any suggestions..........
thanks,
Remember when... everything worked and there was a reason for it?