I need to import a .csv file into a table. The file, however, is not straightforward. It is divided into sections, and each section has a 4-line header: the first line is a series of dash characters, the second is a long string describing the contents of that section of the report, the third is another line of dash characters, and the fourth holds the actual column headers.
Between these headers are the actual rows of data that I need. I don't need any of the header rows, only the data itself.
This pattern is repeated a number of times throughout the file. If I try a straight import, I get very strange results: the columns don't match up, and so on.
This needs to be an automated process (a DTS package) that runs once a week and processes the file. The file name is always the same, and the structure is always the same.
Can anyone suggest a way around this? I thought about using VB to go through the file and strip out the header rows. I am not a VB programmer, but if required I think I could learn enough to accomplish the task reasonably well.
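For what it's worth, the preprocessing idea above could be sketched in a few lines of Python instead of VB. This is only a rough sketch under the assumptions stated in the question (every section header is exactly 4 lines and always begins with a line made up of dash characters); the file names and the function name are hypothetical.

```python
def strip_section_headers(src_path, dst_path):
    """Copy src_path to dst_path, dropping each 4-line section header.

    Assumes a header block always starts with a line consisting
    entirely of dash characters (as described in the question).
    """
    with open(src_path) as src:
        lines = src.readlines()
    with open(dst_path, "w") as dst:
        i = 0
        while i < len(lines):
            stripped = lines[i].strip()
            # A header block starts with a line of only dashes;
            # skip it plus the next 3 lines (description, dashes,
            # column headers).
            if stripped and set(stripped) == {"-"}:
                i += 4
            else:
                dst.write(lines[i])
                i += 1

# Usage (hypothetical file names):
#   strip_section_headers("report.csv", "report_clean.csv")
```

The cleaned file then contains only data rows, so a plain DTS text-file import should line the columns up correctly. The same logic would translate directly into a VBScript ActiveX task inside the DTS package if an external script is not an option.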