From my own experience, taking four minutes to read and split a 43 MByte text file is quite long. It could be an indication of an older system, one with inadequate memory, or of a disc storage problem (availability or fragmentation). I have used the read/split approach on files of roughly 50 MBytes and finished the process in a few seconds. A further advantage of the approach is that, after the first breakdown, the entire content of the file is already separated into records. If the source file is a CSV (or any regularly delimited) file, one loop with Split can further break the records into fields; a minimal sketch of the read/split step follows.
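To make the suggestion concrete, here is a rough sketch of the read/split step in VB6/VBA. The file name and the vbCrLf record delimiter are assumptions for illustration only, not taken from the thread.

[code]
' Read the whole file in one pass, then Split it into records.
Dim hFile As Integer
Dim sBuffer As String
Dim vRecords() As String

hFile = FreeFile
Open "C:\Data\BigFile.txt" For Binary Access Read As #hFile
sBuffer = Space$(LOF(hFile))        ' pre-size the buffer to the file length
Get #hFile, 1, sBuffer              ' one read pulls in the entire file
Close #hFile

vRecords = Split(sBuffer, vbCrLf)   ' one pass breaks the buffer into records
[/code]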
A general rule of processing is to observe the requirements of each process and select an approach suitable to the task. In most cases, I/O processing is an order of magnitude SLOWER than memory processing. A large part of the I/O cost of routines like Line Input is the delay while the disc locates the information. Since, in the Line Input method, this occurs once for each record, it usually becomes the dominant factor in the time required to read a large file. Reading large amounts of data into memory often results in the use of virtual memory, which is actually disc storage; however, MS / Windows appears to have this facility nicely optimized, and the impact on operations is significantly less than with the 'brute force' I/O methods.
My suggestion of [color blue]Hypetia[/color]'s code was not intended to provide information for the progress bar, but to generally avoid the need for it. Adding one additional level of processing to the array of strings provided by the routine yields the entire recordset broken down to the individual fields, all ready for further verification and validation before adding to a recordset (something like the loop below).
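That additional level of processing is just one more loop with Split over the records produced above; the comma delimiter here is an assumption for a plain CSV.

[code]
' Break each record into its fields, ready for verification and validation
' before it is added to the recordset. The comma delimiter is assumed.
Dim lIdx As Long
Dim vFields() As String

For lIdx = LBound(vRecords) To UBound(vRecords)
    vFields = Split(vRecords(lIdx), ",")
    ' ... verify / validate vFields here, then add to the recordset ...
Next lIdx
[/code]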
If, on the other hand, the system requires FOUR minutes to read a mere 43 MByte file, I think there are additional issues to be investigated, and I would suggest that a minor variation of [color blue]CajunCenturion[/color]'s code (along the lines of the sketch below) is appropriate for the progress bar while the further investigation is being conducted.
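I do not have [color blue]CajunCenturion[/color]'s code in front of me, so this is only a generic sketch of the sort of variation I mean: a Line Input loop that drives a progress bar from the current byte position. ProgressBar1 and the file name are placeholders.

[code]
' Line Input loop with a progress bar driven by the current file position.
Dim hFile As Integer
Dim sLine As String
Dim lSize As Long

hFile = FreeFile
Open "C:\Data\BigFile.txt" For Input As #hFile
lSize = LOF(hFile)
Do While Not EOF(hFile)
    Line Input #hFile, sLine
    ' ... process sLine ...
    ProgressBar1.Value = CLng(100 * (Seek(hFile) / lSize))
Loop
Close #hFile
[/code]

In practice the bar would only be updated every few thousand records, so that the progress display does not add its own overhead to the loop.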
Almost done here; then I went back to check, and he is saying 43,854,363 KB. That is not in the range of MBytes but GBytes! I created a 50 MByte
string, requiring (by a crude DateDiff calculation) about one second. Attempts to create a string of 50 GBytes using either String or Space fail, so either the file size is WAY larger than advertised (by a factor of 1000?) or reading the whole thing into a string created for it is a no-go. Similarly perplexing is the apparent record size. Using the given 43 GBytes and the ~1.2 million records would result in rather HUGE records (~35 KBytes each); taking it as 43 MBytes gives rather small ones (~35 BYTES each). Of course, these would be the average for a CSV file, but they would still represent some additional challenges in recordset processing.
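The crude DateDiff check was along these lines; the 50,000,000 character count is the only figure carried over from the test described above.

[code]
' Crude timing of building a 50 MByte string.
Dim dtStart As Date
Dim sBig As String

dtStart = Now
sBig = Space$(50000000)             ' ~50 MBytes of characters
Debug.Print DateDiff("s", dtStart, Now) & " second(s)"
' A 50 GByte count overflows the Long argument of Space$ / String$,
' so reading the whole file into one string is a no-go at that size.
[/code]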
In closing, I must admit to not knowing much about the inner workings of the FSO object, so I cannot objectively comment on its relative efficiency in I/O processing. It may well be more appropriate than the method I proposed; on the other hand, my experience with MS is that every convenience comes at a price, and usually at least part of the price is in the time required to accomplish the task. FSO does add some convenience; I still do not know the price. Fortunately, I do not need to deal with 43 GByte files this week. In my present state of unemployment, I do not even need to deal with 43 MBytes of anything, and I do not actually have any such file lying about to do any research with, so the points are, for me, at best academic.
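For anyone who does have such a file to experiment with, the FSO route would look roughly like this (the file name is again illustrative); timing it against the Open / Get approach above would answer the 'price' question directly.

[code]
' FSO equivalent of the whole-file read, using late binding so no
' reference to Microsoft Scripting Runtime is required.
Dim oFSO As Object
Dim sBuffer As String

Set oFSO = CreateObject("Scripting.FileSystemObject")
sBuffer = oFSO.OpenTextFile("C:\Data\BigFile.txt", 1).ReadAll   ' 1 = ForReading
Set oFSO = Nothing
[/code]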
MichaelRed
m.red@att.net
Searching for employment in all the wrong places