This one is a bit difficult to describe, so please be patient.
I have a program that reads up to approximately 100 spreadsheets into an Access database. The system first loads each spreadsheet into a generic staging table that mimics the spreadsheet (same columns and rows). It then consults a rules table that defines the layout of that spreadsheet -- that is, it identifies where certain data is found on the sheet, so the data can be loaded into the appropriate table in the database.
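For context, the import logic is roughly like the sketch below. This is a simplified illustration, not my actual code; table, field, and procedure names such as tblStaging, tblImportRules, and ImportSpreadsheet are placeholders.

```vba
' Simplified sketch of the two-stage import (all names are illustrative)
Public Sub ImportSpreadsheet(strFile As String)
    Dim db As DAO.Database
    Dim rsRules As DAO.Recordset
    Dim rsStaging As DAO.Recordset

    Set db = CurrentDb

    ' Stage 1: load the spreadsheet as-is into a generic staging table
    db.Execute "DELETE FROM tblStaging", dbFailOnError
    DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12Xml, _
        "tblStaging", strFile, False   ' False = no header row; columns mimic the sheet

    ' Stage 2: walk the rules table, which says where each value lives
    Set rsRules = db.OpenRecordset("SELECT * FROM tblImportRules")
    Do While Not rsRules.EOF
        ' Each rule names a row/column in the staging table and a target field
        Set rsStaging = db.OpenRecordset( _
            "SELECT * FROM tblStaging WHERE RowNum = " & rsRules!SourceRow)
        If Not rsStaging.EOF Then
            db.Execute "INSERT INTO " & rsRules!TargetTable & _
                " (" & rsRules!TargetField & ") VALUES ('" & _
                Replace(Nz(rsStaging.Fields(rsRules!SourceCol).Value, ""), "'", "''") & _
                "')", dbFailOnError
        End If
        rsStaging.Close
        rsRules.MoveNext
    Loop
    rsRules.Close
End Sub
```

The real program loops this over all the spreadsheet files, but the staging-then-rules structure is the same.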
The problem is that I get errors such as "data mismatch", but the errors are not repeatable. If I load the exact same data a second time, the error either does not occur at all, or it occurs in a different place. And when I step through the code in debug mode, I don't get any errors. This makes me think the problem has something to do with timing -- the speed of the processor, or some kind of memory overload.
Could the code somehow be getting ahead of itself? Has anyone seen this problem before? Any suggestions on how to handle it?