CHeighlund
Programmer
I am working on a program that queries a remote database. Because of the format in which information has to be passed to the user, I've been running tests across the full range of the database, and I've run into an interesting little problem. My first test always completes successfully, but a second test, run immediately afterwards on a different subset of the db, will fail with an out-of-memory error - if the first result set was large enough.
To me, this seems to indicate that I'm requesting memory that isn't being freed when the run ends. I admit that what I'm testing over isn't likely to be used by the end user, but 'not likely' isn't the same as 'never going to happen', and I'd like to be prepared if it does. Does anyone have any suggestions as to how I could force this memory to be freed?
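To make the pattern concrete, here's a stripped-down sketch of what the two runs look like. This is a hypothetical illustration in Java/JDBC terms, not my actual code; the table, column, and variable names are all placeholders, and the comments mark where I suspect the first run's memory is being kept alive.

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class RangeTest {

    // Hypothetical stand-in for wherever the harness keeps the previous
    // run's rows. If anything like this stays reachable between runs, the
    // garbage collector can't reclaim the first result set, and a second
    // large run has to fit in memory alongside it, which matches the
    // failure I'm seeing.
    private static List<String> previousRun;

    static List<String> runRange(Connection conn, String sql) throws SQLException {
        List<String> rows = new ArrayList<>();
        // try-with-resources closes the Statement and ResultSet even if an
        // exception is thrown mid-run; leaving them open can pin driver-side
        // buffers in addition to the rows themselves.
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                rows.add(rs.getString(1));
            }
        }
        return rows;
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(args[0])) {
            previousRun = runRange(conn, "SELECT data FROM big_table WHERE id < 500000");

            // Candidate fix: drop the reference before the next run so the
            // first result set becomes unreachable and collectible.
            previousRun = null;

            previousRun = runRange(conn, "SELECT data FROM big_table WHERE id >= 500000");
        }
    }
}
```

If the answer really is just closing statements promptly and dropping references between runs, that's easy enough to do, but I'd like to confirm that's actually what's going on before I rework the test harness.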