Hello all - I have a problem someone might be able to help me with.
I am converting an Access 2000 MDB with linked tables to another MDB that works with linked tables on SQL Server 2005 instead.
All has gone reasonably OK, but now I am having a performance problem - something that is instantaneous on the existing system takes 1 minute 50 seconds on the new one.
I am reading 3 ADO recordsets from SQL Server and then writing an ADO recordset on the front end, with the kind of processing shown in the code below.
Running a SQL Profiler trace on the SQL Server database and stopping the VBA in debug, I saw that the .Filter command on each recordset seemed to generate a separate SELECT statement for every 'hit' row in the recordset - hence the performance issue.
Code:
Do Until rst1.EOF
    ' Restrict rst2 to the rows related to the current rst1 row
    rst2.Filter = blah
    Do Until rst2.EOF
        ' Restrict rst3 to the rows related to the current rst2 row
        rst3.Filter = blah
        Do Until rst3.EOF
            ' ... processing - .AddNew / .Update on the local recordset
            rst3.MoveNext
        Loop
        rst2.MoveNext
    Loop
    rst1.MoveNext
Loop
According to various sources on the net, setting .CacheSize on the recordset should fix this, but it seems to have no effect at all.
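For reference, this is roughly the kind of thing I tried with CacheSize (the table name and the cache size value are just placeholders, not my actual code):
Code:
' Set before opening the recordset; supposed to make ADO fetch rows
' from the server in blocks of 500 rather than one at a time
rst2.CacheSize = 500
rst2.Open "SELECT * FROM SomeTable", cn, adOpenKeyset, adLockReadOnly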
Any ideas? Is there any way to force the caching of the recordsets?
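What I mean by forcing the caching is something along these lines - opening the recordset with a client-side cursor and then disconnecting it, so that .Filter works against the local copy rather than going back to the server. This is only a sketch with placeholder names (strSqlConnect, SomeTable, SomeColumn), not my real code:
Code:
Dim cn As ADODB.Connection
Dim rst2 As ADODB.Recordset

Set cn = New ADODB.Connection
cn.Open strSqlConnect                       ' placeholder connection string

Set rst2 = New ADODB.Recordset
rst2.CursorLocation = adUseClient           ' client-side cursor; must be set before Open
rst2.Open "SELECT * FROM SomeTable", cn, adOpenStatic, adLockBatchOptimistic

' Disconnect the recordset so all rows are held in the local cursor;
' .Filter should then work against the cached rows without further SELECTs
Set rst2.ActiveConnection = Nothing

rst2.Filter = "SomeColumn = 123"            ' placeholder filter expression
Is that the right idea, or is there a way to get the same effect with the recordsets opened as they are now?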
I might be destined to rewrite it using arrays, populating them from the recordsets with GetRows - but that seems a bit of a backwards step to me.
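Roughly what I have in mind for the array version, if it comes to that (again just a sketch, and the array processing lines are placeholders for my real logic):
Code:
Dim arr3 As Variant
Dim r As Long

If Not (rst3.BOF And rst3.EOF) Then         ' skip empty recordsets
    ' Pull all of rst3 into a 2-D array in one go:
    ' first subscript = field index, second subscript = row index
    arr3 = rst3.GetRows()
    For r = 0 To UBound(arr3, 2)
        ' ... processing against arr3(0, r), arr3(1, r), etc.,
        ' instead of filtering the recordset for each outer row
    Next r
End If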
Thanks in anticipation.