
Handling BOF/EOF when using GetRows() Method!

Status
Not open for further replies.

RhythmAddict112

Programmer
Jun 17, 2004
Hi all. I'm using the GetRows method on my page, mainly because I have to pull back a large amount of data and paging, whether in a stored procedure or via ADO, simply isn't an option. GetRows works great, and fast. My issue is that if the query returns no results, I need to display that gracefully. As it stands, I just get this error:

Error Type:
ADODB.Recordset (0x800A0BCD)
Either BOF or EOF is True, or the current record has been deleted. Requested operation requires a current record.
/repstats.asp, line 147

Not cool. I know how to handle BOF and EOF conditions easily (of course) with normal recordsets and ADO, but I'm unsure how to accomplish the same thing when using the GetRows method. Here is my code... (kind of long)

Thank you for your help in advance

Code:
' Declare our variables... always good practice!
Dim cnnGetRows ' ADO connection
Dim rstGetRows ' ADO recordset
Dim strDBPath ' Path to our Access DB (*.mdb) file
Dim arrDBData ' Array that we dump all the data into

' Temp vars to speed up looping over the array
Dim I, J
Dim iRecFirst, iRecLast
Dim iFieldFirst, iFieldLast

' MapPath to our mdb file's physical path.
'strDBPath = Server.MapPath("db_scratch.mdb")

' Create a Connection using OLE DB
Set cnnGetRows = Server.CreateObject("ADODB.Connection")

' This line is for the Access sample database:
'cnnGetRows.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & strDBPath & ";"

' We're actually using SQL Server so we use this line instead.
' Comment this line out and uncomment the Access one above to
' play with the script on your own server.
cnnGetRows.Open Application("OWR_ConnectionString")

' Execute a simple query using the connection object.
' Store the resulting recordset in our variable.
Set rstGetRows = cnnGetRows.Execute("SELECT DISTINCT CREQNAME," & _
    " SUM(CASE WHEN (dSubmit BETWEEN ('01-JAN-2004') AND ('31-JAN-2004'))" & _
    " OR (dApprRej BETWEEN ('01-JAN-2004') AND ('31-JAN-2004'))" & _
    " AND (CREGION = 'NY')" & _
    " AND (CCHANNEL = 2)" & _
    " THEN 1 ELSE 0 END) Submitted," & _
    " SUM(CASE WHEN (iApprStatus = -1)" & _
    " AND (iAppr2Status IS NULL OR iAppr2Status = -1)" & _
    " AND (iBatch IS NULL OR iBatch > 0)" & _
    " AND (CREGION = 'NY')" & _
    " AND (CCHANNEL = 2)" & _
    " THEN 1 ELSE 0 END) Approved," & _
    " SUM(CASE WHEN (iApprStatus = 0)" & _
    " AND (iAppr2Status IS NULL OR iAppr2Status = 0)" & _
    " AND (iBatch IS NULL OR iBatch > 0)" & _
    " AND (CREGION = 'NY')" & _
    " AND (CCHANNEL = 2)" & _
    " THEN 1 ELSE 0 END) Rejected," & _
    " SUM(CASE WHEN (iApprStatus = 0)" & _
    " AND (iAppr2Status IS NULL OR iAppr2Status = -1)" & _
    " AND (LENGTH(cAppr2ID) > 0) AND (iAppr2Status IS NULL)" & _
    " AND (CREGION = 'NY')" & _
    " AND (CCHANNEL = 2)" & _
    " THEN 1 ELSE 0 END) Pending" & _
    " FROM tbl_OpenWindowRequests" & _
    " WHERE dSubmit BETWEEN ('01-JAN-2004') AND ('31-JAN-2004')" & _
    " OR dApprRej BETWEEN ('01-JAN-2004') AND ('31-JAN-2004')" & _
    " GROUP BY CREQNAME")

' Now this is where it gets interesting... Normally we'd do
' a loop of some sort until we ran into the last record
' in the recordset. This time we're going to get all the data
' in one fell swoop and dump it into an array so we can
' disconnect from the DB as quickly as possible.
arrDBData = rstGetRows.GetRows()

' Some notes about .GetRows:
' The method actually takes up to 3 optional arguments:
' 1. Rows - A long integer indicating the number of rows to
' retrieve from the data source and put into the
' array. Defaults to adGetRowsRest, which
' retrieves all the remaining rows.
' 2. Start - An ADO bookmark indicating which row we should
' begin from. It can also be one of the following
' three ADO constants: adBookmarkCurrent,
' adBookmarkFirst, adBookmarkLast. Defaults to
' the current row, so if you've been moving around
' the RS it'll pick up wherever you left off.
' 3. Fields - A single field name or number, or an array of
' names or numbers, indicating which fields to
' retrieve and place into the array. Defaults to
' all the columns.
'
' So an example using all the arguments would look like this:
'
'arrDBData = rstGetRows.GetRows(2, adBookmarkCurrent, Array("id", "text_field"))
'
' Which would get 2 rows starting from the current record,
' returning data from only the id and text_field fields.
'
' FYI: the above line uses an ADO constant from adovbs.inc
' which I haven't included in this script:
' Const adBookmarkCurrent = 0


' Close our recordset and connection and dispose of the objects.
' Notice that I'm able to do this before we even worry about
' displaying any of the data!
rstGetRows.Close
Set rstGetRows = Nothing
cnnGetRows.Close
Set cnnGetRows = Nothing

' ADO sets up the array so that the elements of the first
' dimension correspond to the DB fields and the elements of
' the second dimension correspond to the records. It might
' seem a little backward, but when you think about it, it
' makes sense because it's easy to modify the last dimension
' (to hold fewer or more records as needed), but modifying the
' first dimension is difficult so we use it to handle the
' fields which most likely wouldn't need to be modified.

' Here I get the upper and lower bounds of the field list
' and the records. This gives me some information about the
' data before I start and also allows me to not have to query
' the array for its bounds on each loop.
iRecFirst = LBound(arrDBData, 2)
iRecLast = UBound(arrDBData, 2)
iFieldFirst = LBound(arrDBData, 1)
iFieldLast = UBound(arrDBData, 1)

' Display a table of the data in the array.
' We loop through the array displaying the values.
%>
<table border="1">
<%
' Loop through the records (second dimension of the array)
For I = iRecFirst To iRecLast
' A table row for each record
Response.Write "<tr>" & vbCrLf

' Loop through the fields (first dimension of the array)
For J = iFieldFirst To iFieldLast
' A table cell for each field
Response.Write vbTab & "<td>" & arrDBData(J, I) & "</td>" & vbCrLf
Next ' J

Response.Write "</tr>" & vbCrLf
Next ' I
%>
</table>
<%
' That's all folks!
%>
 
Got it, all I've got to do is check:

Code:
If rstGetRows.EOF Then
before calling GetRows. Just in case anyone needs to know, thanks!
 
You can also use If UBound(arrDBData, 2) < 0 Then to test for EOF on the array.

[thumbsup2]DreX
aKa - Robert
if all else fails, light it on fire and do the happy dance!
 
Yup, I have a personal preference for the reversed version, but it all works out the same:
Code:
If Not rstGetRows.EOF Then
    arrDBData = rstGetRows.GetRows()
    'Etc.
Else
    'Error
End If
 
I'm pretty new to using the GetRows method, and like I said it's way faster. I'll probably stick with this as opposed to recordsets just for the speed of it. Are there any advantages to using recordsets as opposed to GetRows?
 
If your recordset was large enough, either in rows or columns (or column sizes/types), you could be saving more machine resources with a recordset.

If you had a lot of text columns, and most especially if you had any memo columns (or other very wide text columns) or BLOB (binary large object) columns, GetRows could be very, very bad. If you were returning thousands of rows of pretty much any type of column, GetRows could be bad. (I'm not even sure GetRows will work with BLOBs.)

Basically the machine has to allocate memory equal to the number of rows times the total width, in bytes (or potentially double-bytes), of the widest entry of every column, so if you're asking for that big a chunk of memory times x simultaneous users, GetRows could bring everything to a crawl.

Yet if you have a small number of rows and columns, and fairly narrow columns, GetRows absolutely rules over recordsets.
 
Depends on what you're doing. If you're not just outputting data, or you're scared of learning a new way to do it and you only use RS's, then ... :)

Personally I always use GetRows when I am pulling back in excess of 20 rows. I never use Recordset.Open for anything, strictly Connection.Execute. I always do my updates using SQL inserts so I don't have to load in a whole crapload of data I won't be using (recordset), etc.

Basically it seems like they added a whole bunch of functionality to the recordset object to make it easier on the end user (easier to write poorly performing code, that is).


I'm not bitter ;)
-T

 
Interesting. The initial reason I even started using GetRows() is that with the particular query I am using, a recordset was WAY too slow. Slow enough that my IIS settings did not allow for it and it timed out. Naturally I could have increased that, but basically it just wasn't acceptable. I am returning anywhere from 1 row to 2000+ rows onto an ASP page with this query. Recordsets just weren't cutting it. I moved over to GetRows() and my page load time dropped significantly (~3 seconds with 2k+ rows, 3 cols), so I think I'll be sticking with this. As far as inserts, I usually end up using stored procs, just because I need to grab that ID and update different tables most of the time (using Oracle).

I always do my updates using SQL Inserts so I don't have to load in a whole crapload of data I won't be using (recordset), etc.

Just to clarify, what do you mean by this?

Any other speed/efficiency tips? I'd love to hear them.
Thanks.
 
Firstly, I have even larger queries than the one RhythmAddict is running, bad enough that the browser takes 30+ seconds just to render the table of data. This times out every single time with a standard recordset object, but with GetRows the server gets it done and out the door in under 5 seconds. That's generally with some server-side processing too (only list the first column when it changes, etc.) and mixed text and decimal numbers.

Granted, I'm not sure how well it handles BLOBs, but no matter what, the machine has already grabbed the resources for the recordset object. You will get a temporary spike when transferring to an array, but you will free up a lot more resources by killing the Recordset object.

As far as how I do things:
I never ever ever use the Keyset or Dynamic cursors. These were designed with an application in mind where you might be interacting with the data over several minutes. When you just want to grab it and dump it, static or read-only is not only faster, but more realistic. With a Dynamic or Keyset recordset you have multiple communication streams going back and forth, the recordset has to keep allocating memory dynamically (instead of getting it all in one chunk), etc.

When I need data, I write a SQL query for it and use the Connection object's .Execute to pull it back into a previously undeclared Recordset variable. The Execute method builds its own Recordset object and gives you back a reference. Never instantiate a Recordset object and then assign the result of a Connection.Execute to it; you're just wasting memory with your initial instantiation.
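Putting that together, a minimal sketch of the grab-and-dump pattern (the connection string name is borrowed from the code earlier in the thread; the query is just an illustration):

Code:
' Sketch: let Connection.Execute build the read-only recordset for us.
Dim cnn, rst, arrData
Set cnn = Server.CreateObject("ADODB.Connection")
cnn.Open Application("OWR_ConnectionString")

' Execute returns a forward-only, read-only recordset reference.
Set rst = cnn.Execute("SELECT CREQNAME FROM tbl_OpenWindowRequests")

' Guard against an empty result before calling GetRows.
If Not rst.EOF Then arrData = rst.GetRows()

rst.Close : Set rst = Nothing
cnn.Close : Set cnn = Nothing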

When I want to update data, I write a SQL UPDATE command. Pulling it all back into a recordset locally and looping through it (or even using Find) is a waste of resources. The database was designed to accept updates, so I'll go with the system that was designed for the work (more optimized) instead of the COM object with too much overhead (the recordset).
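For example, a single UPDATE through the connection from the code above (the WHERE value is made up; columns are from the earlier query):

Code:
' One statement instead of open/find/edit/update on a recordset.
' 129 = adCmdText (1) + adExecuteNoRecords (128): skip building a recordset at all.
cnnGetRows.Execute "UPDATE tbl_OpenWindowRequests" & _
    " SET iApprStatus = -1 WHERE CREQNAME = 'SMITH'", , 129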

When I want to add data, it depends on what i'm doing. If the database supports transactions or I only have a few rows to insert, I'll use an Insert command and the Connection.Execute method. If I have a lot of data then I will use a disconnected recordset.


In the end, the most efficient manner of dealing with databases is using stored procedures. This beats the previous methods because the database can pre-compile the query or command and just accept arguments. Much faster than interpreting it on the fly (SQL query/command) or simply returning all the data to the end user to change, then uploading the changes (Recordset object manipulation).
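As a sketch, calling a stored procedure through an ADO Command object looks something like this (the procedure name and parameter are hypothetical, and the connection variable is borrowed from the code earlier in the thread):

Code:
Dim cmd, rstStats
Set cmd = Server.CreateObject("ADODB.Command")
Set cmd.ActiveConnection = cnnGetRows
cmd.CommandText = "usp_GetRegionStats"  ' hypothetical proc name
cmd.CommandType = 4                     ' adCmdStoredProc
' 200 = adVarChar, 1 = adParamInput, 2 = size in characters
cmd.Parameters.Append cmd.CreateParameter("@Region", 200, 1, 2, "NY")
Set rstStats = cmd.Execute()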

I mean, think about it. If you're making changes to the recordset object, it basically has to keep track of everything you're doing, then turn around and tell the db to do that same thing. Even at its best this is probably just internally creating SQL statements (or something similar) that it plans on sending to the db as a single transaction. It may take a little more code and organization for me to do it myself, but it cuts out the overhead of the Recordset object doing it every single time on the fly.


Other methods of optimization include:
Don't use *'s: When you query with a * the database has to do a pre-query to figure out what fields it's dealing with; then it can do your query and return the records. I think I can list them out one time in the code instead of making the db do it every single time.

Cross joins: Cross joins (or comma-delimited lists of tables in the FROM clause of a SQL statement) are bad. Don't do them. Basically it works like this: the FROM section is going to be placed into a temporary table and then have the filters in the WHERE section executed on it. If you cross 3 tables, the temporary table starts out as every combination of table A, table B, and table C. So if each table has 100 records in it, you end up with a starting temporary table of 1,000,000 records. Now say you use something like an INNER JOIN, where you have to specify a relationship. If those three tables all have 1-to-1 relationships, you're looking at 100 records in your temporary table. Obviously they could be 1-to-many relationships, but you still aren't going to end up with 1,000,000 records in your temp table. When you use cross joins you're wasting resources on an absolute worst case. Inner joins, outer joins, and any join that requires relationships to be established is going to use fewer resources to build that initial temporary table.
Then, when it comes time to actually apply the WHERE clause, there are fewer records to work with, making even less work for the db.
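To make the contrast concrete, here are the two forms side by side (tbl_Users and the key column are invented for the example):

Code:
-- Cross join: every combination is built first, then filtered.
SELECT r.CREQNAME, u.cUserName
FROM tbl_OpenWindowRequests r, tbl_Users u
WHERE r.cReqID = u.cReqID

-- Inner join: the relationship is part of building the result.
SELECT r.CREQNAME, u.cUserName
FROM tbl_OpenWindowRequests r
INNER JOIN tbl_Users u ON r.cReqID = u.cReqID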

If you don't need it, don't ask for it: This kind of goes hand in hand with the 'Don't use *'s' one. If you don't need a field in your recordset, don't ask for it. All you're doing is increasing the amount of data that has to be communicated between your db and application, increasing the amount of memory the db needs when creating and operating on the temporary table, and increasing the amount of memory needed by your application's recordset (or later, array). Why waste resources if you don't need the data?

Do as much with as little as you can: If you need to update 20 rows of data and can do it in one UPDATE statement, you're using a lot fewer resources than opening a recordset, finding all 20, changing their values, and submitting it back.
If your database allows you to send multiple commands in a single query, then doing 5 inserts in one string is going to be better than sending 5 separate inserts (communications overhead, etc.).
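If the back end accepts batched statements, that might look like this (table and values invented; connection variable borrowed from the code earlier in the thread):

Code:
Dim strSQL
strSQL = "INSERT INTO tbl_Log (cMsg) VALUES ('first'); " & _
         "INSERT INTO tbl_Log (cMsg) VALUES ('second');"
' 129 = adCmdText + adExecuteNoRecords: one round trip, no recordset built
cnnGetRows.Execute strSQL, , 129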


Anyway, there's a lot more info in the FAQs, and I think there is a thread floating around somewhere on ADO optimization; maybe someone who doesn't have to go get their oil changed can find it :)

-T

ps: No really, I have to get my oil changed, 3pm appointment, it's not an excuse, honest :p

 
