
Timeout Expired


sherly
Programmer
Apr 18, 2002
Hello, I wrote some scripts using ASP to update SQL Server 2000.
I have included the following code in my ASP page:
Response.Buffer = True
Response.Flush
Server.ScriptTimeout = 3600

But I keep getting this message before the update query completes:

Microsoft OLE DB Provider for ODBC Drivers error '80040e31'

[Microsoft][ODBC SQL Server Driver]Timeout expired

I have set the query timeout property in my SQL Server 2000 to 3600 seconds,
but the script keeps timing out after a very short time.
What should I do?
 
Please write out the update query using Response.Write and end the
execution with Response.End.

The query will then be displayed in the browser, and you can copy it
and run it in SQL Server Query Analyzer to check whether the query is OK.
Also check your connection properties.
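
Something like this rough sketch (the variable, object and table names here are only placeholders; substitute whatever your page actually builds):

<%
' Assume the page assembles the statement into a string before running it.
strSQL = "UPDATE Orders SET Status = 'Shipped' WHERE OrderID = 1001"

Response.Write Server.HTMLEncode(strSQL)   ' show the exact statement
Response.End                               ' stop before it ever executes

' cn.Execute strSQL   ' the normal call, never reached while debugging
%>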
 
I have tried to execute the query using SQL Enterprise Manager. It also gives me the timeout error. What should I do besides fixing my query?
 
Hi there,

<rajeessh> is suggesting that you capture your query command (with a Response.Write) and post it here so that we can take a look at it, maybe see if anything looks suspicious.

Whenever I have had ASP timeouts, it's almost always because my query is doing something it shouldn't - like a big table scan, or (more often) a subquery looping around over and over on me, something like that.

That might not be your problem at all, but the query is an easy place to start: get it ruled out before we start looking for less obvious things.

bp
 
Fixing your query or the tables is usually the place to start when solving performance problems. Is your table indexed? Does the query reference indexed columns in Where or Join clauses? If there is no Where clause, how many rows are being updated? You may be attempting to update too many rows in one query.
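
For example (the table and column names here are hypothetical - the point is that the column in the Where clause is covered by an index):

-- An update that filters on CustomerID can seek on an index
-- on that column instead of scanning the whole table.
CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)

UPDATE Orders
SET Status = 'Shipped'
WHERE CustomerID = 1001   -- indexed column in the Where clause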

Have you monitored SQL Server to determine if locking or blocking is occurring? What is the network activity level at the SQL Server?

Check here for more info and help with performance tuning.

Terry L. Broadbent - DBA
Computing Links:
faq183-874 contains "Suggestions for Getting Quick and Appropriate Answers" to your questions.
 
If you are not having a problem with your query, try
debugging line by line.
An ASP loop may be running indefinitely -
for example, a While Not Rst.EOF loop with no Rst.MoveNext inside it.
Such things happen very often.
So please try to track it down.
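
A minimal sketch of that classic bug (the recordset and field names are only illustrative):

Do While Not Rst.EOF
    Response.Write Rst("CustomerName") & "<br>"
    Rst.MoveNext   ' without this line the loop never advances,
Loop               ' and the page runs until ScriptTimeout expires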
 
sherly:

rajeessh's suggestion is very good. It may not be the sql statement at all; the ASP code that processes the record set may be looping through the rs repeatedly, if there is no rs.MoveNext statement.


tbroadbent:

>>You may be attempting to update
>>too many rows in one query.

This is also a good suggestion, although I don't think I'm clear on what you might mean. i.e. What is 'too many'? Can you elaborate for me? tia
 
I had two thoughts when I wrote "too many."

1) When there is no Where clause or the Where clause is not correct, "too many" rows may be updated, meaning more than the developer or user intended.

2) Updating 100K+ rows in a DTS package or batch process is usually acceptable. 10K may be "too many" rows to update from ASP because of timeout values and user expectations of quick response.

Terry L. Broadbent - DBA
Computing Links:
faq183-874 contains "Suggestions for Getting Quick and Appropriate Answers" to your questions.
 
re: too many

I don't expect this is sherly's problem, but I am wondering in general if there are situations where it is possible to blow the server resources, i.e. there just isn't enough RAM to perform a set-based update to 10 million records (or whatever).

bp
 
I had the same problem. I think it's due to the "query wait" parameter of your database: this parameter is set to -1 by default, which means a query waits for about 25 times the estimated execution time to obtain the necessary resources, and then it times out (see BOL for details).
If you execute these commands against your database you'll be able to read the actual value:

EXEC sp_configure 'show advanced options', '1'
RECONFIGURE
EXEC sp_configure

To change the value (for example, to set the query wait to 10 minutes, i.e. 600 seconds), execute:
EXEC sp_configure 'query wait', '600'
RECONFIGURE WITH OVERRIDE

Then run the first set of commands again to verify the change.

In my case it worked... good luck!
 
In general, the main problem that I've encountered with massive updates is the amount of disk space and time required to grow the transaction log. If you update 10M rows in one batch, SQL Server must retain 10M updates in the transaction log until the updates are committed or rolled back. In some cases, the log can become larger than the database.

I recommend that updates, inserts and deletes be done in smaller batches of 10K, 20K or 50K rows per transaction. This allows SQL Server to commit the updates and clear the log. The log file will not grow "astronomically" when batches are smaller. 200 batches of 50K rows will usually run faster than 1 batch of 10M rows.
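
A minimal sketch of that batching pattern in SQL Server 2000 T-SQL (the table and column names are hypothetical):

-- Limit each UPDATE statement to 50,000 rows (SQL 2000 style).
SET ROWCOUNT 50000

WHILE 1 = 1
BEGIN
    UPDATE Orders
    SET Status = 'Archived'
    WHERE Status = 'Closed'    -- only rows not yet touched

    IF @@ROWCOUNT = 0 BREAK    -- nothing left to update

    -- Each batch autocommits here, so the log can be truncated
    -- between batches instead of holding all the updates at once.
END

SET ROWCOUNT 0                 -- restore the default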

One of the options in BCP, Bulk Insert, and DTS is the batch size. I recommend using the batch size options with these tools in order to reduce the growth of the log file.
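
For instance, a hypothetical bcp load that commits every 50,000 rows via the -b switch (the server, database and file names are made up):

bcp MyDB..Orders in orders.dat -c -b 50000 -S MyServer -T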

SQL BOL says, "When bulk copying large data files into an instance of SQL Server, it is possible for the transaction log to fill before the bulk copy is complete, even if the row inserts are not logged, from the extent allocation logging. In this situation, enlarge the transaction log, allow it to grow automatically or perform the bulk copy using the -b or BATCHSIZE switch, and set the recovery model to simple. Because only committed transactions can be truncated, this option does not free up space during the bulk copy operation if the -b switch is not used; the entire operation is logged as a single transaction."

The same recommendation applies to SQL 7. Many people believe that setting the "trunc. log on checkpoint" option will inhibit log growth. But this is untrue, because only committed transactions can be truncated. The log will continue to grow until the transaction completes.

Another issue that often arises when performing massive updates is running out of locks in SQL Server. Even though SQL 7 and 2000 handle locks and lock escalation dynamically, occasionally SQL will send an out-of-locks error message. More frequently, the need to acquire a large number of locks slows the update process. I usually use a tablock or tablockx hint when updating, so the table is locked immediately and SQL doesn't have to acquire hundreds of row or page locks.
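
A minimal sketch of the hint (the table and column names are hypothetical):

-- Take one table lock up front instead of escalating
-- from hundreds of row or page locks.
UPDATE Orders WITH (TABLOCK)
SET Status = 'Archived'
WHERE Status = 'Closed'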
Terry L. Broadbent - DBA
Computing Links:
faq183-874 contains "Suggestions for Getting Quick and Appropriate Answers" to your questions.
 
That was quite interesting and informative. thanx.

>>200 batches of 50K rows will usually
>>run faster than 1 batch of 10M rows.
I did not know that.

bp
 
