250+ unique TSQL requests, best way to go about it?

keyser456 (IS-IT--Management) - Nov 21, 2003
We've basically built a rule engine here. Rules are built by specifying criteria that get applied when an application is run for "approval". What it boils down to is that I have about 250-350 unique TSQL statements (they're basic TSQL statements, no subqueries and just 2-3 joins max) being assembled in C#, and I need to run all of them in SQL Server. When I feed them into SQL Server one by one, it takes about 4-5 seconds to run all the requests, which is not acceptable. BTW, I'm using a single connection that I keep open, to avoid creating a new connection every time.

I thought about running batches, where I would concatenate an entire group's worth of TSQL statements together, but I believe SQL's varchar limit (8,000 characters, or 4,000 for nvarchar) is going to kill that idea.
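
Here's roughly the shape of the batching I had in mind, in case it helps (a sketch only; ruleStatements is a stand-in for our assembled statements, and as far as I can tell the varchar limit only bites when a batch is stored in a T-SQL variable, not when it's sent from the client as CommandText):

[code]
using System.Data.SqlClient;
using System.Text;

class RuleBatcher
{
    public static SqlDataReader ExecuteAsOneBatch(SqlConnection conn, string[] ruleStatements)
    {
        // Concatenate every assembled statement into one batch so the
        // whole set crosses the wire in a single round trip.
        StringBuilder batch = new StringBuilder();
        foreach (string sql in ruleStatements)
        {
            batch.Append(sql).Append(";\n");
        }
        SqlCommand cmd = new SqlCommand(batch.ToString(), conn);
        return cmd.ExecuteReader(); // caller walks the result sets in order
    }
}
[/code]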

Other than multithreading, which could get a little hairy on a web server, I'm starting to run out of ideas. It just seems like 250-350 basic queries should be able to execute faster than 4-5 seconds. Unfortunately, since we're using a "rules approach", I don't believe we can move to a precompiled stored procedure / parameter approach. Any ideas?
 
Wow - TWO HUNDRED FIFTY to THREE HUNDRED FIFTY queries all running in 4 to 5 SECONDS. And you think that's SLOW????

-SQLBill

Posting advice: FAQ481-4875
 
IMHO no heavy iron will help much in a case like this.

Either go the SQL route (fewer, set-based statements) or the C# route (buy 8 gigs of RAM [smile], preload everything into trees/hash maps/whatever, and evaluate the rules in middle-tier code). Or maybe both, dunno... more information would be nice.
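
Roughly what that middle-tier route could look like (a sketch only; the table and columns here are made up, adapt to your schema):

[code]
using System.Collections;
using System.Data.SqlClient;

class MiddleTierRules
{
    public static Hashtable PreloadApplications(SqlConnection conn)
    {
        // One set-based SELECT instead of hundreds of tiny ones; rows are
        // keyed by Id so each rule can probe them in constant time.
        // dbo.Applications and its columns are hypothetical.
        Hashtable rowsById = new Hashtable();
        SqlCommand cmd = new SqlCommand(
            "SELECT Id, Amount, Status FROM dbo.Applications", conn);
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                rowsById[reader["Id"]] = new object[] { reader["Amount"], reader["Status"] };
            }
        }
        return rowsById;
    }
}
[/code]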

------
[small]select stuff(stuff(replicate('<P> <B> ', 14), 109, 0, '<.'), 112, 0, '/')[/small]
[banghead]
 
Yeah, I think that's slow. SQL Server and this application are on a dedicated server with 2 gigs of RAM and 2 CPUs, and these are all simple TSQL statements with minimal joins.

We had an old version of this engine where the bulk of the logic lived in a stored procedure, and it ran all these statements in under 2 seconds. However, we were calling every one of those statements twice and the data came back a jumbled mess, so we figured there was room for improvement by moving a lot of the logic into C#.

It seems as though the problem isn't so much the retrieval of data as the protocol overhead between C# and SQL Server. If I had a way to get all those statements into SQL Server in one call (or maybe in large sections) and have it return all the results, I could make that work. It would be even better if we could get a more direct (read: quicker) connection to SQL Server.
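
On the receiving end, I'm picturing something like this, if the statements could all go over as one batch (a sketch; it assumes one result set per rule, coming back in order):

[code]
using System.Data.SqlClient;

class BatchResults
{
    public static bool[] ReadRuleHits(SqlCommand batchedCmd, int ruleCount)
    {
        // One ExecuteReader call; NextResult() then steps through each
        // statement's result set without another round trip.
        bool[] hit = new bool[ruleCount];
        using (SqlDataReader reader = batchedCmd.ExecuteReader())
        {
            int i = 0;
            do
            {
                hit[i++] = reader.HasRows; // did this rule's SELECT return anything?
            }
            while (i < ruleCount && reader.NextResult());
        }
        return hit;
    }
}
[/code]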

It's on SQL Server 2000 btw, so no CLR for us. :(

What more information would you like?
 
> It seems as though the problem isn't so much the retrieval of data as the protocol overhead between C# and SQL Server.

In that case... how about a simple test? Copy those 250+ statements and run them all at once in Query Analyzer. If that batch finishes significantly faster (say, by more than 50%), then the overhead is coming from outside the database.
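
If you want the same measurement from the C# side, something along these lines (a sketch, reusing the ruleStatements array from above) puts a number on the one-by-one cost so you can compare it against the Query Analyzer run:

[code]
using System;
using System.Data.SqlClient;

class RoundTripTimer
{
    public static void TimeOneByOne(SqlConnection conn, string[] ruleStatements)
    {
        DateTime start = DateTime.Now;
        foreach (string sql in ruleStatements)
        {
            using (SqlCommand cmd = new SqlCommand(sql, conn))
            using (SqlDataReader r = cmd.ExecuteReader())
            {
                while (r.Read()) { } // drain results so transfer time is included
            }
        }
        TimeSpan elapsed = DateTime.Now - start;
        Console.WriteLine("One round trip per statement: {0} ms", elapsed.TotalMilliseconds);
    }
}
[/code]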

------
[small]select stuff(stuff(replicate('<P> <B> ', 14), 109, 0, '<.'), 112, 0, '/')[/small]
[banghead]
 
I ended up taking a different approach to solve this. In the engine, I was selecting all the columns I planned to check, and then just checking the row count in C# to see whether any data had been selected. Given the nature of this engine, most of the result sets were empty anyway. As it turns out, returning empty result sets takes a lot longer than selecting counts of records with the same criteria and inner joins in SQL Server.

So now I select the count of records that meet the criteria, and only if any records match do I go select them with a second statement. Believe it or not, I've cut the entire process down to an average of 1.5 seconds. Not too shabby. ;)
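
In code it boils down to something like this (a sketch; countSql and selectSql are stand-ins for our generated statements, where countSql is the same criteria and joins wrapped in SELECT COUNT(*)):

[code]
using System.Data.SqlClient;

class CountFirst
{
    public static SqlDataReader SelectIfAny(SqlConnection conn, string countSql, string selectSql)
    {
        // Cheap existence check first; the wide SELECT only runs for the
        // minority of rules that actually match anything.
        using (SqlCommand countCmd = new SqlCommand(countSql, conn))
        {
            if ((int)countCmd.ExecuteScalar() == 0)
            {
                return null; // nothing matched, skip the second query
            }
        }
        return new SqlCommand(selectSql, conn).ExecuteReader();
    }
}
[/code]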
 