I am trying to move away from a stored procedure that returns a single row and is called many times from the client side, and instead take the data from the client and perform all of the work server-side.
To call the stored procedure many times from the client, getting one row back, for our largest possible dataset takes roughly 1 minute to complete.
I thought for sure that moving it all over to a stored procedure on the database server and handling it there would make it faster. But the largest dataset still takes exactly 1 minute to complete.
It is probably still faster than the client-side solution - but not nearly as fast as I thought it would be.
The server hardware is not the issue. We have a quad-processor machine (well, dual HT Xeon P4s, which the OS sees as quad CPU), 4GB RAM, and multiple high-speed SCSI HDs - I have watched the load on the machine during this query and the server just yawns. The load on any processor never goes over 25% and the memory used is only about 400MB (and doesn't go up or down during the query).
The dataset returned is only about 500 rows. The largest table we look at has fewer than 12K rows in it. So as far as I am concerned, this is a tiny database.
The query I need to run pulls data from 5 database tables (well, one is a view) and 1 table variable that is created earlier in the stored procedure.
The table variable is created very quickly - I have tested that part by commenting out everything after it and then tracking how long that takes (less than one second).
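For what it's worth, this is roughly how I timed that part (a simplified sketch - the real population of @AllUsers comes from our own tables, so the INSERT below is just a stand-in):

Code:
DECLARE @t0 datetime
SET @t0 = GETDATE()

-- build the table variable (stand-in for the real population logic)
DECLARE @AllUsers TABLE (UserID int, CompanyID int)
INSERT INTO @AllUsers (UserID, CompanyID)
SELECT user_id, company_id
FROM Users

-- everything after this point was commented out for the test
SELECT DATEDIFF(ms, @t0, GETDATE()) AS ElapsedMs   -- consistently under 1000 ms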
So the bottleneck is all in this query here:
Code:
SELECT
    au.UserID                UserID,
    au.CompanyID             CompanyID,
    u.long_name              UserName,
    sc.description           FLongName,
    cd.eq_type               EQ_Type,
    cd.s_decimals            SDecimals,
    cd.p_decimals            PDecimals,
    dbo.fn_round(SUM(tsr.TSR_NUMBER), cd.s_decimals, cd.s_rounding) TSRCount,
    cd.s_rounding            SRounding
FROM
    @AllUsers au,
    Users u,
    v_eq_attr eaid,
    sub_company sc,
    company_definitions cd,
    t_s_r tsr
WHERE
    u.user_id = au.UserID
    AND eaid.company_id = au.CompanyID
    AND sc.sc_id = eaid.sc_id
    AND cd.company_id = au.CompanyID
    AND tsr.company_id = au.CompanyID
    AND tsr.sc_id = eaid.sc_id
    AND tsr.user_id = au.UserID
    AND tsr.s_date <= @DateTo
    AND tsr.s_date >= '1/1/1990'
    AND tsr.Status = 'A'
GROUP BY
    au.UserID,
    u.long_name,
    cd.eq_type,
    sc.description,
    cd.s_decimals,
    cd.p_decimals,
    au.CompanyID,
    cd.s_rounding
HAVING
    dbo.fn_round(SUM(tsr.TSR_NUMBER), cd.s_decimals, cd.s_rounding) <> 0.0
I have changed the names of the tables and fields so that they are abstracted from what our data actually is (legal contract issue) - but the structure of the query is the same.
Could anyone please suggest how to increase performance on this?
I am aware that I am using the "old" Microsoft way of doing joins - but I am not SQL-savvy enough to know how to write a 5-table join the ANSI way - so if you think that would make a difference, I am all ears (my rough attempt at the ANSI form is below, but I will need some assistance to get it right).
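Just so I'm not asking completely blind, here is my rough guess at what the ANSI-style version of the same joins would look like (a sketch only - I'm not confident I've split the ON clauses out correctly, which is partly why I'm asking):

Code:
SELECT
    au.UserID, au.CompanyID, u.long_name UserName, sc.description FLongName,
    cd.eq_type EQ_Type, cd.s_decimals SDecimals, cd.p_decimals PDecimals,
    dbo.fn_round(SUM(tsr.TSR_NUMBER), cd.s_decimals, cd.s_rounding) TSRCount,
    cd.s_rounding SRounding
FROM @AllUsers au
    INNER JOIN Users u                ON u.user_id = au.UserID
    INNER JOIN v_eq_attr eaid         ON eaid.company_id = au.CompanyID
    INNER JOIN sub_company sc         ON sc.sc_id = eaid.sc_id
    INNER JOIN company_definitions cd ON cd.company_id = au.CompanyID
    INNER JOIN t_s_r tsr              ON tsr.company_id = au.CompanyID
                                      AND tsr.sc_id = eaid.sc_id
                                      AND tsr.user_id = au.UserID
WHERE
    tsr.s_date <= @DateTo
    AND tsr.s_date >= '1/1/1990'
    AND tsr.Status = 'A'
GROUP BY
    au.UserID, u.long_name, cd.eq_type, sc.description,
    cd.s_decimals, cd.p_decimals, au.CompanyID, cd.s_rounding
HAVING
    dbo.fn_round(SUM(tsr.TSR_NUMBER), cd.s_decimals, cd.s_rounding) <> 0.0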
Also, do you think it is possible that subqueries would be faster? If so, could you please give an example of how that might look?
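In case it helps, what I was picturing (and I may be completely off base) is pulling the t_s_r aggregation out into a correlated subquery so there is no GROUP BY at the outer level - though I realise this version would not reproduce the HAVING <> 0.0 filter and would return NULL where there are no matching t_s_r rows:

Code:
SELECT
    au.UserID, au.CompanyID, u.long_name UserName, sc.description FLongName,
    cd.eq_type EQ_Type, cd.s_decimals SDecimals, cd.p_decimals PDecimals,
    (SELECT dbo.fn_round(SUM(tsr.TSR_NUMBER), cd.s_decimals, cd.s_rounding)
     FROM t_s_r tsr
     WHERE tsr.company_id = au.CompanyID
       AND tsr.sc_id = eaid.sc_id
       AND tsr.user_id = au.UserID
       AND tsr.s_date <= @DateTo
       AND tsr.s_date >= '1/1/1990'
       AND tsr.Status = 'A') TSRCount,
    cd.s_rounding SRounding
FROM
    @AllUsers au,
    Users u,
    v_eq_attr eaid,
    sub_company sc,
    company_definitions cd
WHERE
    u.user_id = au.UserID
    AND eaid.company_id = au.CompanyID
    AND sc.sc_id = eaid.sc_id
    AND cd.company_id = au.CompanyID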
If you want a description of how any given table looks in terms of how many columns, or how many rows it is likely to have, I can give more detail.
Thanks for any help - I'm already losing my hair, no need to pull it out as well.