table variable & optimize for unknown


VBmim

Programmer
Jun 25, 2001
361
BE
Hello,

In my application, I have a query which takes a large number of items in an 'in' clause:

select [columns]
from [table]
where
[table].[id] in (1,2,3,4,.........,1000)


These queries are run from a .NET application that uses ADO and a native SQL connection. The number of items in the 'in' list changes depending on the user running the application: it can be 1000 items, or it can be only 1 or 2...

I am in the process of changing such large 'in' lists to table variables, both to use parameterized queries and to limit the amount of plan cache used by our database servers. The query now looks like this and works:

select [columns]
from [table]
where
[table].[id] in (select id from @table_var)
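
Roughly, this is how the table variable is declared and filled (a simplified sketch, assuming a user-defined table type passed from the application; the type name dbo.IdList and the literal values are illustrative, not from the actual code):

Code:
-- Sketch only: dbo.IdList is an assumed, illustrative type name
CREATE TYPE dbo.IdList AS TABLE (id int NOT NULL);
GO

DECLARE @table_var dbo.IdList;
INSERT INTO @table_var (id) VALUES (1), (2), (3);   -- the application supplies the real id list

SELECT [columns]
FROM   [table]
WHERE  [table].[id] IN (SELECT id FROM @table_var);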


However, a SQL Server specialist gave me the tip that I should use 'OPTION (OPTIMIZE FOR UNKNOWN)' to prevent bad query plan caching if the first occurrence of the query has only one item and all subsequent queries have 1000.
I have been trying to use the optimize hint only for the table variable with 'OPTION (OPTIMIZE FOR (@table_var UNKNOWN))', but I get the error "Must declare the scalar variable @table_var". I can use the optimize hint for other (non-table) variables just fine. I can use the general 'OPTION (OPTIMIZE FOR UNKNOWN)' without errors, but I don't know whether my table variable is also covered, since the previous syntax failed.
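
To make the three variants concrete, here is a sketch (the scalar variable @status and the column [status] are illustrative only, not from the real query):

Code:
DECLARE @status int = 1;                      -- illustrative scalar variable
DECLARE @table_var TABLE (id int NOT NULL);   -- the table variable discussed above

-- OPTIMIZE FOR works for a scalar variable...
SELECT [columns]
FROM   [table]
WHERE  [table].[status] = @status
OPTION (OPTIMIZE FOR (@status UNKNOWN));

-- ...but naming the table variable fails with
-- "Must declare the scalar variable @table_var":
-- OPTION (OPTIMIZE FOR (@table_var UNKNOWN))

-- The general form compiles without error:
SELECT [columns]
FROM   [table]
WHERE  [table].[id] IN (SELECT id FROM @table_var)
OPTION (OPTIMIZE FOR UNKNOWN);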

Can the optimizer handle different sizes of the table variable within the same query plan in another way?

Thanks in advance for the responses,

Kind greetings

SqlMim
 
[tt]select [columns]
from [table]
where
[table].[id] in (select id from @table_var)[/tt]

I would like to suggest an alternative query, one that will likely perform better.

Code:
select [columns]
from   [table] 
       Inner Join @table_var As Table_Var
         On [table].[id] = Table_Var.Id

Regardless of the query, the table variable will always be scanned because there is no WHERE clause on it. However, access to the [table] that you are joining to could be a seek or a scan, depending on your statistics and the number of matching values. If the table is small, it likely won't matter. If the table is large (millions of rows), it likely will matter.

Optimize For Unknown should only be used when the query optimizer is using an inefficient query plan. Do you know this is happening? It's possible that you are trying to optimize something that doesn't need to be optimized. Are there performance problems running the query?

Changing your query as I suggested may resolve the performance problems for you. Please give it a try.

-George
Microsoft SQL Server MVP
My Blogs SQLCop twitter
"The great things about standards is that there are so many to choose from." - Fortune Cookie Wisdom
 
One extra possibility, in addition to what George suggested, is to select the items from the @Table_var into a #TempTable first and create an index on this table.
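
Something along these lines (a sketch; the index and alias names are illustrative):

Code:
-- Copy the ids into a temp table, which can be indexed and has statistics
SELECT id
INTO   #TempTable
FROM   @table_var;

CREATE CLUSTERED INDEX IX_TempTable_Id ON #TempTable (id);

SELECT [columns]
FROM   [table]
       INNER JOIN #TempTable AS T
         ON [table].[id] = T.id;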

But all of this will make sense only if you find that performance is suffering.

PluralSight Learning Library
 
Hello,

Thanks for your responses.

@gmmastros
So if I understand you correctly, the join will perform better when the [table] has millions of rows while the [table_variable] has only 1000. And it shows only a small difference when the row counts are more or less equal?

@markros
My table variable has only 1 column, and I could make it a PK (which would create an index), but since every row in the table variable is needed I do not see the need to. Am I correct?

Kind regards,

SqlMim
 
So, when you define the type, do you define this field to be a PK?

I know that for a large number of items, temp tables work better than table variables. However, most likely you don't need anything special.

PluralSight Learning Library
 
If you put a primary key on the table variable, I encourage you to insert the data already sorted on the PK column. This will be more efficient.
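
For example, something like this (a sketch; [source_of_ids] is just a placeholder for wherever the id list comes from):

Code:
-- Table variable with a primary key on the id column
DECLARE @table_var TABLE (id int NOT NULL PRIMARY KEY);

-- Insert the ids already sorted on the PK column
INSERT INTO @table_var (id)
SELECT id
FROM   [source_of_ids]
ORDER BY id;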

Are you experiencing performance problems? If so, can you give us some numbers? Like... when there are 2 rows in the table var, it's instant; with 1,000 rows it takes 12 seconds.

-George
Microsoft SQL Server MVP
My Blogs
SQLCop
twitter
"The great things about standards is that there are so many to choose from." - Fortune Cookie Wisdom
 
Hello,

I am following up on this thread.

I have done some testing with my queries and here are my results:
* The inner join performed a lot better than the 'in' clause.
* Adding the hint 'OPTIMIZE FOR UNKNOWN' helps to improve performance when the number of items changes over time. If I do not include this hint and run the query for 2 items and then for 1000, the latter takes 20 seconds. With the hint, both queries take only 3 seconds (see the sketch after this list).
* It is not possible to specify a table variable in the OPTIMIZE FOR hint, because a table variable does not have statistics.
* In my case, creating the table variable with only one field, defined as a PK, did not help performance. It gave me query plans I cannot explain; since I am happy with the performance without the PK, I will leave it this way.
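
For reference, the shape of the query that gave the 3-second timings (a sketch reconstructed from the thread; the column and table names are placeholders):

Code:
SELECT [columns]
FROM   [table]
       INNER JOIN @table_var AS Table_Var
         ON [table].[id] = Table_Var.id
OPTION (OPTIMIZE FOR UNKNOWN);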

The primary reason why we started to use table variables is that the plan cache on our servers was growing too big. Every user was passing their own list of items, which resulted in a large number of near-duplicate plans in the cache.
It can only be a bonus if we can achieve an overall performance gain as well.

Kind greetings,

DbMim
 