Using special table "deleted" in dynamic SQL

mayaugust (Programmer), Nov 14, 2005

Hello All,

I'm new to SQL programming and was looking for some help ...

There is an update trigger on a table which does

SELECT * INTO #DELETED FROM DELETED
initially (so that the old values are captured), and later at some point in the logic the name of the updated column is determined and stored in the variable @UPDTCOLNAME (there are more than 40 columns in the table, and we need to figure out exactly which column was updated for the row).

The following statements are then executed to determine the old value of the updated column.

SET @SQLQ = N'SELECT @OLD_VALUE = '+@UPDTCOLNAME+' FROM #DELETED WHERE SID = '''+@DEL_SID+''' '

EXEC SP_EXECUTESQL @SQLQ, N'@OLD_VALUE VARCHAR(2000) OUT',@OLD_VALUE OUT


Is there a way I can achieve the same thing without using the temporary table #DELETED, by somehow using the special table DELETED directly? The reason I need this is that when I send huge updates, let's say 100K records, the SELECT * INTO #DELETED FROM DELETED runs for minutes and slows down the processing.
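
To make the whole pattern concrete, here is a minimal skeleton of the trigger described above; the table name EMPLOYEE, the key column SID and the variable sizes are placeholders, not the real schema:

CREATE TRIGGER TRG_EMPLOYEE_UPD ON EMPLOYEE
AFTER UPDATE
AS
BEGIN
    DECLARE @UPDTCOLNAME VARCHAR(128),
            @DEL_SID     VARCHAR(20),
            @OLD_VALUE   VARCHAR(2000),
            @SQLQ        NVARCHAR(4000)

    -- copy the DELETED pseudo-table so the dynamic SQL below can see it
    SELECT * INTO #DELETED FROM DELETED

    -- ... existing logic that determines @UPDTCOLNAME and @DEL_SID goes here ...

    SET @SQLQ = N'SELECT @OLD_VALUE = ' + @UPDTCOLNAME
              + N' FROM #DELETED WHERE SID = ''' + @DEL_SID + ''''

    EXEC sp_executesql @SQLQ, N'@OLD_VALUE VARCHAR(2000) OUT', @OLD_VALUE OUT
END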

Thanks for any suggestions.

Pfe
 
You can compare a field to see whether its value has actually been modified by doing something like the following (joining inserted to deleted on your key column):

IF EXISTS (SELECT 1 FROM inserted i JOIN deleted d ON i.SID = d.SID
           WHERE i.FieldName <> d.FieldName)

Not sure how you are doing the check now for a field being updated, but be wary of the IF UPDATE() check, as it always returns true if the field appears on the left side of an assignment in the SET clause, even if it is being set to its existing value. Many front ends simply grab all of the fields on the screen and update every one of them. Just a precaution.

In general you could do the exact same selection you are doing now directly against the DELETED table; there is no need to insert the records into a #temp table. (Depending on everything else that is going on, you could consider using a table variable; however, with 100K rows that is probably not a good idea.)
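
For example, something along these lines right in the trigger (just a sketch; the key column SID and the column tele_no are stand-ins):

SELECT  d.tele_no AS old_value,
        i.tele_no AS new_value
FROM    deleted d
        INNER JOIN inserted i ON i.SID = d.SID
WHERE   ISNULL(d.tele_no, '') <> ISNULL(i.tele_no, '')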

I'm not sure, looking at your code, how you handle the fact that multiple columns could have been updated for a particular row.

If you could tell me (or others here) exactly what it is you are trying to do (why you need to know which field changed, what you plan to do with it once you do know, and what you will use the old/new values of the field for), I might be able to suggest something.
 
Druer,

There is a metadirectory which subscribes to data from SQL Server (which holds a dump of PeopleSoft data). The software piece that actually bridges the SQL data and the metadirectory data (call it the connector) needs to know which columns were updated for a record, so that it does not have to spend time figuring that out itself or, in the worst case, blindly compare the data of more than 60 columns for each row between the metadirectory and SQL Server. (That would be overkill.)

For instance, when an update happens on a particular record, it might be only the phone number that gets updated, so I need to explicitly tell the connector to update the telephone attribute for the object in the metadirectory by removing the old value and adding the new value.

So there is code in the trigger (it uses the system tables) to figure out exactly which column changed on a particular row; that part is not an issue. Now I need to get the old and new values of only the updated column from the deleted and inserted tables. Since the name of the column that got updated (say tele_no) is stored in a variable, I cannot use it directly in a SELECT against either inserted or deleted, so I need to use dynamic SQL to construct the SQL string to be executed. But when that string is executed, the deleted and inserted tables are not visible to the executed statement!

This is why I was dumping the data into a #temp table and then referring to that table in the dynamic SQL statement (the #temp table is visible to the executed command).
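
As a side note, only the column name really has to be spliced into that dynamic string; the SID value can be handed to sp_executesql as a normal parameter (a sketch reusing the variable names from the first post, with an assumed VARCHAR(20) for @DEL_SID):

SET @SQLQ = N'SELECT @OLD_VALUE = ' + @UPDTCOLNAME
          + N' FROM #DELETED WHERE SID = @DEL_SID'

EXEC sp_executesql @SQLQ,
     N'@OLD_VALUE VARCHAR(2000) OUT, @DEL_SID VARCHAR(20)',
     @OLD_VALUE OUT, @DEL_SID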

Thanks
 
Counter-question: what if two columns get modified at once? Or even worse, two rows?

------
"There's a man... He's bald and wears a short-sleeved shirt, and somehow he's very important to me. I think his name is Homer."
(Jack O'Neill, Stargate)
[banghead]
 
There are cursors defined which will handle the case where multiple rows get updated.

pfe
 
Problem is, there is no (easy) way to determine the names of updated columns based on the inserted/deleted contents.

The UPDATE() check requires the column name in advance.
COLUMNS_UPDATED() sucks, plain and simple - for various reasons.

Both methods report a "global update" state in the case of multi-row updates.
Both methods return true in the case of "dummy" updates (e.g. UPDATE table SET column = column).
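
For reference, the two checks look roughly like this inside the trigger body (tele_no and the ordinal 5 are placeholders):

-- UPDATE() needs the column name hard-coded at write time
IF UPDATE(tele_no)
    PRINT 'tele_no appeared in the SET list (even if set to its old value)'

-- COLUMNS_UPDATED() is a bitmask indexed by column ordinal;
-- this tests the column whose ordinal position is 5
IF (SUBSTRING(COLUMNS_UPDATED(), (5 - 1) / 8 + 1, 1) & POWER(2, (5 - 1) % 8)) > 0
    PRINT 'column 5 appeared in the SET list'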

So... can you join inserted and deleted on the primary key, use a cursor over the joined result, and load all the column values into @variables on every FETCH? That would give you all the information needed to compare old vs. new values without dynamic SQL.
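
Something like this, sketched with just two of the columns and assuming SID is the key (the variable sizes are guesses):

DECLARE @SID VARCHAR(20),
        @OLD_TELE  VARCHAR(50),  @NEW_TELE  VARCHAR(50),
        @OLD_EMAIL VARCHAR(100), @NEW_EMAIL VARCHAR(100)

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT i.SID, d.tele_no, i.tele_no, d.email, i.email
    FROM   inserted i
           INNER JOIN deleted d ON d.SID = i.SID

OPEN cur
FETCH NEXT FROM cur INTO @SID, @OLD_TELE, @NEW_TELE, @OLD_EMAIL, @NEW_EMAIL
WHILE @@FETCH_STATUS = 0
BEGIN
    IF ISNULL(@OLD_TELE, '') <> ISNULL(@NEW_TELE, '')
        PRINT 'tele_no changed for ' + @SID    -- placeholder for the real handling
    IF ISNULL(@OLD_EMAIL, '') <> ISNULL(@NEW_EMAIL, '')
        PRINT 'email changed for ' + @SID

    FETCH NEXT FROM cur INTO @SID, @OLD_TELE, @NEW_TELE, @OLD_EMAIL, @NEW_EMAIL
END
CLOSE cur
DEALLOCATE cur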

------
"There's a man... He's bald and wears a short-sleeved shirt, and somehow he's very important to me. I think his name is Homer."
(Jack O'Neill, Stargate)
[banghead]
 
I see what you are saying, but imagine the manageability and readability of the code when the table can contain around 90 columns :(

pfe
 
I see what you are saying, but imagine the manageability and readability of the code when the table can contain around 90 columns

This isn't meant to be rude or condescending, but that's why databases are normalised and why it can be highly inappropriate to build tables with that number of columns. It's unfortunate, but this is perhaps a problem brought about by inappropriate database design, or purely by the required system functionality, and it may now be unavoidable without changing the database structure.

Rhys
The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense. (Edsger Dijkstra)

Church of the Flying Spaghetti Monster
 
Well, if you have dealt with a PeopleSoft team within a company, asking them to provide data to a subscriber application, then you will know why at times you get stuck with this kind of inefficient database design.
 