
Insert Statement Slows Down/Fails. Please Help!!!

Status
Not open for further replies.

txmed

Programmer
Mar 23, 2006
65
US
Greetings. I've been having a problem with one of our customers' databases and it's driving me insane. It's a pretty lengthy problem, so please bear with me.

A few years ago I wrote an application that stores client-visit information into an SQL Server 2000 database. Here is a copy of the design of the table, which is called "tblVisits":

Code:
[COLOR=red]Column Name             Data Type        Length[/color]
VisitID                 numeric          9 (PKey)
PrimaryVisitID          numeric          9
ClientID                varchar          20
VisitDate               datetime         8
VisitTimeIn             datetime         8
VisitTimeOut            datetime         8
UserName                varchar          20
DepartmentID            int              4
Notes1                  nvarchar         4000
Notes2                  nvarchar         4000
BusinessID              int              4
AssignedToUser          bit              1

*NOTE* I'm sure a few people are going to mention that the lengths of the Notes1 and Notes2 fields might cause a problem. Please keep in mind that this application has been in production for 3+ years and has never been a problem.



Recently, we wrote an upgrade to this application and gave it to this customer to test. We've made some additional changes to the "tblVisits" table. Here is what the current table looks like now:

Code:
[COLOR=red]Column Name             Data Type        Length[/color]
VisitID                 numeric          9 (PKey)
PrimaryVisitID          numeric          9
ClientID                varchar          20
VisitDate               datetime         8
VisitTimeIn             datetime         8
VisitTimeOut            datetime         8
UserName                varchar          20
DepartmentID            int              4
Notes1                  nvarchar         4000
Notes2                  nvarchar         4000
BusinessID              int              4
AssignedToUser          bit              1
[COLOR=blue]
RoleID                  int              4
EncryptionType          smallint         2
IsNewVisit              bit              1
IsDeleted               bit              1
[/color]


Now, when the application attempts to write to this table, the INSERT statement slows down dramatically. Before, the INSERT statement would take less than a second to run. Now it takes 45+ seconds and sometimes times out.

Does anyone know why this might have happened? Could this problem be because of the changes we made to this table? The application does not slow down on any of the other tables, just this one.



Here's some additional information on this problem:

1) SQL Server Information:
- SQL Server 2000
- All service packs have been applied.
- Our database is the only database on this server.


2) Index:
The table is only indexed by its primary key, the VisitID field.


3) Table Row Size:
I have a user-defined function that returns the number of bytes left to use if I wanted to create more rows. When I run this function against this table, I get -30.


4) Largest Row Length:
I also wrote a query that uses the DATALENGTH() function to get the total size of data for each record. The record with the largest row length was 8,013.
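The exact query wasn't posted, but a sketch along these lines (using the column list above) would produce that kind of per-record total. Note it only counts the data bytes, not the row header or offset-array overhead:

```sql
-- Sketch: total DATALENGTH of every column per row, largest rows first.
-- DATALENGTH returns bytes, so nvarchar data counts 2 bytes per character.
SELECT TOP 10
    VisitID,
    DATALENGTH(VisitID) + DATALENGTH(PrimaryVisitID)
  + ISNULL(DATALENGTH(ClientID), 0) + ISNULL(DATALENGTH(VisitDate), 0)
  + ISNULL(DATALENGTH(VisitTimeIn), 0) + ISNULL(DATALENGTH(VisitTimeOut), 0)
  + ISNULL(DATALENGTH(UserName), 0) + ISNULL(DATALENGTH(DepartmentID), 0)
  + ISNULL(DATALENGTH(Notes1), 0) + ISNULL(DATALENGTH(Notes2), 0)
  + ISNULL(DATALENGTH(BusinessID), 0) + DATALENGTH(AssignedToUser)
    AS RowBytes
FROM tblVisits
ORDER BY RowBytes DESC
```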


5) Bulk Copy Program:
Since I was not able to duplicate this problem on my development machine, I asked my customer if he could get me a copy of the table so that I can then import it into my development machine for testing. We used the Bulk Copy Program (BCP) utility to export his data to a .bcp file and then he sent it to me. I then used the BCP utility to import his data to my test database. After I did this, I was still not able to duplicate his problem.
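For reference, the BCP calls would look roughly like this (server, database, and file names here are placeholders; `-n` keeps SQL Server's native format and `-T` uses a trusted connection):

```
rem On the customer's server: export the table in native format
bcp CustomerDB.dbo.tblVisits out tblVisits.bcp -n -S CUSTSERVER -T

rem On the development machine: import into the test database
bcp TestDB.dbo.tblVisits in tblVisits.bcp -n -S DEVSERVER -T
```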


(txmed)
 
Did you look at the following:
- the execution plan
- fragmentation levels
- whether statistics are out of date
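On SQL Server 2000, the last two checks can be run roughly like this (table name from the thread; `sysindexes` is the 2000-era system table):

```sql
-- Fragmentation of the table's indexes
DBCC SHOWCONTIG ('tblVisits')

-- When statistics were last updated, per index
SELECT name, STATS_DATE(id, indid) AS LastUpdated
FROM sysindexes
WHERE id = OBJECT_ID('tblVisits')
```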



>>Please keep in mind that this application has been in production for a 3+ years and has never been a problem.


right, things happen my friend especially if it is a leap year....2008 check

>>Now, it takes 45+ seconds and sometimes times out.

Run a trace, capture the INSERT statement, and run it from Query Analyzer (a timeout almost never occurs in Query Analyzer; this way you can also look at the plan).

Denis The SQL Menace
--------------------
SQL Server Code,Tips and Tricks, Performance Tuning
SQLBlog.com, Google Interview Questions
 
I recommend you run a DBCC INDEXDEFRAG. Look up the correct syntax in Books Online.
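For reference, the SQL Server 2000 syntax is roughly as follows; the database name and primary-key index name here are assumptions, not from the thread:

```sql
-- DBCC INDEXDEFRAG (database, table, index) -- defragments while the table stays online
DBCC INDEXDEFRAG (CustomerDB, tblVisits, PK_tblVisits)
```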

Also, #3, your UDF: I suspect it's wrong. When you use an [!]n[/!]varchar data type, it actually uses 2 bytes per character to store the Unicode data. One nvarchar(4000) column takes up most of the allowed space. You could be losing data and not even know it. Now, I'm not suggesting that you immediately correct this problem, but I would feel pretty bad if I didn't at least mention it.
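The 2-bytes-per-character point is easy to verify with a quick check:

```sql
-- varchar stores 1 byte per character, nvarchar stores 2
SELECT DATALENGTH('abcd')  AS VarcharBytes,   -- 4
       DATALENGTH(N'abcd') AS NvarcharBytes   -- 8
-- So one fully used nvarchar(4000) column is 8,000 bytes on its own,
-- against SQL Server 2000's 8,060-byte maximum row size.
```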
 
Has the customer updated statistics since the table change? Also has the table been reindexed? These two actions could help.

Personally I would consider a redesign where notes are in a separate table. This way you can have as many notes as you need without running into space issues or having to change the table structure when you run out of space (which may be soon) in your two fields to add Notes3. Or you could change them to ntext which stores the data differently but is harder to work with.
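A sketch of that redesign (the tblVisitNotes name comes up later in the thread; the column names and data types here are illustrative):

```sql
-- One row per note, keyed back to the visit; as many notes per visit as needed,
-- and each note row stays comfortably under the row-size limit.
CREATE TABLE tblVisitNotes (
    NoteID   numeric(9)     IDENTITY NOT NULL PRIMARY KEY,
    VisitID  numeric(9)     NOT NULL REFERENCES tblVisits (VisitID),
    NoteSeq  int            NOT NULL,   -- 1 = Notes1, 2 = Notes2, 3 = ...
    NoteText nvarchar(4000) NOT NULL
)
```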

"NOTHING is more important in a database than integrity." ESquared
 
Now Denis, sometimes that is the best option, however much we try to avoid it, especially in 2000. Besides, misery loves company (not that we have a lot of ntext to deal with, but well, you know why I'm not so cheery these days).

"NOTHING is more important in a database than integrity." ESquared
 
Okay, now I'm really at a loss. First of all, I would like to thank SQLDenis, SQLSister, and gmmastros for their prompt replies. You all gave excellent responses and I will be sure to look into them.


Since my original posting, I've worked with my customer to see if we can get this problem fixed. Here is what we did:

1) First, I generated a script of the "tblVisits" table and created a new table called "tblVisits2". I then proceeded to turn off this new table's Identity property for the time being.

2) Using the .bcp file we generated yesterday, we then imported the "tblVisits" data into the newly created "tblVisits2" table. Everything went in fine.

3) We then ran some tests on the "tblVisits2" table and everything succeeded. The queries ran like normal, we ran DBCC SHOWCONTIG(), and everything appeared to be normal.

4) Once we were satisfied that the "tblVisits2" table was acting correctly, we killed all of the current user connections to the server, and then renamed the "tblVisits" and "tblVisits2" tables. We changed "tblVisits" to "tblVisits_Old" and changed "tblVisits2" over to "tblVisits".

5) Once we changed the table names, we ran some tests and the same problem occurred. When attempting to write to the new "tblVisits" table, the transaction hangs for 45+ seconds.
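For anyone following along, the swap in steps 4-5 would be done with something like:

```sql
EXEC sp_rename 'tblVisits',  'tblVisits_Old'
EXEC sp_rename 'tblVisits2', 'tblVisits'
-- Caveat: sp_rename does not rename the constraints, indexes, or triggers
-- attached to a table, so a PK named after the old table keeps that name.
```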



This is my question: could it be possible that the name of the table has been corrupted in SQL Server? Logically, I don't see how the table works fine when named "tblVisits2", but when we rename it to "tblVisits", the problem occurs.


Here's another interesting thing: yesterday, while we were trying to fix this problem, I created a copy of the "tblVisits" table, calling it "tblVisits2". The table had the same primary key as the original table. When we were not able to fix the problem, we deleted "tblVisits2" using Enterprise Manager (right-clicked and chose Delete).

Now, when I re-created the "tblVisits2" table today, I attempted to create the primary key for this table. SQL Server gave me an error message saying that the constraint "PK_tblVisits2" already exists. How could this be? Shouldn't this constraint have been deleted when we deleted the "tblVisits2" table? If this is happening, couldn't it be possible that SQL Server has somehow corrupted the name of the "tblVisits" object, or some other object that references it?


(txmed)
 
Are you sure you didn't rename a table at some point? The constraints on a table don't get renamed with it.

sp_help 'tablename'
sp_helpconstraint 'tablename'
 
ESquared,

Thank you for your response. I don't think we renamed it. To be honest, we've tried so many different things these past few days, all of it has sort of blended together.

I'll look into the sp_help and sp_helpconstraint procedures. Do you have any other ideas why this might be happening?

(txmed)
 
Good news! I was finally able to replicate and fix this problem on my development machine!!! I'm going to attempt to explain this as best I can.

I was able to setup this problem on my development machine by performing these steps:

1) Create a new copy of the "tblVisits" table.
2) I then used the BCP utility to import the customer's data into this table.
3) Made sure the Notes1 and Notes2 fields have a length of 4000. During my testing, I had this set to 3,900.



Now here's the strange part:

I created a special procedure in the app that would do a simple insert into the new "tblVisits" table (instead of having to go through the normal steps of creating a record). Nothing special; just a simple INSERT statement. The procedure would write to the table and then show me, in a message box, the primary key value of the newly created record. I noticed that I could run this procedure as many times as I wanted to and view the data in Query Analyzer. However, the moment I viewed the data in Enterprise Manager (right-clicking tblVisits, then Open Table > Return All Rows), the procedure would fail. When I closed the table in Enterprise Manager and ran the procedure again, the records were inserted fine. Very weird!?!



This is what I'm thinking:

There are some records in the database that are very close to their maximum size limit. From my reading on the web, a record in SQL Server not only takes up bytes for its data, but also bytes for the page header (96 bytes) and the row offset array. I found a webpage on row structure very helpful for more info.

I'm thinking that when the data is viewed in Query Analyzer, only the data itself is returned. But when the data is viewed in Enterprise Manager, it's getting the additional page headers and row offset arrays. I think that is what's messing up SQL Server and causing my INSERT statement to fail.
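For reference, the storage arithmetic behind the limit (sizes from the SQL Server 2000 documentation):

```sql
-- A SQL Server 2000 data page is 8,192 bytes:
--   8,192 total - 96-byte page header = 8,096 bytes for rows + row offset array
-- The maximum size of a single row is 8,060 bytes; an INSERT that would
-- exceed it fails with error 511 ("Cannot create a row of size ...").
SELECT 8192 - 96 AS BytesForRowsAndOffsets   -- 8096
```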



So how do I fix it?

Now that I was able to reproduce the error, I decided to see what would happen if I decreased the size of the Notes1 and Notes2 columns. When you lower the length of columns in SQL Server, it will (at least in Enterprise Manager) automatically truncate the data to make sure it fits in the column's new size. I lowered the size of the notes fields, and then ran the app again. The app wrote to this table with no problems.

The problem now is that I had to lower the size of the columns from 4000 to somewhere around 500-1000. That's bad in terms of all the data it's truncating (losing). However, with my new version I built the new "tblVisitNotes" table, which normalizes the data even more. I think I can migrate the notes fields over to the new "tblVisitNotes" table, then decrease the size of the notes fields within the "tblVisits" table. This will free up that excess space and fix this problem permanently.
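A sketch of that migration, assuming a tblVisitNotes table with (VisitID, NoteSeq, NoteText) columns (those names are illustrative, as is the nullability of the notes fields):

```sql
-- Copy the existing notes out, one row per non-empty note field
INSERT INTO tblVisitNotes (VisitID, NoteSeq, NoteText)
SELECT VisitID, 1, Notes1 FROM tblVisits WHERE Notes1 IS NOT NULL
UNION ALL
SELECT VisitID, 2, Notes2 FROM tblVisits WHERE Notes2 IS NOT NULL

-- Then shrink the originals so the row fits well under 8,060 bytes
-- (after verifying no remaining data exceeds the new length)
ALTER TABLE tblVisits ALTER COLUMN Notes1 nvarchar(500) NULL
ALTER TABLE tblVisits ALTER COLUMN Notes2 nvarchar(500) NULL
```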


Thank you again to everyone that helped me with this problem. Maybe my findings here will save somebody a lot of headache and troubles in the future.

(txmed)
 