
Table too wide

Status
Not open for further replies.

oracleSQLdba

IS-IT--Management
Nov 2, 2006
53
US
We're getting the "exceeds the maximum number of bytes per row (8060)" error in SQL 2000.

Does anyone have any KB articles, whitepapers, or something else from Microsoft which acknowledges the problem AND indicates it's OK to proceed?

The one article I've found from MSFT simply notes that inserts can fail.

I need a bit more than that for assurance for our environment.
 
You can create a table that has more than 8060 bytes per row, but you MAY lose data when updating or inserting data into the table.

My best advice is to create another table that has a 1 to 1 relationship and put some of your fields into this other table. Then, when you need the data that exists in the other table, simply join to that table to get it.
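A minimal sketch of that 1-to-1 split, using made-up table and column names (OrderMain, OrderWide, etc. are hypothetical, not from the original schema):

```sql
-- Hypothetical example of splitting a wide table in two.
-- OrderMain keeps the narrow, frequently used columns;
-- OrderWide takes the large varchar columns, related 1-to-1 by OrderID.
Create Table OrderMain (OrderID Int Primary Key, CustomerName VarChar(100))

Create Table OrderWide (OrderID Int Primary Key
                            References OrderMain(OrderID),
                        Notes1 VarChar(4000),
                        Notes2 VarChar(4000))

-- When the wide data is needed, join the two tables back together.
Select  M.OrderID, M.CustomerName, W.Notes1, W.Notes2
From    OrderMain M
        Inner Join OrderWide W On M.OrderID = W.OrderID
```

Each table's row now fits comfortably inside a single 8,060-byte page, so the insert/update failures go away.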



-George

Strong and bitter words indicate a weak cause. - Fortune cookie wisdom
 
George,

Do you have any whitepapers, etc. for this? I'm working on a federally regulated server and I need something, preferably from Microsoft, which details what you've noted.
 
monksnake, the link you provided doesn't offer any position from Microsoft on whether this is supported or whether, as gmmastros states, some updates may corrupt the data.

The article states inserts and updates will fail.
 
oracleSQLdba,

I don't know if such a white paper exists, but it is easy enough to perform a quick test.

If you are unaware of the Replicate function: you pass a string and a number, and the string is replicated that number of times. To see how this works, copy/paste this into a Query Analyzer window.

Select Replicate('A', 4)

The output will be: AAAA

Now, take a look at the following code. In it, I create a table (named WideTable) that has 3 varchar(4000) columns. When you create this table, you will get a warning, but the table will be created. If you insert data into the table that takes less than 8060 bytes to store, then everything will react as expected. However, if you attempt to add a record that exceeds 8060 bytes, you will get an error and the data will NOT be inserted. Likewise, if you update the table and the new data is less than 8060 bytes, everything will run as expected. Once you exceed the 8060 bytes, the update will fail and the data will NOT be updated.

You should copy/paste the code I show below into a Query Analyzer window. Highlight each statement separately and press F5 (to run the query).

Code:
Create Table WideTable (Col1 VarChar(4000), Col2 VarChar(4000), Col3 VarChar(4000))

Insert Into WideTable Values ('Fred Flintstone', 'Barney Rubble', 'Wilma Flintstone')

Select * From WideTable

Insert Into WideTable Values (Replicate('a', 4000), Replicate('b', 4000), Replicate('c', 4000))

Select * From WideTable

Update WideTable Set Col1 = Replicate('a', 4000)

Select * From WideTable

Update WideTable Set Col2 = Replicate('b', 4000)

Select * From WideTable

Update WideTable Set Col3 = Replicate('c', 4000)

Select * From WideTable

Drop Table WideTable

I think it is pretty obvious from this test that having a table that is too wide is a time bomb waiting to go off. The problem is that the table will function properly UNTIL the data width issue causes a problem. This could be a VERY difficult bug to find later.

-George

Strong and bitter words indicate a weak cause. - Fortune cookie wisdom
 
George, thanks for the demo. However, I need proof from MSFT given the federal regulations around the type of server.

 
In that case, you may be better served by calling Microsoft. When you do, can you please pass on whatever information they provide?



-George

Strong and bitter words indicate a weak cause. - Fortune cookie wisdom
 
This link is probably a good starting point.

Microsoft Support

I know you're looking for documentation because this is a regulated database, but it seems you should be asking the company that produced the database why the DB was designed in such a way that it has potential conflicts with the underlying engine, and what the impact on data integrity is going to be.

Microsoft will probably point you to the documentation stating that the reason for this is that if the data were to exceed 8060 bytes, it cannot be stored on a single data page, as this is the size of the data page. Then they will try to convince you to upgrade to SQL 2005, which has mechanisms to handle these issues.
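If you ever get a 2005 box to test on, something like this (the table name WideTable2005 is made up for the sketch) should show the difference: variable-length data that no longer fits in the 8,060-byte row is pushed off-row to ROW_OVERFLOW_DATA pages, so the insert that fails on SQL 2000 succeeds.

```sql
-- Assumes a SQL Server 2005 (or later) instance.
-- The same wide insert that fails on SQL 2000 works here, because
-- variable-length data past the in-row limit is moved to
-- ROW_OVERFLOW_DATA allocation units.
Create Table WideTable2005 (Col1 VarChar(4000), Col2 VarChar(4000), Col3 VarChar(4000))

Insert Into WideTable2005
Values (Replicate('a', 4000), Replicate('b', 4000), Replicate('c', 4000))

-- 12,000 bytes of user data in one row, well past the old limit.
Select DataLength(Col1) + DataLength(Col2) + DataLength(Col3)
From   WideTable2005

Drop Table WideTable2005
```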

Paul
---------------------------------------
Shoot Me! Shoot Me NOW!!!
- Daffy Duck
 
MDXer, you're assuming I haven't asked the vendor. That would be incorrect.

My current task is to guarantee the data is safe. Pointing fingers is another discussion.
 
>> My current task is to guarantee the data is safe.

The test I showed above proves that your data is NOT safe. If you call Microsoft, they can probably point you to a white paper that essentially explains the same thing my test shows.

-George

Strong and bitter words indicate a weak cause. - Fortune cookie wisdom
 
George, thanks for being very thorough. MSFT has one KB article which says that updates and inserts will fail, as you've nicely demonstrated. What they don't have is any document which states the data will remain safe (think back to a federal mandate for this data), and of course, to make matters more fun, this is a merge- and transactional-replicated table.

Cheers, but I think the answer is that we have no proof the existing data is safe or not safe.
 
Are you kidding? How can you even think that data *might* be safe, when you have seen that updates can (and therefore inevitably will) fail?

I suppose it would be safe in an environment where nothing will ever need to be inserted, and no records will ever be updated. I think it would be worth splitting your table in 2 in order to ensure that it IS safe, but maybe that is just me.

You have not really given much in the way of information about how this database is used, except that the sound advice given by the company that produces and sells SQL Server, and seconded by many of your fellow SQL Server professionals, is not enough proof for you. Perhaps, if you provide a description of what your application and DB are currently doing, someone will be able to point you towards a solution that would not involve splitting your table (although I imagine such a solution would involve either changing your maximum column widths or upgrading to SQL 2005).

I hope this helps,

Alex

Ignorance of certain subjects is a great part of wisdom
 
They do, in every manual and book I've ever read by them.

Example of one: SQL Server 2005 Implementation and Maintenance. Page 151

MS Press said:
a data page in SQL Server is 8,192 bytes in size and can store up to 8,060 bytes of actual user data.



[sub]____________ signature below ______________
The worst mistake you'll ever make is to do something simply the way you know how while ignoring the way it should be done[/sub]
 
did some posts here get flagged? It seemed like we were actually getting to a point where the issues and goals were understood.

Paul
---------------------------------------
Shoot Me! Shoot Me NOW!!!
- Daffy Duck
 
hmm. I wonder why my last post got deleted? I didn't say anything bad?

- Paul [batman]
- If at first you don't succeed, find out if the loser gets anything.
 
Good or bad, I'm sure it was to keep it all on the right track and not go down [flame] that road.

Paul I'm sure it was a mistake.

Now, get back to work. All of you! ;)

[sub]____________ signature below ______________
The worst mistake you'll ever make is to do something simply the way you know how while ignoring the way it should be done[/sub]
 
The bigger question is why you have such wide tables in the first place. It looks like this isn't normalized. You know that performance will be horrible, right? To display 1 row you have to read 8K. That is going to be a lot of IO, and potentially blocking if you have a lot of concurrent users.

Split out the lengthy columns and move them into another table.

If you don't want to break any procs, rename the tables and create a view with the old table name so that you don't have to recode your procs.
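A sketch of that rename-plus-view trick, with hypothetical names throughout (MyTable, MyTable_Base, MyTable_Extra are invented for the example):

```sql
-- Hypothetical names throughout.  MyTable_Base and MyTable_Extra hold
-- the split columns; a view named after the original table keeps
-- existing procs and queries compiling without a recode.
Create Table MyTable_Base  (ID Int Primary Key, ShortCol VarChar(100))

Create Table MyTable_Extra (ID Int Primary Key References MyTable_Base(ID),
                            LongCol VarChar(4000))
Go
Create View MyTable
As
Select  B.ID, B.ShortCol, E.LongCol
From    MyTable_Base B
        Inner Join MyTable_Extra E On B.ID = E.ID
Go
```

Anything that only reads from MyTable keeps working as before; only code that inserts or updates the wide columns needs to know about the split.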

Just an idea

Denis The SQL Menace
--------------------
SQL Server Code, Tips and Tricks, Performance Tuning
Google Interview Questions





 