Tek-Tips is the largest IT community on the Internet today!


Performance and limits of SQL Server with SAS

Status
Not open for further replies.

Guest
I am looking at the setup of a new SQL Server database with SAS as the main access tool. The database structure will be quite simple, probably 20 to 30 tables, but there will be 2 big tables: one that will have low growth and be about 100 million rows, and the other growing fairly rapidly to 1,000 million rows. Does anyone have experience of SQL Server and SAS on a database of this size, and knowledge of what the issues might be?

I'd appreciate any views

Thanks

Mike
 
I've had SQL Servers with tables of a billion+ records with no problems. 100 million records will definitely be no problem. Make sure that if you plan on querying these tables you have your indexes set up correctly, and that your disk sub-system is VERY fast. Also, if you plan on querying these tables on a regular basis, make sure that whatever is writing into the tables isn't going to be blocked while it's writing.

I.e., if a web site is what's doing the writing, have it write somewhere else first, so that if the table is blocked by a select statement, the web site doesn't lock up.
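A minimal T-SQL sketch of that write-somewhere-else-first idea; the table and column names (WebStaging, BigTable, EventTime, Payload) are illustrative assumptions, not from the thread:

```sql
-- The web site inserts into a small, rarely-read staging table,
-- so it is never blocked by long SELECTs against the big table.
-- (Names here are hypothetical, for illustration only.)
INSERT INTO WebStaging (EventTime, Payload)
VALUES (GETDATE(), 'example payload');

-- A scheduled job later moves rows into the big table in one batch.
BEGIN TRANSACTION;
INSERT INTO BigTable (EventTime, Payload)
SELECT EventTime, Payload FROM WebStaging;
DELETE FROM WebStaging;
COMMIT TRANSACTION;
```

Running the move job off-peak keeps the big table's read workload and the web site's writes from contending for the same locks.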

I'm not familiar with SAS, but from the SQL side, that should cover you.

Denny

--Anything is possible. All it takes is a little research. (Me)
 
Thanks for this, it gives me confidence! Have you any guidelines on what a very fast disk sub-system would be? I want to check that the specified machine will cope!

Also, if anyone out there has SAS experience on SQL tables this big, I'd appreciate hearing from them...

Thanks again

mike
 
From just a mechanical perspective, it is helpful to have indexes and transaction logs on separate drives with separate drive controllers. This can speed up processing because the server can be reading a record while it is writing to the transaction log, which it can't do if they are all on one drive.

With a database that large, I would invest in a book on performance tuning. You need to learn to write the most efficient code possible.

 
My ideal drive config for systems with large tables is this:
(Each array on its own controller, or at least on its own channel.)
RAID 5 array for data
RAID 5 array for indexes
RAID 10 array for the transaction log
If possible, another RAID 5 array for tempdb (this is nice, but not necessary).

For each array you want as many physical spindles as possible for faster reads and writes. The more spindles, the faster the system should be able to find the data you are looking for. This should all be SCSI-3 or Fibre Channel with hardware RAID.
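As a rough illustration, that array layout can be mapped onto SQL Server files and filegroups along these lines; the database name, drive letters, and file paths are assumptions for the sketch:

```sql
-- Each file sits on a different array (drive letters are hypothetical).
CREATE DATABASE BigDb
ON PRIMARY
    (NAME = BigDb_Data,  FILENAME = 'E:\Data\BigDb_Data.mdf')    -- RAID 5 data array
, FILEGROUP Indexes
    (NAME = BigDb_Index, FILENAME = 'F:\Index\BigDb_Index.ndf')  -- RAID 5 index array
LOG ON
    (NAME = BigDb_Log,   FILENAME = 'G:\Log\BigDb_Log.ldf');     -- RAID 10 log array

-- Nonclustered indexes can then be placed on the index filegroup,
-- keeping index I/O off the data and log arrays:
CREATE NONCLUSTERED INDEX IX_BigTable_EventTime
    ON BigTable (EventTime)
    ON Indexes;
```

With this split, a query scanning data, a query seeking through an index, and the log writer can all hit different physical spindles at the same time.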

Denny

--Anything is possible. All it takes is a little research. (Me)
 