
DB Setup Question


umeman

Technical User
Oct 2, 2001
Hi,

I am setting up an MS SQL 2000 database that will reside on a SAN. The eventual size of the database will be 200 GB, and I need some advice on creating it.


Which of the following is advisable:

a) Create an initial data file of maybe 50 GB and let it grow by 10%
b) Create a 200 GB data file with no growth
c) Create a 4 GB database that will house the initial few months of data and let it grow by 20%
d) Should I create multiple data files?

What considerations should I take into account for the data files and log files, knowing that this is for a data warehouse?


What should the size and suggested parameters be for the log file and tempdb?

I also estimated the largest table to be about 20 GB in size.

Thanks
Umesh
 
First question: where is the data coming from? Will it be entered on a daily/monthly/whatever basis, or will it be loaded all at once?

My opinions:
If daily/weekly/monthly/whatever, guesstimate how much data will be put in on that recurring basis. Build your database to be at least double the size of one load.

If it's all going in at once, build it to a bit more than the max size you expect it to be.

So, basically it looks like C might be the best solution for you, except I never set growth to a percentage. Do you expect 4 GB every month? Create the database at 8 GB and set it to autogrow by 4 GB.
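For example, something like this (the database name, file paths, and sizes are just placeholders for whatever fits your environment):

CREATE DATABASE SalesDW
ON PRIMARY
  ( NAME = SalesDW_Data,
    FILENAME = 'D:\MSSQL\Data\SalesDW_Data.mdf',
    SIZE = 8GB,           -- roughly double one month's load
    FILEGROWTH = 4GB )    -- grow by a fixed amount, not a percentage
LOG ON
  ( NAME = SalesDW_Log,
    FILENAME = 'E:\MSSQL\Logs\SalesDW_Log.ldf',
    SIZE = 2GB,
    FILEGROWTH = 512MB )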

I suggest putting the Log files and Tempdb on separate disks. Log files and Tempdb files get lots of writes. I've seen suggestions that they each be put on their own RAID 1 set of disks.
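If you do get them onto their own drives: in SQL 2000, tempdb can be pointed at a new location with ALTER DATABASE (the change takes effect the next time the SQL Server service restarts), while user database files are moved with sp_detach_db/sp_attach_db. The drive letters and paths below are only examples:

-- point tempdb at its own drive; takes effect after the next service restart
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\MSSQL\Data\tempdb.mdf')

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\MSSQL\Data\templog.ldf')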


-SQLBill

Posting advice: FAQ481-4875
 
Here is an FAQ on setting up the disks: faq962-5747.

I agree with SQLBill that data files should never be set up to grow by a percentage.

To answer the question about multiple filegroups we'd need to know much more about the database, probably more than can be posted in the forums. With a database that is expected to grow that large, you'll probably want to put the non-clustered indexes into their own filegroup.
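A rough sketch of what that could look like (the filegroup, file, table, and index names here are made up for illustration):

-- add a filegroup for the non-clustered indexes and give it its own file
ALTER DATABASE SalesDW ADD FILEGROUP IndexFG

ALTER DATABASE SalesDW
ADD FILE
  ( NAME = SalesDW_Idx1,
    FILENAME = 'G:\MSSQL\Data\SalesDW_Idx1.ndf',
    SIZE = 10GB,
    FILEGROWTH = 2GB )
TO FILEGROUP IndexFG

-- then place non-clustered indexes on that filegroup when you build them
CREATE NONCLUSTERED INDEX IX_FactSales_OrderDate
ON dbo.FactSales (OrderDate)
ON IndexFG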

You'll also need to know how the LUNs on the SAN will be set up. What RAID level will they be on? Will the LUNs be on the same physical disks? Will anything else be on these disks? If so, what kind of load is already on them? How much capacity is available on the fiber in and out of the SAN?

Knowing how the LUNs are set up will start you in the right direction on how your files should be laid out. In addition, you need to know how many users will be using the system and how heavily they will be querying the large table(s). Will they be using indexes or table scans?

Is this going to be a DSS database or an OLTP database?

Once these are all answered, we can probably give a more precise answer on how the drives should be set up.

Denny
MCSA (2003) / MCDBA (SQL 2000)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
Hi,
Thank you for your responses, but I am not looking at configuring the SAN; we need to assume that the SAN has been configured in an optimized manner.

Data, log and tempdb would be on the same SAN. This is a
client environment and I have no way to change that.

What I am looking for is how others have created databases that were 50 to 100 GB in size. Setting the data file to extend by size rather than by percentage is a good tip.


Are there any issues with creating an initial data file that is 30 GB or 40 GB?
Is it OK to have just one data file and allow it to grow to 200 GB?

What should the log file size be, and how should I configure its growth or truncation?

What should I configure the tempdb size to be?

Thanks

 
There's no problem setting the initial file size to 30-40 GB. That will give you a nice contiguous file on the disks.

The log file should be set to 2-3 times the size needed to hold the logs for a single backup window.

For example, if your system will generate 5 GB of logs per hour and you are backing up the logs every hour, then 10-15 GB should work nicely.

Growth should be set up so that the log doesn't need to grow often, but each growth increment shouldn't be so large that queries time out waiting for the log to finish growing on the disk.
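Using that example, it would be something along these lines (the logical file name and numbers are placeholders, and I've kept SIZE and FILEGROWTH in separate statements):

-- ~5 GB of log per hour with hourly log backups: size the log at around 15 GB
ALTER DATABASE SalesDW
MODIFY FILE (NAME = SalesDW_Log, SIZE = 15GB)

-- grow in fixed chunks, large enough to be rare but small enough not to stall queries
ALTER DATABASE SalesDW
MODIFY FILE (NAME = SalesDW_Log, FILEGROWTH = 1GB)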

Tempdb's size will be based on how much data you will be pushing into it. Will temp tables be used a lot? If so, then it should be larger. If tempdb isn't going to be used much, leave it small.
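If you want to pre-size it rather than rely on autogrow, check the current files with sp_helpfile and set the sizes up front (the values below are only examples):

USE tempdb
EXEC sp_helpfile   -- lists the current tempdb files and their sizes

-- tempdev and templog are the default logical file names
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4GB)

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1GB)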

I wouldn't assume that the LUNs for this server are set up correctly; I assumed that everything would be on the same SAN. You'll want to have them tell you how the disks are set up on the back end. If the spindles they are giving you are maxed out, you will see a huge performance problem, but you'll never be able to narrow down where it came from.

Denny
MCSA (2003) / MCDBA (SQL 2000)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 