
Web server requirements


GoldenEye4ever (Programmer) - May 26, 2007
A friend and I are developing a software solution in the form of a web application (JSP), using PostgreSQL as the database.

I don't have a lot of experience with server hardware and don't have a great understanding of what hardware is required for specific load expectations.


We will be demoing our software in October and would ideally like to have it in production within the next 6-8 months.


I'm a little torn on what hardware configuration to use.
We're planning to install 64-bit Debian Linux on the final server and run Tomcat 6 along with PostgreSQL.

We expect to be able to support up to 1000 concurrent users (primarily data reads) at any given time, but realistic use will be under 200 concurrent users.

The web application has been written using JSP and is running on Tomcat 6.

We expect that the DB will contain a few (2-3) tables containing somewhere near 10 million records, plus history tables for each of those, which will likely grow very large over time.
(Sorry, I don't have sizes in GB yet.)


What would be a good hardware setup for this system?
Would it make enough sense to justify the cost of putting the DB on its own machine?
Also, I understand that SCSI is considered far superior to SATA, but is it really worth the extra money, or would it be a better option to RAID a few SATA drives?


------------------------------------------------

I'm currently developing everything on my personal PC:
- 2.6 GHz quad-core
- 4 GB DDR2 RAM
- 500 GB HDD + 1TB HDD
 
Here is my opinion:

Whether or not to have the application and database on separate machines depends on several factors. Here are just a couple: will the application heavily use the same resources as the database (for example, will it have high CPU and disk I/O)? If the application tends to be network-I/O heavy while the database is heavy on disk and CPU, then it might make sense to have them on the same physical box to maximize your resource utilization. You can always move the database later. However, if you know right off the bat that the app is going to have high disk I/O, you might want to think about separating the two.

Another factor is whether or not you plan to scale out the application to multiple servers connecting to a common database. But again, you can always move the database later (unless you've hard-coded your app to connect to a local database, in which case it might be a bit more difficult).
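
To illustrate that last point: one way to avoid hard-coding the connection is to look the database up through a JNDI DataSource under Tomcat, so the host and credentials live in the container configuration rather than in the code. The sketch below is only an illustration; the resource name "jdbc/appdb" is a placeholder, not something from the actual app.

    // Sketch: get connections from a container-managed DataSource so the
    // database host is defined in Tomcat's configuration, not in the code.
    // The JNDI name "jdbc/appdb" is a placeholder.
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import java.sql.Connection;

    public class AppDataSource {
        public static Connection getConnection() throws Exception {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/appdb");
            return ds.getConnection();
        }
    }

Moving the database to another box then only means changing the resource definition in Tomcat, and several app servers can all point at the same database definition.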

As for what server and what kind of drives to get: for a production server that will see any sort of load, or where performance matters at all, I recommend mirrored SATA drives for your system partitions (/boot, /, swap, etc.). Data files should be on SAS drives in a RAID 5 or RAID 50 array (RAID 50 if you can afford the extra drives--you'll need a minimum of 6).

I would also recommend keeping the application files on a separate partition from your database files.
 
Thanks a lot for your very fast response.

You gave me some good points to think about.
The way I wrote my DB accessors is that I created a common connection class that all code in my application goes through when connecting to the DB, so migrating the DB later would be really easy.
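
Roughly, the idea is something like this (just a sketch to show the shape, not the actual class; the file name db.properties and the property keys are placeholders):

    // Sketch of a shared connection helper: all DB access goes through this
    // one class, and the JDBC URL comes from a properties file, so pointing
    // the app at a new database host means editing a single file.
    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.Properties;

    public class Db {
        private static final Properties PROPS = new Properties();

        static {
            try {
                InputStream in = Db.class.getResourceAsStream("/db.properties");
                PROPS.load(in);
                in.close();
                Class.forName("org.postgresql.Driver"); // PostgreSQL JDBC driver
            } catch (Exception e) {
                throw new RuntimeException("Could not load DB configuration", e);
            }
        }

        public static Connection open() throws SQLException {
            // e.g. url=jdbc:postgresql://dbhost:5432/appdb
            return DriverManager.getConnection(
                    PROPS.getProperty("url"),
                    PROPS.getProperty("user"),
                    PROPS.getProperty("password"));
        }
    }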

I have very little experience with RAID arrays. Is it difficult to set up, or can anybody with a basic understanding do it?
Also, what is involved in replacing a failed drive in a RAID 50 configuration? Is it as easy as just plugging in the new drive and letting the controller do all the data shifting on its own?

Finally, as I understand it, hardware RAID setups are far superior to their software counterparts; what RAID controller would you recommend if I were planning to build an Intel Xeon system?


I really appreciate your input.
 
If you're buying a Dell server, their PERC RAID controllers are reliable and easy to configure. Replacing a failed drive is just a matter of pulling the bad one and sticking in a new one; the controller should automatically rebuild the array. HP ProLiant servers are easy too and are probably just as reliable.

I don't have much experience with other brands like Sun or IBM so I can't say how easy their stuff is. But cheaper servers like Super Micro, Tyan, etc., have given me more grief than I can stand so I don't use them anymore. They're kinda fun in that you have to figure out a lot of things for yourself (like installing drivers, configuring the RAID controller, etc.) but when you're in production and have a problem, they'll give you heartburn.

The question of hardware vs software RAID will generate some disagreement from people. I use both and both work well. But I do prefer hardware RAID.
 
Thanks a lot fugtruck.

I was thinking of going with Dell, so now I'm feeling even better about it.
I'm glad to hear that the RAID array is basically managed for you, as I don't want a lot of headaches once we go into production.
 