Your question is not simple--it has innumerable answers. Most people start at the other end of the question: what is the best hardware configuration for $TASK?
Possible suggestions: web server, mail server, domain controller, door stop, database server, file server, boat anchor, game server, etc.
Failing any of the above, you could always mount it on a chain and wear it around your neck, 80s style. Flava Flav will be green with envy.
Alright… here comes the humor… Jkupski, wearing the server around your neck is very "elaborate," even for the 80s. I'm sure someone will do it… but that is not the question I was asking. I have no need for the long list of server types… spare me. I need more technical information.
Reading the above, I assume a blade server acts primarily as a space saver, since all of the server types mentioned in jkupski's post can run on standalone machines. In my understanding, clustered servers (a database cluster, in my example) would be the most appropriate use, with a consolidated control and transfer backplane to simplify scalability. Mine has a Fibre Channel switch module (2 ports, 400 MB/s throughput each), so why not integrate it with a SAN? There are some other hardware fine points, but it isn't necessary to list them all.
I cannot find any more information. What else should I know about blade servers?
You use blades where space is at a premium and you don't need much local storage or expandability in the servers. We don't use them where I work, but I can see ISPs and web-hosting companies being prime users.
All humor aside, Nick's response hits the nail on the head: you use blades when high rack density is a requirement. Generally speaking, there is no real efficiency gain--everything you mention is already possible with various management software, KVM switches, good network design, etc. You're just paying a price premium to have all of that in one (relatively) small box.
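To put a rough number on the density argument, here is a quick back-of-envelope comparison in Python. The chassis height and blade count are purely illustrative assumptions (not figures for your specific hardware), but they show why blades only pay off when rack space is the constraint.

```python
# Rough rack-density comparison.
# Chassis dimensions below are hypothetical, illustrative numbers only.

RACK_UNITS = 42            # standard full-height rack

# Standalone: one 1U server per rack unit
standalone_per_rack = RACK_UNITS // 1

# Blades: assume a 7U chassis holding 14 blades (illustrative)
chassis_height_u = 7
blades_per_chassis = 14
blades_per_rack = (RACK_UNITS // chassis_height_u) * blades_per_chassis

print(f"1U servers per {RACK_UNITS}U rack: {standalone_per_rack}")
print(f"Blades per {RACK_UNITS}U rack:     {blades_per_rack}")
```

Roughly double the server count per rack under those assumptions, which is exactly the premium you're paying for; if you don't need that density, you gain nothing.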
You also have a new single point of failure: the chassis. Now, instead of one server going down because of a hardware fault, you can lose ten all at once. Personally, I think you'd have to be nuts to build a cluster out of blades, unless your cluster was of sufficient size that it spanned multiple chassis and losing multiple boxes at one time was an acceptable failure mode.
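To make the failure-domain point concrete, here is a minimal sketch. The annual failure probabilities are hypothetical placeholders, not vendor figures; the point is only that a shared chassis turns one low-probability event into the loss of every blade it hosts.

```python
# Back-of-envelope comparison of expected servers lost per year.
# All failure probabilities are hypothetical placeholders.

N_SERVERS = 10

p_server_fail = 0.05   # assumed annual hardware-fault probability per server/blade
p_chassis_fail = 0.02  # assumed annual failure probability of the shared chassis

# Standalone rack servers: faults are independent, one box at a time.
standalone_expected_loss = N_SERVERS * p_server_fail

# Blades in one chassis: individual blade faults still happen,
# but a chassis fault takes down all N blades at once.
blade_expected_loss = N_SERVERS * p_server_fail + p_chassis_fail * N_SERVERS

print(f"Standalone, expected servers lost/year:     {standalone_expected_loss:.2f}")
print(f"Single-chassis blades, expected lost/year:  {blade_expected_loss:.2f}")
print(f"Chance of losing all {N_SERVERS} at once (chassis fault): {p_chassis_fail:.0%}")
```

The expected loss only rises modestly, but the failure mode changes: with standalone boxes, losing all ten at once is vanishingly unlikely, while with a single chassis it happens whenever the chassis itself fails.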
Conclusion: If you're looking for an application for your blade servers, instead of looking for blade servers for your application, you have purchased the wrong hardware.
I inherited the blade server, so I am very limited in the "blade server for your application" direction; I just need to find a use for it. Since I do not have a second chassis, I am dismissing clustering. I agree that configuring blades in the same chassis as cluster nodes would not provide the desired redundancy. All I am left with is a "single-server dual-storage system" configuration. Not exactly what I anticipated, but it brings me one step closer to understanding the subject.
If I could make the system redundant, having it all in one box would be nice. Isn't that a time saver?