
New SAN help 2


pinkpanther56 (Technical User), Jun 15, 2005
Hi all

We are a high school with a limited budget but are looking into the possibility of a SAN in the near future. We’ll more than likely be getting a company in to help with this but I would appreciate a few pointers so I can read up a bit on the technology (so we don’t get fleeced basically :))

We currently have approx 6 Windows 2003 servers that might become 2008 servers next year. They all have internal storage and we are running low on space. Our current storage capacity is approx 1 TB and expected to grow, so we were looking at a NAS, but a few people have told us to start looking into a SAN due to an expected increase in our storage requirements.

So basically, what would you recommend? If I said we might have approx £10,000–12,000 to spend, are we dreaming? Should we be looking to spend more, or could we spend less on a system that could be expanded (that would be ideal)?

Other questions

1. FC or iSCSI and why?
2. What sort of unit could we get (recommendations)
3. Recommended reading?

Any help is very much appreciated.

Cheers.
 
I'd go iSCSI SAN. Many systems can also serve up files via NAS. The difference between NAS and SAN is the way you connect: NAS provides access to files via file-level protocols like CIFS and NFS, while SAN provides block-level access to LUNs via block-level protocols like FCP or iSCSI.


Take a look at the Storevault.

 
Hi thanks for the reply

I'm under the impression that a NAS is generally attached to a particular server, while a SAN can be shared by many servers (or many servers connect to it).

Is this the case?

I'm thinking that a SAN is the more desirable solution. Am I correct?

Thanks.
 
If your total storage is 1 TB for all six servers combined, why not just buy a disk shelf for each, or at least the ones that need a lot of space?
Just as an example, an HP MSA70 and a suitable Smart Array controller will give you 25 SAS disk slots and quite decent performance, probably on par with entry-level NAS/SAN-attached arrays. Each drive can be up to 146 GB, so that's over 3 TB raw.
Compare the cost of an entry-level SAN solution with getting the above setup for a few servers. Not to mention this is bog-standard direct attached storage, which is easy to just set up and go.
You'll have to consider your overall objectives when evaluating your storage setup, just buying SAN because you think you need it isn't the best approach.
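As a sanity check on the capacity figure above, the arithmetic is simple. Here's a quick Python sketch; note the RAID layout below (one parity drive plus one hot spare) is an assumption for illustration, not something from the post:

```python
# Rough capacity sums for a 25-slot shelf such as the MSA70 mentioned above.
SLOTS = 25
DRIVE_GB = 146

raw_gb = SLOTS * DRIVE_GB  # total raw capacity across all slots

# Assumed layout for illustration: one RAID 5 set (one drive's worth of
# parity) plus one hot spare.
usable_gb = (SLOTS - 2) * DRIVE_GB

print(f"raw: {raw_gb} GB (~{raw_gb / 1000:.2f} TB), usable: {usable_gb} GB")
```

So "over 3 TB raw" checks out; what's actually usable depends entirely on how you carve up the RAID sets.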
 
Hmm they look interesting.

So I can get one of them and it connects via iSCSI to one server? That should be OK for our PowerEdge servers, right?

Can only one server connect to each unit?

Thanks for the ideas.

 
They connect via SAS (Serial Attached SCSI) directly to one host. Quite right, they can only be connected to one server. They're made for HP ProLiants, but they should work with PowerEdge servers. You might however want to look into what Dell offers in disk shelves; I'm not too familiar with those.
 
Direct Attached Storage, or DAS, can connect via a variety of protocols (usually SCSI, or SCSI encapsulated in something else), but is typically a one-to-one relationship between host and storage. Both SAN and NAS can be a many-to-one relationship between hosts and storage; the major differentiating factor is how the storage is accessed. For block-level access it's SAN, and for file-level access it's NAS. Many storage devices can do both, so I understand the confusion. I hope this clears it up.
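To make the block-level vs file-level distinction concrete, here's a small Python sketch. It's purely illustrative: an ordinary temp file stands in for a SAN LUN, since real block access would go through an iSCSI initiator or HBA rather than the local filesystem.

```python
import os
import tempfile

# File-level access (what NAS gives you): the client names a file,
# and the server's filesystem resolves it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello block world")
    path = f.name

with open(path, "rb") as f:
    file_level = f.read()

# Block-level access (what a SAN LUN gives you): the client addresses raw
# byte ranges by offset and brings its own filesystem. Here an ordinary
# file stands in for the LUN.
fd = os.open(path, os.O_RDONLY)
os.lseek(fd, 6, os.SEEK_SET)   # seek to an arbitrary offset
block_level = os.read(fd, 5)   # read 5 raw bytes from that offset
os.close(fd)
os.unlink(path)

print(file_level)   # the whole file, via the filesystem
print(block_level)  # b'block', via raw offsets
```

Same bytes on disk either way; the difference is whether the storage device or the host owns the filesystem.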

 
Yes, I'm getting an idea of how it works now.

So my problem with a disk shelf is that I would need one for each server (well, at least three of them), or one big server sharing all of the information on the disk shelf.

I was under the impression that block-level access over iSCSI was better than file-level, especially as this doesn't require the device to have its own OS. Correct?

In the end it still seems like a low-end SAN might be a better option for future-proofing, or a decent NAS that all of the servers could access.

Any additional thoughts?

Thanks guys.


 
Well, a SAN or NAS device WILL have its own OS regardless of the protocol used to access the data, but it is mostly a stripped-down version of a UNIX/Linux OS or a proprietary OS. And even if it is Linux, it will be hidden behind a user interface that still enables you to define/alter LUNs, shares, hosts, ports, ... so you can administer the storage space and other features in the SAN/NAS box without access to the box's OS itself (makes it next to impossible to break or BlueScreen it ;-)).


HTH,

p5wizard
 
Oh OK, so are we saying a SAN will have a very minimal OS, a bit like a RAID controller, and a NAS might be a step up from that to a more intelligent OS? Or do they fulfil very similar tasks?


Any thoughts on my previous question, what route would you go in my situation?

Cheers.
 
It's a minimal OS that implements the feature set. For NetApp, it's Data ONTAP.
 
PinkPanther56,

Some things you really need to think about as you move forward in this new "quest" are how you will be using the data and what performance characteristics you need. Is this data primarily "unstructured" (office docs, MP3s, pictures, etc.) or primarily "structured" (databases, application data, etc.)? Unstructured data tends to lean much more towards a NAS device (an IP-based device). One of the nice things about a NAS device is the possibility of running your CIFS or NFS shares directly from the device, bypassing the need for another "storage server." Structured data by nature lends itself more to FC-based systems due to its usual need for high performance, low latency and higher packet-loss protection.

There are infrastructure costs associated with either solution (switching, HBA/TOE cards, cabling, etc.), but once you understand your needs you can justify either system.

Something else to consider with IP-based systems: if you put them on your existing IP infrastructure (core/edge switches), it will impact the performance of the entire network. Your users could be affected by the additional traffic and take longer to get to the internet or to your internal servers. At the bare minimum, put the storage system in its own VLAN to isolate the commands and traffic, and then attach a second NIC dedicated to that storage VLAN.

Good luck and let me know if you have any other questions.

------------------------------------------------
"640K ought to be enough for anybody."
- Bill Gates, 1981
 
Hi SANEngInCO, thanks for the input.

Can you explain a bit more about the VLAN setup? Our network is all in one subnet at the moment and all of the servers are in one room, so I could probably have any NAS devices on the same gigabit switches as the servers.

Most of the data will be unstructured, as all of our SQL DBs are on a separate server with plenty of storage.

Thanks.
 
Depending on what type and model your switches are, you have the ability to "isolate" your network traffic within the switches by configuring VLANs. By doing this, any and all traffic that runs within that VLAN will be kept isolated from any and all traffic in any other VLAN. This prevents any bad broadcasts in one VLAN from destroying another VLAN.

So you could have all of your regular traffic (hosts, servers, etc.) in one VLAN, then have another VLAN for all your IP-storage traffic. It would be recommended that you use a different subnet for that new VLAN as well, and make it non-routable. The NAS unit should have a management port that you can attach to your regular-traffic VLAN and subnet.

Let me know if this doesn't make sense. I'm not a network admin, so you'd want to talk to your network guy (if you have one).
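To illustrate the separate-subnet idea above, here's a small sketch using Python's stdlib `ipaddress` module. The subnets and addresses are hypothetical examples for illustration, not a recommendation of specific ranges:

```python
import ipaddress

# Hypothetical addressing for the VLAN split described above.
user_vlan    = ipaddress.ip_network("192.168.1.0/24")    # servers, clients
storage_vlan = ipaddress.ip_network("192.168.100.0/24")  # iSCSI/NAS traffic only

# The two VLANs should not share address space...
assert not user_vlan.overlaps(storage_vlan)

# ...and the storage subnet stays in private space (never routed outside).
assert storage_vlan.is_private

# A server doing IP storage would have one NIC in each subnet:
lan_nic     = ipaddress.ip_address("192.168.1.10")
storage_nic = ipaddress.ip_address("192.168.100.10")
print(lan_nic in user_vlan, storage_nic in storage_vlan)  # True True
```

The non-routable part is enforced on the switches/router, of course; the point here is just that keeping the two subnets disjoint lets storage traffic stay off the user network entirely.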

------------------------------------------------------------------------------------------------------------------
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
Albert Einstein
 
For £10-12k you can get a Dell MD3000 iSCSI SAN and a couple of dedicated Gb switches for host connectivity. You might even have enough left over to get a small tape library for centralised backups (one of the advantages of centralising storage).
If your upgrade to Windows 2008 involves a server refresh you'd also then have the storage infrastructure in place for virtualisation, although with only 6 servers that might be a bit overkill.
 
You could get a StoreVault for half that. The point is: shop around.

 
We're actually deploying these to our remote offices. They're really nice boxes, easy to set up and don't require any real "hands-on" after it's up and running. After a drive fails, you just let it go. After a second fails HDS sends you a brand new box, you plug it into the old one, it downloads and updates to the new box and you send the old one back. Pretty slick. Decent price too.

 
That looks like a DAS box with a single controller: fine for certain uses, but not in the same league as a SAN.

The questions you need to ask are: do you want/need to centralise your data/backups, and how much data/server growth are you expecting?

If 1 TB is going to be enough for the foreseeable future and 6 servers satisfy your needs, then it's pretty hard to justify a SAN (or even a NAS); just use internal or external DAS.
 