NAS system with JBOD


fugrosesl (Technical User)
Jun 24, 2008
My company does a lot of aerial data processing and accumulates a lot of hard drives during projects, typically several 500 GB drives per project, plus backups.
In the field we use a 4-bay Wiebetech RX system with individual eSATA connections, which lets each hard drive be added as a separate drive to a workstation. There is no RAID on this enclosure, and it allows drives to be hot-swapped as they become full and/or processing is completed.
In the office we need multiple users to access each of the hard drives across a network, and I am looking for a NAS solution that will let us do a similar thing without having to attach the enclosure to a specific workstation. So the NAS system should take 4 (or more) hard drives (preferably without a tray), there will be no RAID, and the drives should be hot-swappable so that as work is completed and new data arrives at the office, drives can be taken out and replaced with different ones. I would prefer the unit to be desktop mounted, i.e. not rack mountable, so it can sit in the production work area rather than the server room.
Not sure if I am asking too much, but I have yet to find anything that satisfies this. Any advice on whether this is possible, and pointers to a suitable solution, would be much appreciated.
TIA
Dave
 
Buffalo Technology would be my first place to look for something like that. They have several stand-alone NAS units that take drives into their "slots" and are hot-swappable. Iomega, I believe, has one or two as well. I know there are others out there, but these two come to mind because I looked at them for personal use as well. They both can do RAID as well as JBOD (which sounds like what you want to do).

------------------------------------------------------------------------------------------------------------------
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
Albert Einstein
 
Check out Norco's solutions here.

As a footnote, I would avoid spanning disks (JBOD), since parts of a single file can end up spanned across two disks. Should one of the drives fail, recovery could be difficult even if backups are done.

RAID 5 would be more robust and is hot-swappable. Depending on the controller, you should be able to add drives to the array as needed, and you give up only a single drive's worth of capacity to parity.
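To illustrate why one drive's worth of parity is enough to survive a single failure, here is a minimal Python sketch (not tied to any particular controller or NAS product, and using made-up block contents) of the XOR parity idea behind RAID 5: any one missing block can be rebuilt from the surviving blocks plus the parity block.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three "data drives" holding one stripe each (toy 8-byte blocks).
data = [b"aerial01", b"lidar002", b"photo003"]

# Parity block is the XOR of all data blocks (RAID 5 rotates parity
# across the drives, but the arithmetic is the same).
parity = xor_blocks(data)

# Simulate losing drive 1: rebuild its block from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("recovered:", rebuilt)
```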

Tony

Users helping Users...
 
If you're processing images, the files tend to be large, written rarely, and read frequently. So, given a relatively large I/O size (64K or so) and a high read/write ratio (above 4:1), RAID 5 could be a good solution in this case. The downside, of course, is the rebuild in the event a drive fails: performance would be degraded for the time it takes to reconstruct the failed drive. Still, that's a lot better than losing the data altogether.

RAID 5 is generally a good solution when you have a high read/write ratio and need a lot of space. RAID 6 adds a second parity block, which protects you against double drive failures. If you're going with SATA disks, which have a higher failure rate, I'd consider RAID 6. If you're going RAID 5, I'd look into SAS drives. In any event, when you select a RAID type it's important to know the characteristics of your application workload. The basic rule of thumb: if the write penalty is greater than the read/write ratio, that RAID type is a poor choice.
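As a rough illustration of that rule of thumb, here is a short Python sketch. The write penalties used (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6 back-end I/Os per host write) are the commonly cited figures and vary by implementation; the workload numbers are made up for the example.

```python
# Commonly cited back-end I/Os per host write (assumption; varies by controller).
WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(host_iops, read_fraction, penalty):
    """Back-end disk IOPS generated by a given host workload."""
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * penalty

# Example workload: mostly-read image processing, 4:1 read/write ratio.
host_iops = 1000
read_fraction = 0.8
ratio = read_fraction / (1 - read_fraction)  # 4.0

for level, penalty in WRITE_PENALTY.items():
    total = backend_iops(host_iops, read_fraction, penalty)
    verdict = "reasonable" if penalty <= ratio else "poor fit"
    print(f"{level}: {total:.0f} back-end IOPS ({verdict} at a {ratio:.0f}:1 read/write ratio)")
```

At a 4:1 read/write ratio, RAID 5's penalty of 4 is borderline acceptable, while RAID 6's penalty of 6 exceeds the ratio, which is the rule-of-thumb signal to either accept the write overhead for the extra protection or change the workload/disk choice.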

 
One remark I have here: desktop mounted.
When you are talking about storage systems, what immediately comes to mind is power consumption, space requirements, heat/cooling, dust, and noise.

1. With RAID arrays, disks are grouped into disk shelves, and the more capacity you need, the more shelves you will have.

2. Disks produce heat, as do power supplies. This means you will need to cool the devices (air conditioning).

3. When talking about servers/storage, we also think about no-break and UPS power. Most production environments do not have no-break or UPS power everywhere.

4. Noise: one reason people place their servers/storage in a datacenter is the constant noise they generate. People who work in the proximity of these servers all day will eventually suffer hearing damage.

5. Directly related to the noise is dust. The reason a storage box makes so much noise is the cooling fans in the controllers. They act like big vacuum cleaners, which is why people place their server/storage farm in a clean, near dust-free datacenter. Otherwise the fans would break down from dust and dirt, resulting in insufficient cooling and, eventually, hardware failures.

So IMHO I do not think it's such a great idea to put storage controllers in your production environment (non-rack or non-datacenter).
 
One of these:

[image: Directron product photo]


It will add five drives to your desktop using three 5.25" device bays. They are not silent, but they don't make much of a ruckus either. Details are here

There are many, many others available.

Tony

Users helping Users...
 