
Zpool configuration, post Veritas


forrie (MIS), Mar 6, 2009
I need some advice on handling our new zpool (Solaris 10).

We first received the x4540 thumper, which came preconfigured with raidz and worked fine. Then someone in our department decided we needed Veritas, which didn't work out (I hate that product anyway). Now we're backing Veritas out and flattening the volumes to free up disk space for a new zpool, into which the data will be re-migrated.

I've read a couple of the "best practices" articles out there, and mirroring seems to be the most recommended layout. The server has 30+ terabytes, but we're eating through it quickly with the media we store, mostly video files and their associated data. These volumes are in turn exported internally via NFS for different purposes.

We've managed to free up the following disks for the initial zpool, to which I'll add others later (presuming mirroring goes ahead); a sketch of what I have in mind follows the list:

c1t2d0s2
c1t3d0s2
c5t2d0s2
c5t4d0s2
c6t3d0s2
c6t4d0s2
c6t6d0s2
c6t7d0s2
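
If mirroring wins out, I'm picturing something like this. It's a rough sketch: "tank" is just a placeholder pool name, and the pairings are arbitrary, though I'd try to put the two halves of each mirror on different controllers:

    # Four 2-way mirrors from the freed disks; ZFS stripes writes across them.
    # (From what I've read it's usually better to hand ZFS whole disks, e.g.
    # c1t2d0 rather than the s2 slice, so it can manage the write cache itself.)
    zpool create tank \
        mirror c1t2d0s2 c5t2d0s2 \
        mirror c1t3d0s2 c5t4d0s2 \
        mirror c6t3d0s2 c6t4d0s2 \
        mirror c6t6d0s2 c6t7d0s2

That would leave us with roughly half the raw capacity as usable space.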

I've also read about building storage out of dynamically striped RAID-Z groups of (Y / X) devices, which sounds a little complicated to me.
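
As far as I can tell, though, it just means putting several raidz vdevs in one pool, with ZFS striping writes across the top-level vdevs automatically. With the same eight disks it might look like this (a hypothetical grouping, not a recommendation):

    # Two 4-disk raidz groups; the pool stripes across the two vdevs.
    zpool create tank \
        raidz c1t2d0s2 c1t3d0s2 c5t2d0s2 c5t4d0s2 \
        raidz c6t3d0s2 c6t4d0s2 c6t6d0s2 c6t7d0s2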

So before I commit to a config that I can't easily back out of (grin), I thought someone out there might have advice or tips on the zpool layout I should build.

Another side issue: we've been using rsync to replicate our nightly data (it actually runs hourly or so). We're going to retire that and start sending ZFS snapshots to our remote thumper (a similar system) instead, but I've read there are problems maintaining metadata such as NFS file handles, which must stay consistent on the remote system (it's a failover). Does anyone know the current status of that issue?
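
The rough plan for the snapshot replication looks like this (the dataset "tank/media" and host "thumper2" are placeholders):

    # Initial full copy to the remote thumper:
    zfs snapshot tank/media@2009-03-06
    zfs send tank/media@2009-03-06 | ssh thumper2 zfs receive backup/media

    # Subsequent runs only send the delta between the two snapshots:
    zfs snapshot tank/media@2009-03-07
    zfs send -i tank/media@2009-03-06 tank/media@2009-03-07 | \
        ssh thumper2 zfs receive backup/media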


Thanks in advance!
 
I haven't had to muck with my Solaris 10 X4500 much, but this is how we have our pools set up:

  pool: diskpool1
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        diskpool1     ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c0t6d0    ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
            c1t6d0    ONLINE       0     0     0
            c1t7d0    ONLINE       0     0     0
            c4t6d0    ONLINE       0     0     0
            c4t7d0    ONLINE       0     0     0
            c5t5d0    ONLINE       0     0     0
            c5t6d0    ONLINE       0     0     0
            c6t6d0    ONLINE       0     0     0
            c7t6d0    ONLINE       0     0     0
        spares
          c5t7d0      AVAIL
          c6t7d0      AVAIL
          c7t7d0      AVAIL
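
For reference, a pool like that would be created along these lines (reconstructed from the status output above, so treat it as a sketch rather than the exact command we ran):

    zpool create diskpool1 \
        raidz2 c0t6d0 c0t7d0 c1t6d0 c1t7d0 c4t6d0 \
               c4t7d0 c5t5d0 c5t6d0 c6t6d0 c7t6d0 \
        spare c5t7d0 c6t7d0 c7t7d0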


We have three or four shared spares spread across the various ZFS pools.
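
ZFS lets the same device be listed as a spare in more than one pool, so sharing them is just a matter of adding each device to every pool that should be able to grab it, e.g. ("diskpool2" being a hypothetical second pool on the same box):

    zpool add diskpool2 spare c5t7d0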

Hope that helps.

John
 