
Veritas Volume Manager questions 4


dandan123 (Technical User) - Sep 9, 2005
I'm going through the Veritas Volume Manager 3.5 documentation to understand how to administer it.

Unfortunately I do not have access to a system with Veritas and a disk array to play around with.

I have a bunch of questions to ask, and if any of you who have experience with Veritas would answer them I would appreciate it.

As an example, let's say you attach a disk array with 16 drives in it.

The first step would be to load the package which comes with it so the drives are recognized, right?

Now once you load the package (driver or whatever it's called), how do these sixteen drives appear on Solaris? Will they all have cxtydz numbers?

Do we have to create slices on these drives like we would normally do on built-in drives?

TIA
 
It all depends on what kind and type of storage you have attached.

+ Hardware RAID storage?
- The 'volume manager' is built into the storage, so you do not want to use a software volume manager on top of it. Only if you want to mirror two separate storage boxes.

+ JBOD (Just a Bunch Of Disks)?
- Usually you do not need to load any special drivers (they're just JBODs).
- All disks will show up in format (after a reconfigure boot or "devfsadm -c disk") as c?t?d0 (see the sketch below).

You treat the external drives exactly the same as you are used to with built-in drives (assuming we are talking all SCSI, or all FC-AL).
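
A quick sketch of that, in case it helps (controller and target numbers are just placeholders):

devfsadm -c disk     # create the /dev/dsk and /dev/rdsk entries for the newly attached disks
echo | format        # non-interactive way to list every disk the system now sees
# or do a full reconfigure boot instead:
# touch /reconfigure ; init 6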
 
dandan123,

you would use Sun's RM6 with an A1000. With RM6 you can set up your RAID. You can get RM6 from the Sun download center.

Thanks

CA
 
The A1000 is a strange one among the Sun storage boxes.
It is hardware RAID, but you need the RM6 software installed on the connected server to get it working (as cndcadams writes). All other Sun hardware RAID boxes work without needing software installed on the connected server.

You can even do more with the GUI than on the command line (very unusual).
So connect the A1000 to the server, install the RM6 software, fire up the GUI and click yourself through the configuration.
You will end up with LUNs showing up in format, which you can use the same as normal disks.
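
Roughly what that boils down to at the shell once the GUI work is done (the RM6 path below is from memory, so treat it as an assumption and check where your install put the tools):

/usr/lib/osa/bin/lad     # RM6 "list array devices" tool: shows the arrays/LUNs it manages
echo | format            # the configured LUNs appear here like ordinary c?t?d? disks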
 
Thanks for your responses everyone, things are getting clearer now.

Will, can you give me some examples of Sun RAIDs which do not require any other software, like you mentioned in your post?

Do these boxes connect using a SCSI port or fiber?
 
E.g. the D1000 is a SCSI (copper) connected array with up to 12 disks; you need no additional software to see the disks, they just show up as new c?t?d?s? devices.

The first step to bring these disks under VM control is to encapsulate them (preserve data and partition table) or initialize them (new partition table -> private and public region, all data on the disk is lost).
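
A rough sketch of those two paths (the disk, disk group and media names below are made up):

/etc/vx/bin/vxdisksetup -i c1t0d0              # initialize: new private + public regions, existing data is gone
/etc/vx/bin/vxencap -g mydg mydisk01=c1t0d0    # encapsulate: keep the existing slices/data, usually needs a reboot to finish
# the same two operations are also offered as menu items in vxdiskadm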

Best Regards, Franz
--
Solaris System Manager from Munich, Germany
I used to work for Sun Microsystems Support (EMEA) for 5 years in the domain of the OS, Backup and Storage
 
Let's say we attach a D1000. Now after a reconfigure boot the drives are going to show up as

c0t0d0
c0t1d0
c0t2d0

and so on.

Now do we create slices on these drives using format ? Typically do we just create one large slice occupying almost the entire drive and a small amount of disk space for the private region required by Veritas ?

 
The D1000 (already EOL'd by Sun) is a JBOD.
Say it is connected to a PCI SCSI card (which needs to be a differential SCSI card for the D1000!), on controller 1:
c1t0d0
....
c1t5d0
c1t8d0
....
c1t13d0

When you initialize the disks within VxVM, the partitioning is automatically done by VxVM (see daFranze's comment).
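
If you want to see what VxVM actually did to a disk, something like this works (using the first example device above):

vxdisk list                     # summary: the disk should now show up with a VxVM status
prtvtoc /dev/rdsk/c1t0d0s2      # the VTOC now contains the private/public region slices VxVM created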
 
OK, I wasn't clear on whether you had to partition them first before initializing them with VxVM, but your answer clarifies that.

Thanks
 
The Veritas documentation says -

"Each VM disk corresponds to at least one physical disk or partition".

Doesn't this mean that the disks need to have partitions on them before they are brought under VxVM control ?
 
Please bear with me as I stumble along...

After some more reading through the Veritas documentation this is what I figured.

Disks that are encapsulated preserve the original partitioning.

Disks that are initialized lose the original partitioning information, and each physical drive is assigned a VM disk name.

Is this correct ?
 
Is VxVM typically also used for clustering or is clustering handled by other software ?
 
You use Sun Cluster software together with VxVM or SVM.

Your understanding in the previous question is correct, but that was already answered by daFranze.
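
A quick way to double-check how a given disk ended up (the -o alldgs flag may depend on your VxVM version; plain vxdisk list does the job too):

vxdisk -o alldgs list     # one line per disk: type, disk group membership and status
# a disk not yet under VxVM control typically shows a status of "online invalid";
# initialized or encapsulated disks that joined a disk group show "online" plus the group name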
 
Is VEA just a GUI version of vxdiskadm? Are there any differences in terms of capability between the two?

Under what circumstances would you use one over the other?

TIA
 
vxdiskadm is designed specifically for managing disks. I think you can do much more in VEA including creating volumes and resizing them, etc. Personally I stick strictly to the command line tools because they generally do what I tell them to, rather than what they think I want them to do.
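
For completeness, both are started from the shell; the VEA paths here are from memory, so take them as assumptions rather than gospel:

vxdiskadm                 # text menus for disk-level tasks (add, encapsulate, remove, replace...)
/opt/VRTSob/bin/vea &     # the VEA GUI client (talks to the VEA service, vxsvc, on the server)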

Annihilannic.
 
When connecting a JBOD, from my reading of the Veritas documentation a physical disk gets mapped to ONE VM disk; is that correct?

So if you start with 10 disks in a JBOD you would end up with 10 VM disks?
 
I see only two ways of adding disks to a disk group -

vxdiskadm and vxdiskadd,

and they both seem to assign one VM disk to one physical disk.

Is it possible to assign more than one VM disk to a physical disk, or one VM disk to several physical disks?
 
No... that's what volumes are for! You can have multiple volumes on a disk, or multiple disks in a volume. I guess it's a question of terminology.

To initialise the disks (i.e. install the partition tables for the private and public regions that Volume Manager uses) I used vxdisksetup -i cNtNdN.

Then to create your first disk group, let's call it "mydg", you would use vxdg init mydg mydisk01=cNtNdN.

To add subsequent disks to that disk group you would use vxdg -g mydg adddisk mydisk02=cMtMdM, and so on.

To create a simple concatenated (i.e. neither mirrored nor striped) volume across those two disks, you could use vxassist -g mydg make myvol01 30g mydisk01 mydisk02.

mkfs -F vxfs /dev/vx/rdsk/mydg/myvol01 to create a VxFS filesystem, and mkdir /myfs01 ; mount -F vxfs /dev/vx/dsk/mydg/myvol01 /myfs01 to mount it. You can of course still use UFS if you prefer by using mkfs -F ufs or newfs.

Use vxprint -thg mydg to see the magic you have worked.
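
Two small follow-ups in the same vein (names, sizes and mount point are again made up): a second volume on the same disk, which is the "multiple volumes on a disk" case mentioned above, and a vfstab entry so the filesystem comes back after a reboot:

vxassist -g mydg make myvol02 5g mydisk01     # another volume carved out of the same disk
# /etc/vfstab entry for the first filesystem (all on one line; fsck pass and options to taste):
# /dev/vx/dsk/mydg/myvol01  /dev/vx/rdsk/mydg/myvol01  /myfs01  vxfs  2  yes  -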

Annihilannic.
 