
Multiple File Systems on VG

Status
Not open for further replies.

rondebbs (MIS) · Dec 28, 2005
We are getting ready to migrate our Oracle data warehouse from HP-UX to AIX with EMC storage (CX-700). Looking at the HP-UX system, there is a volume group that has 5 file systems on it - FSVG has /fs1, /fs2, /fs3, /fs4 and /fs5. Each filesystem has its own LV. They vary in size from 5GB to 50GB. These look like just misc file systems that are not related to Oracle. They contain things like FTP files, application installation files, data files, old archives, etc. Some of the FTP files are loaded into Oracle each night via Informatica.

In the new AIX system I would like to keep my objects to a minimum so that it will be easier to manage. Instead of having 5 LVs and 5 file systems, why not combine all these misc directories into one file system with one LV? The LV and filesystem would be the same total size as the 5 LVs and file systems on the HP.

Am I missing anything by combining these? Is there a performance or manageability issue? On the HP box all 5 file systems sit on the same set of disks in a RAID 1/0 config. On AIX/EMC I will likely use RAID 5 for these misc file systems and use RAID 1/0 for the Oracle database components.

 
I remember this same question being raised a while ago in this forum! I will try to find it for you!

But the best thing here is to test it!

I had to go through the same thing when we first migrated our database from an SP2 machine to the P5570, and I did consolidate 4 filesystems from the old system into only one on the new system! The system hasn't gone live yet, but the testing is going fine so far!

We are expected to go live next week!

Regards,
Khalid
 
If the FS mount points are as you stated - /fs1, /fs2, /fs3 and so on - then you can't really join them into one filesystem, as their common parent would be the root (/) filesystem, which is in rootvg, and you don't want to have application data in the root FS.

If the real mount points are different, and they share a common path name, then it is possible.
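For example, one way around it (a symlink trick, not something the OP mentioned; all names here are made up for illustration) would be to consolidate under a new common path and link the legacy paths to it:

```shell
# Hypothetical sketch: one JFS2 filesystem at /data in fsvg,
# with symlinks preserving the old /fsN paths.
# LV name, mount point, and sizes are assumptions.
mklv -y datalv -t jfs2 fsvg 100        # 100 PPs; size is illustrative
crfs -v jfs2 -d datalv -m /data -A yes # create /data on datalv, mount at boot
mount /data
mkdir /data/fs1 /data/fs2
ln -s /data/fs1 /fs1                   # old paths still resolve, but no
ln -s /data/fs2 /fs2                   # application data lives in the root FS
```

The symlinks keep scripts and Informatica jobs that reference the old /fsN paths working without change.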


HTH,

p5wizard
 
p5wizard, I think rondebbs mentioned FSVG, not rootvg!
 
If the file systems are named as you detailed in your initial post, then restoring them to a single volume group as one file system would be a problem - unless you are backing them up as ./fs2. Secondly, there is the issue of having just one file system: what if one of those file systems gets some kind of runaway process and fills the entire file system? This could lead to corruption. I would recommend isolating anything that could jeopardize your data in a separate file system. It is possible that these file systems on HP-UX were designed that way to spread the data over disks so that the load was even.
 
I was thinking of one file system, /fs, with 5 separate directories below it - fs1, fs2, fs3, etc. They would all be on fsvg, which would be on a 3+1 RAID 5.

Currently on the HP all 5 file systems are on the same set of disks and in the same VG so the only change would be having one mount point (/fs) rather than five. This also cuts the LVs from five to one.
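That layout would be built with something like the following sketch (LV name, mount point, and size are placeholders, not actual values from our setup):

```shell
# Hypothetical sketch: single LV + single JFS2 filesystem replacing
# the five HP-UX filesystems; names and sizes are illustrative.
mklv -y fslv -t jfs2 fsvg 200          # one LV sized to match the old five
crfs -v jfs2 -d fslv -m /fs -A yes     # one filesystem, mounted at boot
mount /fs
mkdir /fs/fs1 /fs/fs2 /fs/fs3 /fs/fs4 /fs/fs5   # former FSes become directories
```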
 
Depends how heavily the files are hit. If there is a lot of IO, then you may want to have each filesystem on its own disk with its own jfs2log, or you might want to do PP striping, with each filesystem spread across 5 disks, and spread the IO across log files too.

Just depends on the activity. If they aren't heavily used, then concatenate and choose your allocation policy.
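A rough sketch of the PP-spreading variant with a dedicated log (disk names, LV names, and sizes are assumptions for illustration):

```shell
# Assumed sketch: LV spread over 5 disks (-e x = maximum inter-disk range,
# i.e. PP striping) plus a dedicated JFS2 log LV on its own spindle.
mklv -y fslv -t jfs2 -e x fsvg 200 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5
mklv -y fsloglv -t jfs2log fsvg 1 hdisk1
logform /dev/fsloglv                   # initialize the log LV before first use
crfs -v jfs2 -d fslv -m /fs -a logname=/dev/fsloglv -A yes
mount /fs
```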
 
I am still concerned about your making one mount point and then distributing the current file systems into one logical volume - and therefore one file system. If some file runs away, you will stop everything from working when the file system/logical volume hits 100%. Second, RAID 5 is not a very fast way of accessing data on AIX; RAID 1+0 would be a lot faster. Someone above suggested PP striping. When you set up the logical volumes, the fastest part of the disk is the outer edge, and the range of physical volumes should be maximized. What this does is distribute the physical partitions in round-robin fashion over all of your disks - 1 PP at a time to each disk. This allows many writes to occur at the same time.
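Concretely, that placement policy maps onto mklv flags roughly like this (a sketch; the VG, LV name, and size are made up):

```shell
# Assumed sketch: outer-edge intra-disk placement (-a e) combined with
# maximum inter-disk range (-e x), so PPs are allocated round-robin,
# one PP at a time, across all disks in the VG.
mklv -y datalv -t jfs2 -a e -e x datavg 400
lslv -m datalv    # inspect the resulting PP-to-disk mapping
```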
 