
2800 Config Question ~ Perc 4e/DI


jmcqueen

Technical User
Aug 27, 2008
hi all:

first, let me say that i am not really that technical when it comes to server hardware.

Option 1 Goal: i wish to migrate a 2x73gb raid1 mirror (CentOS) to a 2x146gb raid1 mirror, but want to preserve and copy the existing mirror over to the new disks.

Option 2 Goal: keep the existing 2x73gb raid1 mirror and add a 2x146gb raid1 mirror (2nd array?) as a storage option to back my other servers up. there are a total of 3 servers that i would like to have complete backups for, in order to do a bare-metal recovery if it is ever needed.

my existing pe2800 is configured with a single 1x8 backplane and perc 4e/di raid controller. what is the best way to accomplish either option above?

thx in advance...
 
Option 1: Image the server. Remove old array, configure and initialize new larger array. Restore image. Resize partitions.

Option 2. Stick more disks in. Create new array.

Upon bootup, the PERC will post and state "Hit F-something for PERC utilities" or something to that effect. You will create arrays from there.


--
The stagehand's axiom: "Never lift what you can drag, never drag what you can roll, never roll what you can leave."
 
Crap---guess that's what imaging the server would do...lol..sorry

Burt
 
Option 1 Goal: ....
Place the 2 new 146 gig drives in available slots and create an additional new raid 1 from the 146 gig disks in the raid bios (Ctrl-M). With 3rd-party software, clone the 2x73gb raid1 mirror to the new 146 gig mirror, telling the software to expand the source to use the full capacity of the target. Cloning is easier than imaging/resizing partitions. Run a filesystem check on the source before cloning or imaging (chkdsk /f on Windows; on a CentOS box use fsck, as sketched below).
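Since the source array here is CentOS rather than Windows, the pre-clone check would be fsck instead of chkdsk. A minimal sketch, run from a rescue CD with the partitions unmounted (the device names are only examples; check yours with fdisk -l first):

# fsck -f /dev/sda1 (force-check the /boot filesystem)
# fsck -f /dev/sda3 (force-check the large root/data filesystem)

Cloning a filesystem that has latent errors just copies the errors onto the new array, so the extra reboot is worth it.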

Option 2 Goal:
Place the 2 new 146 gig drives in available slots, create an additional new raid 1 from the new disks in the raid bios (ctrl-M).

Dell Perc raid setup...



........................................
Chernobyl disaster..a must see pictorial
 
thanks everyone! technome, for linux what software would you recommend to clone the existing 2x73gb raid1 mirror?

thx in advance.
 
technome said:
cloning is easier than imaging/resizing partitions.
Good point. Once cloned, would you need to change the SCSI boot order?


 
"Good point. Once cloned, would you need to change the SCSI boot order?"
Good point
Depends. Acronis seems to start the "destination" up automatically after cloning (which I do not like). If that is not the case with other software, then you would need to go into the raid bios and set the new array as the booting array (easy to find and do).

JMCqueen, you do need to verify the new cloned array will start up before storing it as an archive.
Before pulling the old array as an archive or reusing it for the likes of added storage, let the new array run for a couple of weeks, just to make sure the new drives have no manufacturing defects. The Perc 4E does Patrol Reads, which is important: a Patrol Read checks the disks' entire surfaces for defects, something the raid adapter does not otherwise do except during rebuilds (which is why multiple disk failures generally occur during rebuilds). Make sure it is set to run automatically, or kick it off from a scheduled command line.
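If Dell OpenManage Server Administrator is installed on the box, Patrol Read can also be started from a cron job. This is only a rough sketch; the controller number and the exact omconfig action are assumptions that should be checked against the OMSA version actually installed (and use the full path to omconfig for your install):

# root crontab entry: kick off a Patrol Read every Sunday at 2am (controller 0 assumed)
0 2 * * 0 omconfig storage controller controller=0 action=startpatrolread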

For an "instant restore".....
Once you have your new larger raid 1 cloned... for an almost instant bare metal restore, get another new 146 gig drive, pull one of the raid 1 drives, and place the new drive in; the array will rebuild automatically. The pulled drive becomes an almost-instant-restore clone of the raid 1: if all else fails, that drive can be introduced to a raid adapter, which will accept it as a degraded raid 1, and once another disk is added the raid will rebuild very fast. Last time I did this the rebuild took about 5 minutes with 73 gig drives, unlike parity raids, which can take hours to days to rebuild. I have a client where I do this for an "instant restore" before MS patches, SQL patches or drive firmware upgrades, where I figure something may go wrong.


Never used it for Linux but Acronis makes True Image server for Linux... have Nitro Bid handy when you see the price.
Surely cheaper than rebuilding servers from scratch.


Lastly, which is no concern to JMC: if drives are spread across multiple channels of an array adapter, caution needs to be taken, as array drives cannot roam across different array channels. In the Windows server environment, if the servers are AD, tombstoning becomes an issue.


 
the new drives on this linux CentOS 5 box are actually going to be 15k 300gb scsi, which are an upgrade from the 10k 73gb scsi's. can't i just plug a new drive in and let it rebuild to the new disk?

many thanks...
 
Yes you can, but you'll end up with a 73Gb array. AFAIK, once an array size has been established it cannot be changed, it must be recreated.
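One quick way to confirm what size the OS actually sees for the logical drive after a rebuild (assuming the array presents as /dev/sda, as elsewhere in this thread):

# fdisk -l /dev/sda (prints the disk size and partition table the OS sees; after a simple rebuild it will still report the old ~73gb)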

 
ok, so here is what i ended up doing on my raid1 array on my CentOS 4.4 box (i was sure it was 5.0, but doh!):

1. imaged /dev/sda1 and /dev/sda3 with a rescue CD onto a 500gb mybook usb drive (cloning would have been better but i don't have the $'s for acronis)
2. pulled existing 2x73gb scsi's and put the new drives in
3. booted the blank system and hit CTRL-M to update the perc raid config for the new drive size
4. inserted the original CentOS install disk and did a "minimal" install, with partitions like the original config: /dev/sda1 /boot ext3 500mb, /dev/sda2 / ext3 fill disk, /dev/sda3 swap 2000mb
5. rebooted into the rescue CD, ran partimage, and restored the original images to /dev/sda1 and /dev/sda3
6. removed the rescue CD and rebooted. the server fired right up with 0 errors (lucky me!), but the size on /dev/sda3 still showed as 69gb (bummer!)
7. since this was a CentOS 4.4 box, i logged in as root and did:
# ext2online -d -v /dev/sda3
(had it been a CentOS 5.0 box, i would have used resize2fs)
8. ext2online grew the /dev/sda3 to max size. took a while 'cause i decided to go with 300gb scsi's
9. rebooted and now Ensim X admin panel shows:

8% Used
23832.906 / 279126.555MB Used

so, now i have the full 300gb to work with for backing up the other servers i have, on bullet-proof 15k.5 seagate 300gb scsi's.
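for anyone repeating this, steps 1, 5 and 7 look roughly like the following at the command line. the usb mount point and the /dev/sdb1 device name are just assumptions from my setup, and partimage has to run against unmounted partitions, so the save/restore parts are done from the rescue CD:

# mount /dev/sdb1 /mnt/usb (the mybook usb drive; device name will vary)
# partimage save /dev/sda1 /mnt/usb/sda1-boot.img (step 1: image /boot)
# partimage save /dev/sda3 /mnt/usb/sda3.img (step 1: image the big partition)
...swap the drives, recreate the raid 1 in CTRL-M and do the minimal install (steps 2-4)...
# partimage restore /dev/sda1 /mnt/usb/sda1-boot.img.000 (step 5: partimage appends .000 to saved images)
# partimage restore /dev/sda3 /mnt/usb/sda3.img.000 (step 5)
...remove the rescue CD and reboot into the restored system (step 6)...
# ext2online -d -v /dev/sda3 (step 7: grow the filesystem to fill the new partition; on CentOS 5 use resize2fs /dev/sda3 instead)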

what i have learned: this is not an easy project and most of what you find posted in forums will probably not work on your specific box. and, since i had to do a full re-install of the OS, i know this is not the preferred route. and, if you don't have physical access to the box, hire a pro that knows what they are doing.
 