
HP DL380: Want to expand array in an interesting way


UltraZero (Technical User), Jul 20, 2011
Hi folks. I have an interesting idea as to how to expand an array.

Has anyone tried this?

We know that with RAID 5 and above there is a failsafe: if one disk is destroyed, you can simply remove the drive and install a new one to replace it. So, how about this idea?

Say you have 6 drives in a system: 5 in the array and 1 as an online spare, for a total capacity of 400 GB (100 GB each).

So, what would happen if you removed drive 0, replaced it with a 1 TB drive, let the array rebuild, and then did the same for drives 1 through 4? After that you would have 4 TB worth of disk space and a 1 TB online spare.
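For what it's worth, the capacity arithmetic works out, since RAID 5 gives you (n - 1) drives' worth of usable space. A quick sketch in Python, using just the sizes from this post:

# RAID 5 usable capacity is (number of drives - 1) x drive size
def raid5_usable_gb(drives_in_array, drive_gb):
    return (drives_in_array - 1) * drive_gb

print(raid5_usable_gb(5, 100))   # current array: 400 GB usable
print(raid5_usable_gb(5, 1000))  # after the 1 TB swaps: 4000 GB (4 TB)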

Is this possible??

Maybe the system would not see the new 4 TB logical volume but only the original one, so the logical volume would need to be expanded or extended.

Anyone try this before??

 
UltraZero,
Depending upon several factors (OS, RAID level, logical volume, drive, etc.), it has been done. The keys are to have verified backups and to wait long enough for each replacement drive to completely rebuild before attempting to swap out the next one. After the array expansion is complete, you'll need to extend the logical drive(s), or create new one(s), before the OS can see the new space. Then you'll need to do whatever the OS requires to utilize the new space: create or expand volumes. Realize that some operating systems will not allow you to expand the boot volume (e.g. Windows Server 2003 will not allow you to expand the C:\ drive; you'd need a third-party app).
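Once the OS-level work is done, a quick way to confirm the new space is actually visible is a capacity check, for example in Python (the drive letter here is just an example):

import shutil

# Report total and free space on the extended volume
total, used, free = shutil.disk_usage("C:\\")
print(f"total={total / 1e9:.1f} GB  free={free / 1e9:.1f} GB")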

You can read through these threads if you want more details.




Light travels faster than sound. That's why some people appear bright until you hear them speak.
 
Hmm.
I have done this without the spare drive, although only with smaller drives, successfully. You'll need a battery-backed write cache to allow you to extend the logical drive at the end.
However, it is going to be a long process, since if memory serves, when you take the first disk out it will rebuild the array using the spare. You'll then put a fresh disk in and it will rebuild again, moving the data off the spare. And so on.
That would at least double your rebuild time.

I would also make sure the firmware on the server and the Smart Array are up to date. I know there was a bug in some firmware a little while ago that could give a false positive on the rebuild state, i.e. say the array was rebuilt when it wasn't. Although that was RAID 1 rather than RAID 5, I think.

Neill
 
Hi.

Thanks for the replies, all. I would expect that each drive takes about 15 minutes per GB, so the rebuild would take a very long time. My only concern is that upon adding the last drive, the jump for the partition will be rather large. I understand the boot partition might be a problem as well. Is the rebuild based on the drive size, the partition size, or the amount of data? I would think it's the drive size. If so, I would think the jump from 400 GB to 4 TB is going to take a long time. 6+ years..

(15 minutes X 4000 meg = 60,000 /24 = 2500 / 365 = 6.849315068493151)

WOW... If this is true, by the time this is done, I will be on a different version of the OS (Windows 10 Server) and a new server, an HP DL380 G10.

LOL..
 
UltraZero,
If 15 minutes per MB is accurate it will still take a while; however, your math is off. You forgot to convert the minutes to hours before converting into years:

15 minutes x 4000 MB = 60,000 minutes
60,000 minutes / 60 (minutes per hour) = 1,000 hours
1,000 hours / 24 (hours per day) = 41.67 days


But I don't think it will take that long. With today's advancements in hardware, my guess is that you could get it done within a couple of weeks. That said, it is completely dependent upon the load on the server. If it's getting hit hard 24/7, the expansion process will take a lot longer (unless you change its priority).

Good luck


 
UltraZero, I'm off by a factor, although the calculation itself is accurate. I wrote:
15 minutes x 4000 MB = 60,000 minutes
It should be: 15 minutes x 4000 GB = 60,000 minutes
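A quick sanity check of the corrected arithmetic in Python (purely illustrative; 15 minutes per GB is just the figure quoted above):

# 15 minutes per GB across a 4 TB (4000 GB) array
minutes_per_gb = 15
size_gb = 4000

total_minutes = minutes_per_gb * size_gb  # 60,000 minutes
hours = total_minutes / 60                # 1,000 hours
days = hours / 24                         # roughly 41.7 days
print(total_minutes, hours, round(days, 1))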


Jim,
How did you come up with how long the process took? I'm wondering if "4 days" was exact, or if you came in to work on the morning of the 5th day and it was finished. I'm curious because you just added 1 drive. The timing of the work day may affect how long it would take to replace multiple drives. For example, if your first drive finishes at 11:00 PM, you're probably not going to start rebuilding the next drive until the following morning. In that case you'd lose 7+ hours of rebuild time. Multiply that out by the number of drives in the array and it could take significantly longer.




 
I wasn't trying to say your case would take 4 days. I was just trying to give you a data point from a real-life example.

The 4 days was an estimate of the actual time it took to rebuild, but like you said, I probably came in on the 4th day and it was done. And I just added 1 drive of the same capacity to the array: I went from 4 x 1.5 TB drives with a hot spare to 5 x 1.5 TB drives with a hot spare. If memory serves correctly, the HP management console might show a percentage done, to give you a clue as to how long it will take.

All I know is, I am not used to timing anything in DAYS on a modern computer. I am expecting things to be done in seconds!

In your case I'm sure it will take a long time, and there are lots of chances for something to go wrong. If you have your OS on a separate partition (the standard config seems to be the OS in a RAID 1 volume and the data in a separate volume), you would probably be ahead to just back up the data, build a new volume, and restore to it.
 
AHHH.

I went back and read the document again. I'm sorry. I made a mistake.
The rebuild time is 15-30 SECONDS per GB.

Whew......

Thanks for asking again... I'm glad I went back and reviewed the document.

Sooooo..

Here we go again..

Worst case..


30 seconds per GB
4 TB = 4000 GB
30 x 4000 = 120,000 seconds / 60 = 2,000 minutes / 60 = ~33 hours

So..

33 hours to rebuild a 4TB array.

Ah.. But there's more.
What wasn't mentioned is whether the rebuild is based on the volume size or the amount of data in the array, so the numbers are still a little off.

If 4 TB is the size of the new logical volume, then this would not be right, seeing as the logical volume will not be 4 TB while doing the swaps. The logical volume cannot be expanded until all the drives are in the server. So the rebuild time has to be based on the current existing volume in the system, and the additional space should not be counted.

How's this? It should be a little closer.

30 seconds per GB
400 GB array

30 x 400 = 12,000 seconds / 60 / 60 = ~3.3 hours total array rebuild time, using their worst-case numbers. I chose the worst-case numbers because the rebuild speed depends on whether it's happening on a heavily used server. The more I/O and processor time in use in a production environment, the slower the rebuild will be.
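Here's the same estimate as a small Python helper, using the 15-30 seconds per GB range quoted from the document (a rough sketch; actual rebuild times depend on the controller, firmware, and server load):

# Hours for one rebuild pass over the array
def rebuild_hours(seconds_per_gb, array_gb):
    return seconds_per_gb * array_gb / 3600.0

print(rebuild_hours(15, 400))  # best case:  ~1.7 hours per pass
print(rebuild_hours(30, 400))  # worst case: ~3.3 hours per pass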
 
UltraZero,
If I remember correctly, the rebuild time is based on the size of the defined array, not the logical volume or the amount of data on it. If you add a new drive you haven't increased the size of the array yet; you'll do that after all the old drives have been replaced.

Scenario: replace the current 4 x 1 TB drives with 4 larger drives.
So, if you currently have 4 x 1 TB drives, you have a 4 TB array (rounded for simplicity of the example; RAID levels apply to logical volumes). If you replace drive 1, using your rebuild time from above, it would take about 33.3 hours. When that has finished you can replace drive 2, which will take another 33.3 hours. So to replace all 4 drives will take a total of 133.2 hours (about 5.5 days) of rebuild time. Then you can expand the array and extend the logical volume(s) to include the new drive size.

Granted, the rebuild time of any specific drive has other factors involved: firmware versions, drive speeds, server utilization, etc. But generally speaking, the total rebuild time is the time per drive multiplied by the number of drives in the array.
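As a rough sketch of that rule of thumb (illustrative only, ignoring the other factors just mentioned):

# Total rebuild time when swapping drives one at a time:
# each swap triggers a full rebuild pass over the whole array.
def total_rebuild_hours(seconds_per_gb, array_gb, drives_replaced):
    per_pass_hours = seconds_per_gb * array_gb / 3600.0
    return per_pass_hours * drives_replaced

# The 4 x 1 TB example above: ~33.3 h per pass, ~133 h (~5.5 days) total
print(total_rebuild_hours(30, 4000, 4))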




 
I'll buy that.

Since the array gets rebuilt once for each drive replaced, that time has to be added up. Totally agree.

Now.

What about the online spare?? Any issues with removing the existing one and adding a larger one??

There are several ways to do this.

One would be to remove it until the rebuild is done.
 
Just thinking logically, I would guess a hot spare wouldn't make a difference, as it is just sitting there until another drive fails. Also, it isn't included in the size of the array.
However, I'd think you would have to replace it before expanding the array.


 
Well, I think this drive, even though it is not spinning or in use, has been identified: the server learned what kind of drive it was when the array was originally set up. So if you don't replace this drive at the beginning before rebuilding the array, I think the system will have a brain fart if it isn't taken care of up front, like you said. I remember setting up my system: the drive was identified up front, the array was created, and then I designated the last drive(s) as online spares. Once this was done, I rebooted with the OS for the installation. I basically gave the system some time to rebuild before I installed the operating system. The online spare(s) were already dark (not spun up, in a wait state).
I think I would go into the array utility and see if this drive can be disassociated from the system first. Then I would see if I could associate a new, larger drive as the online spare. Once I did that, I would try the exchange.

When I purchase some more drives, I will actually try this. My test server has nothing more than an operating system and Active Directory, so it shouldn't be too much of a problem. If it fails, cool. I like to work from the broken end up; it gives me experience.

 
Sooo..

Getting back to this one. I have done one test with the following:

1 DL380 G3
4 x 18 GB hard drives
1 x 18 GB hard drive to add to the array and allocate the additional space


Here are my findings.

I built a Windows 200x server with HP Insight Manager. Generic install.
RAID 5.

I then installed the 18 GB drive into the server, ran the HP array utility, rescanned the SCSI bus, and the additional drive showed up.

I then expanded the array with the utility.

This took approximately 30 hours to do.

That wasn't the end of it: I then needed to allocate the new additional space to an existing partition. This took another half a day, minimum. I aborted the expansion due to the time it took simply to bring the server up to the additional capacity.

I then took the same server and rebuilt it with OS/backup software/updates/partitions. This took 3 hours. (This was with 4 x 146 GB drives.)

I conclude, based on a small installation, that expanding the array is not something I'd want to do. I could restore a full server in less time than it took to simply expand the array.

In an environment where 24-hour operation is a must and small expansions are needed, maybe this would be a fix, depending upon how quickly the disk space is needed. Otherwise, I would suggest simply plunking down the cash, buying a new server with a ton of disk space on it, and migrating to that server. Servers are not as expensive as they were 10 years ago, and disks are faster and cheaper per megabyte than in times past.

Note: the other thing I was interested in finding out was what would happen if the server were gracefully powered down during this time. The server came back without a problem; it looks like the controller card is caching/handling all transactions. The server was shut down overnight and there were no issues when powered up.

I pulled the drives, so I think I could go back and finish the partition upgrade. If I do, I will post what happened after that.
 