
Bringing up an old thread: 3300 MXe III lockups / RAID drive replacement


Matty269 (Systems Engineer), Nov 20, 2017
Bringing up an old thread: I have a 3300 MXe III controller running Rel 8.0, version 14.0.93, with SATA RAID. Recently it just locks up: phones lose connection and there is no access to ESM. A power-off reboot brings it back for a week or two, then it happens again. Mitel recommends drive replacement as a pair. I was able to see it happen last week, and the RAID card is flashing SOS on the first drive. I know powering up with no drives clears the sockets; I'm just not sure, if I use new drives, whether the password and IP info stay with the controller or go with the bad drives. I just did a wholesale change-out to a new controller and drives, moved the i-button over, and had to re-program the IP info, sync to the AMC, and then restore from a backup.

Question 1: what stores the IP info and password?

Question 2: if only the bad drive is replaced (the THB is kind of vague here), I would power down, remove the bad drive, re-power, and let the system boot on the good drive; that should clear the socket for the bad drive. Once the system is up, plug in the replacement drive, same type and size as the old ones. The RAID should start to mirror the drive automatically and take about 2 hours (80 GB per hour, from the THB). Do we think this method might mirror corrupt or bad data from the old drive?

thanks for any input

 
make sure you have a backup

Q1
IP address is stored in VxWorks

Best method:
remove both drives
clear sockets
install 1 drive, do a software install, upgrade the cert to get AMC sync to work
then restore the backup
then install the 2nd drive and the mirror should be created

**********
Q2
Don't just remove 1 drive; always replace in a pair, and the drives should be as identical as possible.
If one drive is failing, there's a good chance the 2nd one is on its way.

Don't just look for any old drives that might be the same size, as the RAID controller sometimes doesn't recognise the 2nd drive if they are even slightly different sizes.

Best to buy 2 drives of the same size from the same manufacturer; you can use SSDs.


If I never did anything I'd never done before , I'd never do anything.....

 
Thanks, I almost forgot about the updated cert. Yes, I have a twin pack of drives with the same software revision as the original drives.
So do I have this right?
1. connect to the maint port with my laptop
2. power down the controller and remove both drives
3. power up and watch for the socket cleared message
4. install the 1st replacement drive (with software) and wait for the system to boot
5. load the updated cert
6. sync with the AMC
7. restore from the latest backup
8. install the 2nd drive and let the RAID mirror; should take about 2 hrs (160 GB drives)
Would it be best to do the mirror when the system is not busy with calls, so that the drive replacement and the mirror are all completed after hours?
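For a rough sense of the window needed, a quick back-of-envelope check using the ~80 GB/hour figure quoted from the THB above (the rate is just the THB number, not something measured here):

# Back-of-envelope RAID mirror time estimate, using the ~80 GB/hour
# rebuild rate quoted from the THB. Actual time will vary with load.
drive_size_gb = 160          # size of each drive in the pair
rebuild_rate_gb_per_hr = 80  # THB figure quoted in this thread

mirror_hours = drive_size_gb / rebuild_rate_gb_per_hr
print(f"Estimated mirror time: {mirror_hours:.1f} hours")  # -> 2.0 hours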
Thanks again

PS: when I went to bring back up this old thread, it said it would link it to this thread, but it didn't. Here is the old thread, which is closed now:

 
From memory, I don't think the mirror create puts much load on the working drive.
Also, won't you be installing after hours anyway, as the system will be offline for a while?

most important is having a good backup

If I never did anything I'd never done before , I'd never do anything.....

 
One more question:
Downtime is hard to get. I have access to a controller that is at the same software level and currently not in use; it has basic licenses. Can I use that controller to install a hard drive, load the new certificate, then sync to the AMC and restore the database from the production controller? Does the licensing/hardware have to match to restore? If possible, that would limit downtime: just clear the sockets and install the drive with the cert and backup already on it. Eventually this unused controller will have PSTN connections and will be a redundant server to this production server.
 
You can restore onto a different controller,

BUT:
licenses need to be the same or higher
hardware needs to be the same

If either is different, the data will be purged.

For example, if the original had 50 IP users and the temp had 30, 20 wouldn't be restored.
If the original had an E1 card and the temp didn't, the programming for the E1 would be purged.
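Just to spell out the capping with your 50/30 example, a throwaway sketch (the names here are made up for illustration, not Mitel terms):

# Illustrative only: models the "purge down to the temp controller's
# licenses" rule described above. Names are invented for this sketch.
def users_restored(original_users: int, temp_licensed: int) -> int:
    """How many of the original IP users survive a restore onto the temp box."""
    return min(original_users, temp_licensed)

kept = users_restored(original_users=50, temp_licensed=30)
print(f"restored: {kept}, purged: {50 - kept}")  # restored: 30, purged: 20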

Why not overlicense, restore the backup and swap?
Use it as a temp to keep services going whilst you rebuild the actual one, then swap back.

If I never did anything I'd never done before , I'd never do anything.....

 
Billz66,
Thanks for the info. Not sure; I guess I will just do the drive replacement as listed above and go from there.
 
Update:
Tried replacing the drives, but I think there may be a problem with the boot string. The system was 7.0 and was updated to 8.0 some time back. After clearing the sockets, the RAID sees the drive and starts to boot, then says it can't load the boot file. I think the controller is looking for the upgraded partition and not the original partition, and I guess the new drives only have software on the original partition and not the upgraded one. Researching changing the boot string; for now I have the system up. Thanks again guys.
 
Likely the boot file needs to be /partition1/RTC8260 instead of /partition4/RTC8260.
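For anyone following along later, this is roughly what changing that looks like from the maint port if you can stop autoboot; this is generic VxWorks boot monitor behaviour, and everything below other than the file name line is a placeholder rather than values read off a real 3300:

[VxWorks Boot]: p                    <- print the current boot parameters
  ...
  file name : /partition4/RTC8260
  ...
[VxWorks Boot]: c                    <- step through and change parameters
  file name : /partition4/RTC8260    (type the new value: /partition1/RTC8260)
  ...
[VxWorks Boot]: @                    <- boot with the changed parameters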
 
Update: thanks for all your help guys.
I used the customer's unused 3300 MXe III controller, moved all the cards and the i-button over, changed the ARID, then sync'd the licenses and did a restore from the latest backup; that system is up and running 100% now. With the old controller on a test bench, I tried to stop the auto boot from the maint port to change the boot string, but it would not recognize the space 3x; not sure if that is a COM port issue on my laptop or a PuTTY issue. So I let it boot up and then did a swap in the ESM maint commands to roll back to the older software, which moves the boot string back to the old partition. Then I cleared the sockets and installed new drive #1, loaded the new cert, sync'd the licenses, and did a restore on it as well. Then I plugged in the 2nd new drive (#2) and the RAID rebuilt automatically; it took like 45 minutes since both new drives already had software on them. So both controllers are alarm free and up and running.

THANKS
 
