IPO stability / reliability


Steerpike58 (IS-IT--Management), US · Mar 5, 2008
We have an IPO 406 in CA and another in NJ, both with a 16-port digital station expansion module, a PRI card, and a VCM card, supporting about 10 users. In September of last year, the CA system (3.2(57)) became unstable: locking up (no response to ping, Manager, etc.) and requiring a hard power cycle to recover. The lockups became more frequent (several a week) and eventually, after much pain with Avaya support endlessly requesting more traces, they replaced the system, and it has since been stable.

Our identical 3.2(57) system in NJ locked up during a simple configuration update (adding a user extension or something similar that required a reboot), carried out over the WAN from CA. It turned out the system had lost its default gateway setting as well as its configuration, but remained at our network-assigned address (10.10.1.x). Avaya support said we should not try running even simple config updates over the WAN, so we have since dedicated a local workstation in NJ to running updates on that system.

Just this week, we did an update to 4.1 in both locations. The CA update completed without incident, but the NJ update locked up during the final reboot. We had to attach a serial cable and reset the box to recover. Once recovered, the box appeared to be at 4.1, but it was not at 192.168.42.1, nor at our assigned address of 10.10.1.7; it was at 10.10.1.77 (a DHCP-assigned address, we believe). We reloaded our saved config and the system was good to go.
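For anyone who ends up in the same spot, the recovery really came down to working out which address the box had come up on after the reboot. Here's a rough sketch of that check as a script; the addresses are just the ones from our situation (the 192.168.42.1 factory default, our 10.10.1.7 static, and the 10.10.1.77 DHCP lease we found it on), so substitute your own:

#!/usr/bin/env python3
"""Post-reboot check: ping the addresses the IP Office might have come up on.

The addresses are only examples from the post above; change them to suit.
"""
import platform
import subprocess

CANDIDATES = {
    "factory default": "192.168.42.1",
    "assigned static": "10.10.1.7",
    "DHCP lease after reboot": "10.10.1.77",
}

def responds(ip: str) -> bool:
    """Send a single ping and return True if we get a reply."""
    if platform.system() == "Windows":
        cmd = ["ping", "-n", "1", "-w", "2000", ip]   # -w is milliseconds on Windows
    else:
        cmd = ["ping", "-c", "1", "-W", "2", ip]      # -W is seconds on Linux/Unix
    return subprocess.run(cmd, stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL).returncode == 0

if __name__ == "__main__":
    for label, ip in CANDIDATES.items():
        print(f"{label:25} {ip:15} {'UP' if responds(ip) else 'no reply'}")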

We have 24/7 phone requirements. Is the IPO typically this flaky? I'm now scared to touch the NJ box even for simple updates. What are others' experiences? Are we just terribly unlucky?

Thanks!
 
I'll say this about upgrades: there are many landmines in upgrading a client's software. The software level you are starting from determines the steps you have to take. It is inevitable that there are many steps, because of the advances in the equipment and the increase in the binary sizes. While it is aggravating to have to follow all of the steps to a "T", that is what professionals get paid to do.

I recently gained a new customer because the previous vendor kept trying to upgrade his system to 4.1, it kept failing, and the vendor couldn't figure out why. He would just flash the config and try again, over and over.

The unit was a 406v1.

 
Ron,
One of my largest profit centers is cleaning up the messes other people (unqualified "techs", BPs) have made of the IPO. The rampant influx of unqualified BPs ("Bend-you-over Partners"), IT un-professionals, and non-product-authorized non-techs who think they can just type shite into the GUI and walk away has made me money, and given the IPO a bad name. So much so that when I go to a new-to-me IPO site, the first thing I do is back up everything, so that if the customer does not want to do as I advise, I can simply return the programming, not charge, and walk away to protect my reputation.
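To be clear about what I mean by "back up everything": nothing fancy, just a dated copy of the saved config exports (and anything else the site has) before a single change is made. A minimal sketch of that, with placeholder paths (nothing IP Office-specific about them), pointing at wherever you save the Manager exports for the site:

#!/usr/bin/env python3
"""Take a dated copy of the saved config exports before touching a system."""
import shutil
from datetime import datetime
from pathlib import Path

EXPORT_DIR = Path(r"C:\IPO_exports\site_nj")   # example path: where the site's .cfg exports live
BACKUP_ROOT = Path(r"C:\IPO_backups")          # example path: where dated copies accumulate

def snapshot(export_dir: Path, backup_root: Path) -> Path:
    """Copy everything in export_dir into a new timestamped folder under backup_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backup_root / f"{export_dir.name}-{stamp}"
    shutil.copytree(export_dir, dest)
    return dest

if __name__ == "__main__":
    print("Backed up to", snapshot(EXPORT_DIR, BACKUP_ROOT))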
I let them know that if they expect me to take any responsibility for the state of their system, they need to give me the latitude to fix it from the start. In general, botched upgrades are one of the greatest causes of problems, and they are common to the point that if I am going to do ongoing support for a system, I assume the upgrades were botched prior to my arrival. I therefore advise that the system be downgraded and re-upgraded as part of the initial action plan.
I advise the customer that while I can fix the almost-assured long list of questionable or wrong programming and procedurally incorrect entries simply by correcting the input via the GUI, the upgrades that were previously done cannot be fixed other than by downgrading and re-upgrading the system correctly, and unless that is done I cannot take responsibility for the performance of their system. At that point I propose starting over: re-implementing the system to take advantage of its features and configuring it to suit their business plan and objectives, since that has generally not been done either; instead it was a cookie-cutter slam job, just enough to get the original installer paid.
Do you have a similar approach? Since I see you as a reputable vendor, I would like to know how you handle these situations when you know the future performance will reflect on your company. In the original poster's case, would you propose an entire rebuild of the system, as I would, since the config and firmware may have been corrupted by the piss-poor practices done previously?

 
I think the upgrade process on the IPO is very good; you just have to read the upgrade notes and understand them.

I've actually never had to reprogram a database because of a bad upgrade or some other disaster. Maybe I have just been lucky. I do make it a point to point out the pitfalls of trying to make things right again, and that there may be even more pain than the existing pain until things are fixed.

It always works out in the end. I'm lucky; I have great customers.

 
I have been "lucky" as well that my upgrades have always gone by the book proceduraly, and in outcome. I have however taken over after others have been shown the door, and found the systems not performing as designed without explaination from tech support to be fixed after a default/reprogram, or after a re-upgrade, or re-upgrade/default/reprogram.

 