MitelInMyBlood
Technical User
I think we talked about this once before, but can't find the thread now.
Am I the only one having to periodically reboot my EMgr server because OpsMan is locking up? Seems like once every 2 to 3 weeks OPS will simply stop working and won't let you log in. The initial Java screen pops up but the client won't start. Eventually it times out with an error message to the effect that OpsMgr is not running.
I've checked the server logs. There's nothing there, no errors. I've checked the services; everything's running fine. No problems in EMgr either, all sites access OK; only the OpsMan blade is hung.
The only fix seems to be a server reboot.
My TAM tells me there are "a couple" of tickets open on this, but does anyone know of a fix? It doesn't happen often, but it seems to bite us when we're busiest, swamped with MACs and unable to devote time to waiting in the hold queue and capturing traces. I realize that's little help to prod. supp., but we don't notice it's locked up until we need it, and at that point it's a crisis because midday moves are underway. We've got to reboot it and move on.
A related symptom is that *sometimes* when OM locks up the server will complain that the SAM is missing. This doesn't happen with every OM lockup, maybe a third of the time. Most times a server reboot will get us back in operation; on the occasions when it can't see the SAM we have to unplug and re-plug the SAM, then reboot.
There does not seem to be any causative trigger, such as periods of high versus low MAC activity. It can be working fine in the morning and be locked up after lunch for no apparent reason, with nothing in the server logs to indicate a cause.
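In the meantime, since the real pain is not noticing the lockup until midday moves are underway, here's a minimal watchdog sketch we've been toying with that just probes the port the OpsMan client connects on and logs when it stops answering. The hostname and port below are placeholders (substitute whatever your OpsMan service actually listens on), and this assumes a hung OpsMan stops accepting TCP connections, which may not hold for every lockup mode:

```python
#!/usr/bin/env python3
# Watchdog sketch: periodically probe the OpsMan client port and log
# when it stops answering, so a lockup is caught before it's needed.
# OPSMAN_HOST and OPSMAN_PORT are placeholders, not real values.
import socket
import time
from datetime import datetime

OPSMAN_HOST = "emgr-server.example.com"   # placeholder hostname
OPSMAN_PORT = 8080                        # placeholder port
CHECK_INTERVAL = 300                      # seconds between probes

def check_port(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watchdog():
    while True:
        if not check_port(OPSMAN_HOST, OPSMAN_PORT):
            stamp = datetime.now().isoformat(timespec="seconds")
            print(f"{stamp}  OpsMan port {OPSMAN_PORT} not answering; "
                  "possible lockup, check before midday moves")
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    watchdog()
```

Run from cron or as a service and point the print at email/syslog; at least then the reboot happens on our schedule instead of mid-crisis.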
Any ideas?