
SBC PPM

Status
Not open for further replies.

wpetilli

Technical User
May 17, 2011
On my SBC I have 3 IPs running on B1: one each for registration, PPM, and file transfers. It's been running fine, but PPM has stopped, and our DNS probe failed it over to another SBC. Is there a way to restart just the PPM function on this SBC without restarting the whole SBC application?
 
Yeah. sems is the management service for an EMS, and ss is the SBC service. nginx-data runs the reverse proxies and nginx-mgmt holds the config. Typically when you edit a proxy, the box restarts nginx-mgmt to pick up the change without bringing down nginx as a whole.

On my 8.1 box, I can status, stop, start, and restart these:
Code:
[root@SBCE ipcs]# service nginx-data status
● nginx-data.service - LSB: Manages the SBC instance of nginx
   Loaded: loaded (/etc/rc.d/init.d/nginx-data; bad; vendor preset: disabled)
   Active: active (running) since Tue 2020-04-28 11:10:53 EDT; 1 months 24 days ago
     Docs: man:systemd-sysv-generator(8)
  Process: 13149 ExecStop=/etc/rc.d/init.d/nginx-data stop (code=exited, status=0/SUCCESS)
  Process: 13054 ExecReload=/etc/rc.d/init.d/nginx-data reload (code=exited, status=0/SUCCESS)
  Process: 13357 ExecStart=/etc/rc.d/init.d/nginx-data start (code=exited, status=0/SUCCESS)
 Main PID: 13397 (nginx)
   CGroup: /system.slice/nginx-data.service
           ├─13397 nginx: master process /usr/local/nginx/bin/nginx -c /usr/l...
           ├─13398 nginx: worker process
           ├─13399 nginx: worker process
           ├─13400 nginx: worker process
           └─13401 nginx: worker process

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
[root@SBCE ipcs]# service nginx-mgmt status
● nginx-mgmt.service - LSB: Manages the EMS instance of nginx
   Loaded: loaded (/etc/rc.d/init.d/nginx-mgmt; bad; vendor preset: disabled)
   Active: active (running) since Tue 2020-04-28 11:08:31 EDT; 1 months 24 days ago
     Docs: man:systemd-sysv-generator(8)
  Process: 10840 ExecStart=/etc/rc.d/init.d/nginx-mgmt start (code=exited, status=0/SUCCESS)
 Main PID: 10940 (nginx)
   CGroup: /system.slice/nginx-mgmt.service
           ├─10940 nginx: master process /usr/local/nginx/bin/nginx -c /usr/l...
           └─10941 nginx: worker process

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
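For what it's worth, here's a sketch of bouncing just the nginx instances, leaving the SIP stack (ss) and the rest of the SBC application alone. Service names are taken from the status output above; verify them on your own release before running this in production:

```shell
# Check, then restart, only the nginx instances -- the SIP stack and
# the SBC application itself are not touched by these commands.
service nginx-data status     # data-plane reverse proxies (PPM, file xfer)
service nginx-mgmt status     # management/config instance
service nginx-data restart
service nginx-mgmt restart
service nginx-data status     # confirm it's back to "active (running)"
```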


 
We have DNS names that resolve to the public IPs for the signaling, file transfer, etc. Our DNS probe alerted that it couldn't hit 443 on the public IP, but only for that one function. All the signaling and phones remained fine and functional; just no buttons. All of these IPs are on the B1 interface too, so it's kind of odd.
 
Check the status of the service, and you can tshark -i any port 443 and see what happens.

I've seen firewalls let the TCP SYN through to the SBC but never actually pass the Client Hello from the phone. So look for packets that come through with very few bytes and certainly not the beginning of a TLS handshake.

Otherwise, you should see a Client Hello hit 443 in a pcap, and if the SBC doesn't answer or do anything with it, then you can be pretty sure nginx puked.
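Per the suggestion above, the capture could look like this (the interface choice and output path are just examples):

```shell
# Watch 443 live on all interfaces; a Client Hello with no Server Hello
# in reply points at nginx on the SBC rather than at the network.
tshark -i any port 443

# Or write a pcap for offline analysis instead:
tshark -i any -w /tmp/ppm-443.pcap port 443
```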
What's the full version string in the EMS?
 
7.2.2.2-04-17187

I wound up restarting the application before I saw your message. Not sure if I have a config issue, but that DNS probe sends that traffic to our other SBC when it doesn't get responses on 443. So, the signaling remained fine on the primary SBC but the PPM portion failed over. Should that theoretically work in that state where these services are split?
 
Not really. You should be coming from the same A1 IP for both. SM won't have a fun time seeing your SIP from 1 IP and your PPM from another.

So, how do you have that all setup?

On 8.1, you can use the "listen domain" of reverse proxies to pick where the traffic goes.
So, I have all my FQDNs for AMM, AADS, Presence, etc. on 1 IP, and the reverse proxies each have a listen domain; if the one with listen domain "aads.kyle.com" hears on port 443, that proxy sends the traffic to AADS, and so on.

Maybe I haven't played around with the SIP phones enough to force the issue, but I find they always ask for PPM by IP address. IX Workplace will ask via an FQDN.

In the reverse proxies, if I use "listen domain" for a bunch of them and then add another with no listen domain (like for PPM, since hardphones ask by IP and you can't put an IP in a listen domain), that reverse proxy overrides the rest, so I can't run PPM and all the other UC on 1 IP.

So I use 1 IP for SIP/PPM and 1 for everything else. That's at data center 1 to get you to your primary SM; repeat with 2 more IPs at data center 2 for your secondary.

So what's your DNS probe doing? Something like probing "sip-primary.lab.com" and directing it to the primary public IP if it's answering, otherwise to a second public IP? That's what I'd do for data center failover for anything UC, but not for the SMs, at least as far as I'm understanding it.
 
On SBC 1 I use the B1 interface with multiple IPs: signaling and PPM on one, file xfer on another. I got the DNS alert only on the file xfer IP/name, so I didn't think much of it. The fact that signaling remained up while PPM failed is a bit of a mystery; I'd have thought they would both have failed.

I think I need to stand up another set of SBCs to be able to test different things. This was the 2nd time this SBC acted funky and an application restart fixed it.

 
For the probe, yes: for the file xfer stuff it probes 443, and if there's no response it points the name at the IP of the other DC's SBC. For the signaling and PPM it probes 5060 and does the same thing. In this case the probe failed on 443 for file xfer, but we didn't see any alerts for the 5060 IP. Unless PPM uses 443 as well and we just don't have a probe set for that port on that IP. I guess that would make sense, but it's odd that only 443 would start failing across the IPs and not 5060.
 
It makes total sense. Nginx is the web stuff: 80/443 = nginx in the SBC. The SIP stack is a different process.
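A quick way to see that split on the box itself, using the standard iproute2 socket tool (the grep patterns are just examples; your SIP ports may differ):

```shell
# Which process owns the web ports vs. the SIP ports.
ss -ltnp | grep ':443'    # expect nginx listening here
ss -ltnp | grep ':5061'   # SIP/TLS belongs to the SIP stack, not nginx
ss -lunp | grep ':5060'   # SIP/UDP likewise
```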

If you can make the rules, then I'd say "failure on either 5060 or 443 for the external SM SIP IP means send both 5060 and 443 to the secondary IP."
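A minimal sketch of that rule as plain logic, assuming the probe results are already in hand (choose_target is a hypothetical helper; a real DNS appliance has its own rule syntax):

```shell
#!/bin/sh
# Fail SIP and PPM over together: if EITHER probe fails on the primary,
# hand out the secondary IP for BOTH services so SM sees them paired.
# Usage: choose_target <5060_ok> <443_ok>   (1 = probe passed, 0 = failed)
choose_target() {
  if [ "$1" -eq 1 ] && [ "$2" -eq 1 ]; then
    echo "primary"
  else
    echo "secondary"
  fi
}

choose_target 1 1   # both probes pass -> primary
choose_target 1 0   # 443 down only   -> secondary (SIP and PPM together)
```

The point of tying them together is that a split state (SIP on one SBC, PPM on the other) is exactly what caused trouble above.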
 
Can you run a tracesbc with a filter on HTTP traffic? If so, you may see that the normal HTTP traffic between B1 and A1 has stopped working in the latest SBC release 8.1.
In my case, service nginx-mgmt status highlighted some entries in red.
A service nginx-mgmt stop, then reload, then start fixed it for me, and the issue didn't show up again when monitoring HTTP traffic in tracesbc afterwards.

 