SDS Errors


Smallcombe Carl

Technical User
Apr 10, 2017
Hi

A customer has created SDS errors which I cannot resolve by programming. The cluster has 30-odd sites and 16 are affected. I tried a debug, but the issue persisted when creating the number.

Any ideas please.


[Attached screenshot: nhs_spkx4s.png]
 
Isn't the maximum size of a cluster with sharing 20 sites?

I've never hit the limit but I've heard there is one.

**********************************************
What's most important is that you realise ... There is no spoon.
 
No, the limit is higher than that; I think around 200 is the recommended max.
 
Take a look at the audit trail logs for the systems.
Call product support if you can't figure it out.

A cluster supports up to 999 elements.
An admin group maxes out at 20 nodes.
 
Re: Admin Group Max is 20 sites

So, isn't the admin group how you define which sites share with other sites?

Doesn't that mean my earlier statement is true?

**********************************************
What's most important is that you realise ... There is no spoon.
 
No. You can share SDS data with more than just the admin group, normally the entire cluster. While you can have a cluster of up to 999 nodes, SDS starts to cause problems on clusters larger than 200 nodes, depending on the number of SDS data updates being pushed out among the cluster members.
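To put rough numbers on why big clusters struggle: if every data change has to be pushed out to every other sharing node, the update traffic grows with cluster size multiplied by the change rate. The snippet below is a simplified, hypothetical model of that fan-out, not Mitel's actual distribution mechanism, but it illustrates why a 999-node cluster with heavy SDS updates is a very different beast from a 32-node one:

```python
# Rough, hypothetical model of SDS distribution fan-out.
# Assumption: each change is pushed once to every other sharing node;
# the real product may batch or throttle updates differently.

def sds_updates_per_day(cluster_nodes: int, changes_per_day: int) -> int:
    """Estimate distributed update messages per day across the cluster."""
    return changes_per_day * (cluster_nodes - 1)

for nodes in (20, 32, 200, 999):
    print(f"{nodes:>4} nodes, 500 changes/day -> "
          f"{sds_updates_per_day(nodes, 500):,} update messages/day")
```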
 
A couple of months ago we had the same problem, but worse. We have 32 systems in the cluster and they were producing around 2,000 SDS logs. The limit for administrative groups is 20, but you can create two or more administrative groups; that has nothing to do with the cluster limit (up to 200).

There is a manual where you can check the conditions for avoiding these logs and how to resolve them, called "Migration to RDN / SDS Extended Mode". If you need it and have no way to get it, send me your e-mail. It describes the conditions for arranging the cluster so you avoid these particular logs. In my case, the systems around the cluster had different country options, and matching those is the first condition they have to meet, among others. I think you also have to do a couple of resiliency-level syncs and run the GDM check and repair commands to fix the DB, because these logs are generated when the DBs around the cluster are not consistent. Good luck!
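When you are staring at a couple of thousand SDS logs, it helps to see whether they cluster around particular node pairs or error descriptions (for example, a country-option mismatch between two specific elements) before touching anything. The sketch below assumes you have exported the errors to a CSV with source, destination and error-description columns; those column names are hypothetical and will need adjusting to whatever your export actually contains:

```python
# Minimal sketch: group exported SDS distribution errors to spot patterns.
# Assumption: errors were exported to CSV with the (hypothetical) columns
# "source", "destination" and "error"; adjust to your real export format.
import csv
from collections import Counter

def summarize_sds_errors(path: str) -> None:
    pair_counts: Counter = Counter()
    error_counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            pair_counts[(row["source"], row["destination"])] += 1
            error_counts[row["error"]] += 1

    print("Top node pairs:")
    for (src, dst), n in pair_counts.most_common(5):
        print(f"  {src} -> {dst}: {n}")
    print("Top error descriptions:")
    for err, n in error_counts.most_common(5):
        print(f"  {err}: {n}")

if __name__ == "__main__":
    summarize_sds_errors("sds_errors.csv")
```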
 
Resolved it. I went into Network Elements and did a Sync, and that fixed it. Not sure of the purpose of the SDS Sync, as that did nothing.
 
We have been told to just delete the SDS errors.

If I never did anything I'd never done before , I'd never do anything.....

 
Billz66, why would you just delete them? They could mean the remote directory isn't being programmed, and then no internal calls can be made to the numbers affected by the SDS errors.
 
We have a large cluster:

a central MiVB and MiCollab with flow-through provisioning, and 18 remote MXes.

In normal operation, SDS errors occurred almost every time a change was made.
When creating large numbers of users there were hundreds; some of them resolved themselves if left long enough, some remained.

Advice from Mitel was to wait a while so as many as possible could self-resolve, but never to retry, because normally the SDS error had in fact already been resolved and retrying can cause more issues
(my thinking at that stage: why show us SDS errors at all?).
Then delete them all.
If you have issues, run GDM Repair (you can run this at any time),
and if issues still exist, do another full sync.

Doing a GDM Repair and sync seems to be the best method of confirming that all data is correct.

If I never did anything I'd never done before , I'd never do anything.....
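The advice above boils down to a simple triage rule: leave fresh errors alone to self-resolve, and only act on the ones that have been sitting there for a while (delete them, then GDM Repair and a full sync if problems persist). Below is a rough sketch of that rule, again assuming a hypothetical CSV export of the error list, here with an ISO-format "timestamp" column, which is not necessarily how a real export looks:

```python
# Sketch of the "wait, then act" triage rule: fresh SDS errors are left to
# self-resolve; stale ones are flagged for review (delete, then GDM Repair
# and a full sync if problems persist).
# Assumption: a hypothetical CSV export with an ISO-8601 "timestamp" column.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(hours=24)  # arbitrary threshold; tune to your site

def triage(path: str):
    now = datetime.now()
    fresh, stale = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            age = now - datetime.fromisoformat(row["timestamp"])
            (stale if age > STALE_AFTER else fresh).append(row)
    return fresh, stale

if __name__ == "__main__":
    fresh, stale = triage("sds_errors.csv")
    print(f"{len(fresh)} fresh errors: leave them to self-resolve")
    print(f"{len(stale)} stale errors: review/delete, then GDM Repair and full sync if needed")
```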

 