
NDS Partitioning across WAN

Status
Not open for further replies.

tekieVB

MIS
Apr 25, 2002
112
US
I have a customer who has a NetWare 5.1 network with 26 remote branches. All sites are connected via T1. I am in the process of getting all sites up to date with SP7 and setting up TCP/IP along with DHCP.

Right now all master replicas of the partitions are held at the HQ location on one central server (HQ has only one NetWare server), with R/W replicas spread throughout the network according to the four partitions they currently have set up. I thought this was a poor design, since all replication involving the master replicas has to route back to the central office, so I was going to recommend moving some of the master replicas out to the remote offices so as not to have a single point of failure. However, I found out that their frame-relay cloud is not a full mesh, so all sites have to come back through the corporate site to route to the other remote sites <not sure if I am using the correct terminology in talking about the WAN connectivity>.

So I am not certain whether I should change the NDS partition structure at all. I was trying to reduce the NDS replication traffic back to HQ and to get away from the single point of failure, but I don't think I can do that with their current frame-relay configuration.

NDS partition right now is this:

NDS_TREE - Root (Master for all 4 partitions at HQ) (2 R/W Sites / 1 HQ)
|
|
-- Partition 1 (4 R/W Sites/ 2 SR / 1 Master HQ)
|
|
-- Partition 2 (2 R/W Sites / 1 SR / 1 Master HQ)
|
|
-- Partition 3 (5 R/W Sites / 2 SR / 1 Master HQ)
|
|
-- Partition 4 (13 R/W Sites / 3 SR / 1 Master HQ)

* Partition 4 is the largest; it has 13 new sites and is still growing. It is structured along their business. *

On top of this, all sites have a bindery context set; some applications they run will not work without a bindery context being set.
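For anyone following along, the bindery context is normally set in the server's AUTOEXEC.NCF. A minimal sketch follows (the container names are hypothetical); keep in mind that a server can only answer bindery requests for contexts in partitions it holds a writable replica of, and NetWare 5.x allows up to 16 contexts in the list:

```
# AUTOEXEC.NCF (fragment) - container names are hypothetical
# The server must hold a writable replica of the partition
# that contains each listed context.
SET BINDERY CONTEXT = OU=BRANCH01.O=ACME;OU=HQ.O=ACME
```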

Any ideas from you other guys would be greatly appreciated!

Still in thinking mode right now. Also, I am changing the IPX routing protocol to NLSP and getting all servers set up on TCP/IP, with an SLP DA set up at HQ. Eventually we will remove IPX altogether.
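On the SLP side, the usual NetWare 5.x pattern is to load the Directory Agent NLM on the central server and point the branch servers at it through SYS:ETC\SLP.CFG; IP clients can be handed the DA address via DHCP option 78 (per RFC 2610). A sketch, with a hypothetical DA address:

```
# On the HQ server (the SLP Directory Agent):
LOAD SLPDA

# On each branch server, in SYS:ETC\SLP.CFG
# (10.0.0.1 is a hypothetical HQ address):
DA IPV4, 10.0.0.1
```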
 
It looks like you are contradicting yourself.

You can't have both: reducing replication traffic and getting away from the single point of failure pull in opposite directions on a star WAN.

We have a similar structure (star topology). Here is how ours is set up.

Every partition has 3 replicas: one at head office, one at a redundancy site, and one at the local branch. The one at head office is always the master; that is our main NDS master server, and we give it plenty of memory. Our tree actually supports multiple companies. The one thing we always make sure of is that a server only holds replicas related to its own company.

It is easier to manage if all master replicas are located in one place, because the master's server is the one that performs the primary NDS changes, such as partition splits.

Any object used by a local site should probably be held in a replica at that branch. For resources that need to be shared globally, you can reduce the number of replicas to cut replication.

If your network is a star rather than a mesh, I can't see how you can reduce replication without redesigning your NDS object and resource layout.

You should design the partitions so that everything the local branch needs is held locally, so that clients do not have to refer back to head office for information.

Regards.
 
