Migration to CDM MCD4.0


PureChimpie

Technical User
Jul 20, 2009
Hi Guys

Has anyone migrated a large network of 3300s to CDM? I have a cluster of 28 elements with a telephone directory of approx 9,400, all on MCD 4.0 but currently managed by OPS Manager. I have the task of migrating this cluster.

I have migrated some smaller networks of approx 7 elements before, but nothing this large!

Anything I need to be wary of? The migration procedure will be followed, i.e. migration pre-checks and database cleansing.

Cheers
 
CDM????

I'd tell you a UDP joke but I'm afraid you won't get it. TCP jokes are the best because you always get them.
 
Good luck! I would not like to see the output from the 'migration precheck' command :)

If you are not on 4.0 SP4 then I would do one last upgrade to that release on all controllers, then use the new partition to migrate to GDM. That way, if it all goes wrong, you still have your existing partition to 'swap' back to.
 
Thanks for your replies, guys.

CDM - common data distribution model.

The network is on 4.0 SP2; to be honest I don't believe we have the time to upgrade 28 elements to SP4 :(( I will, however, explore this, as the thought of reinstalling the O/S and restoring data is pretty scary!!

I take it you can just use the 'swap' command to change back to the partition holding SP4? Would this also hold the pre-migration database as well? This could be a great get-out in the event of a bad migration!!

Thanks
 
Sort of. I would use the SP4 partition to do the migrate and leave the current (known working) partition intact until you are happy with SP4 running in GDM, and then upgrade to 5.0 from there. These things are best not rushed, as it will take a lot longer to fix errors caused by a bad migrate.
 
Never migrated that many. I guess management is the key. That's a lot of elements to have to do at one time.

 
Had a meeting with the customer and an upgrade to SP4 is a no-go! I am just going to have to check and double-check again prior to the migration. We have a freeze on all systems for 3 weeks to allow a thorough data cleanse. Once all the migration pre-checks come back with no inconsistencies I will hit the button and pray it goes well! It will be initiated from an ISS (Sun Fire X4150) server, which should help the process along.

Thanks for your replies.
 
4.0 SP4 is the best release to be on before migration due to several fixes to the SDS forms.
Allow plenty of time to make sure you deal with any pre-migration check errors on all systems.
Once migrated to GDM (common data sharing), upgrading to release 5.0 SP1 PR1 will take around 1-1.5 hrs per system, depending on system type and network bandwidth.
You can run several upgrades simultaneously from a single PC; however, leave around 5-10 mins between each so the systems are not competing for FTP file transfers, as this will only slow things down. I normally only do 4 at a time from a single PC/server.
Even at 4 at a time you are still looking at a minimum of 7-8 hrs (with no issues to deal with) to complete 28 systems by yourself; if 2 or more techs can work at the same time it will be much quicker.
The last time I upgraded that many I had the assistance of another tech and we managed to get 30 systems completed in 6 hrs.
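
If you want to sanity-check those numbers, here is a rough back-of-envelope sketch in Python. The figures are just the assumptions quoted above (28 systems, batches of 4 from one PC/server, 1-1.5 hrs per system, 5-10 min stagger); it is only an estimate, not any Mitel tool.

import math

systems = 28
batch_size = 4  # concurrent upgrades from a single PC/server

batches = math.ceil(systems / batch_size)  # 7 batches of 4

for per_system_hrs, stagger_min in [(1.0, 5), (1.5, 10)]:
    # A batch finishes roughly when its last (staggered) upgrade does.
    batch_hrs = per_system_hrs + (batch_size - 1) * stagger_min / 60.0
    total_hrs = batches * batch_hrs
    print(f"{per_system_hrs} hrs/system, {stagger_min} min stagger: "
          f"~{total_hrs:.1f} hrs for {systems} systems, one tech")

# Prints roughly 8.8 hrs at the fast end and 14.0 hrs at the slow end;
# a second tech working in parallel roughly halves that, which is in line
# with two people getting 30 systems done in 6 hrs.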
Good luck!

Share what you know - Learn what you don't
 
Can I ask, as I have had limited experience of this so far: do you have to initiate the DB migrate to GDM from all cluster members individually?
 
Hi Bob,
Yes you have to do each one individually

 
We upgraded all the elements to MCD 4.0 SP4 prior to migrating, and double-upgraded everything so we had a rollback partition. The migration itself went successfully, but the sync of user and service hosting failed on departments & locations, caused by a corrupt entry in the department assignment on 6 of the ISS servers. A data save and restore on each ISS cleared this up and SDS is working very nicely.

Added a new box to the network last week and all SDS syncs were successful! Happy days!!

Thanks for everyone's input.
 
Supernova, you do not have to migrate each individual system. You click migrate on the system you want to use as the master node for the operation, and this then suspends management sessions on all other systems in the cluster (those sharing in the Network Element Assignment forms) for the duration of the migration; when it completes, access is allowed back into the systems.
 