I have many disk arrays connected to servers via switches;
now I would like to centralize (connect together) the SANs and,
at the same time, the whole storage. How can I approach this?
What sort of disk arrays and switches? Are you talking about servers having fibre SAN switches and separate SAN storage already, or do they have internal storage?
Thanks for your answer,
-Disk arrays are CX300, CX500, CX3-40, CX3-80
-Switches are Brocade DS200B for most of them and
others Brocade integrated in a Blade Chassis.
What you can do is invest in a core-edge setup, with two big core switches (directors or similar) in a fabric, and then plug the other switches into that fabric. Beware, though: this will be a costly operation, but it is the only way to build a central and stable SAN.
Thanks for your answer,
How do I set up zoning in such a configuration?
How do I migrate systems that are in production?
I suppose the switch firmware versions should
be upgraded.
You can create zoning through the GUI or through the command line (on Brocade, the "zonecreate" command). Implementing this SAN will be a migration, with downtime. Upgrading firmware will fix bugs, but keep in mind that most storage vendors have a certification matrix; upgrading can make you non-compliant with that matrix and risk losing support for the connected systems.
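To make the CLI route concrete, here is a rough sketch of single-initiator zoning on the Brocade Fabric OS command line. All the alias, zone, and config names (and the WWPNs) are made up for illustration; substitute the WWPNs of your own HBAs and CX storage ports.

```shell
# Hypothetical names and WWPNs -- adjust to your environment.
# Create aliases for a host HBA port and a CX3-40 SP front-end port:
alicreate "host1_hba0", "10:00:00:00:c9:12:34:56"
alicreate "cx340_spa0", "50:06:01:60:41:e0:12:34"

# Single-initiator zoning: one host HBA plus the storage port(s) it needs:
zonecreate "z_host1_hba0_cx340", "host1_hba0; cx340_spa0"

# Put the zone in a configuration and activate it fabric-wide:
cfgcreate "prod_cfg", "z_host1_hba0_cx340"
cfgenable "prod_cfg"
cfgsave
```

Sticking to one-initiator-per-zone keeps the zone set manageable and limits the blast radius when you later merge fabrics, since only conflicting zone/config names cause a merge to fail.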
Some thoughts:
- Are all switches "current" with their microcode? Probably not...
- Are all switches set to the same SAN interop mode? Are there no conflicting interop settings and/or configuration parameters? (I doubt it)
- Are all switch zones compatible with each other (I doubt this very much)
- Are the distances between the different switches compatible with SW fiber and if not, do you have the necessary LW SFPs and fiber cables?
- Do you have the licenses to do SAN trunking between the new core switches and the existing (to-become-edge) switches?
How do you go from the existing setup to a core/edge setup with systems in production? If your current setup is dual SAN fabrics with some sort of path failover on the servers, you can leave the 1st fabric as it is (after verifying that all paths on both fabrics are functioning); merge the 2nd fabric into the core/edge design on both the server side and the SAN-storage side, adapt the zoning, and re-discover the paths on the servers;
make sure the new paths are functional before doing the same for the 1st fabric.
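One way to do that path verification, assuming the hosts run EMC PowerPath (common with CX arrays), is from the host CLI before and after each fabric change. This is just a sketch; on Linux native multipathing you would use `multipath -ll` instead.

```shell
# Before touching either fabric: every LUN should show live paths on both HBAs.
powermt display dev=all    # look for path state "alive" on both adapters

# After re-cabling/re-zoning one fabric:
powermt config             # pick up newly discovered paths
powermt restore            # retest paths that were marked dead
powermt display dev=all    # confirm all paths are "alive" before touching the other fabric
```

The point is never to modify a fabric while it carries the only working path to a LUN.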
If you are not comfortable with all these actions, get help from a SAN specialist!
I totally agree with p5wizard here. If you have good SAN experience, you can do this by yourself, and it can be done online, but only if you currently have multipathing through different HBAs. You could then remove one path and hook it into the new SAN, verify connectivity on that new path, and if all is OK, hook up the second HBA to the new SAN. But it would be wise to have a SAN expert nearby when you do this, because sometimes a SAN behaves nicely at first and then starts showing errors (timeouts, ...) as more and more user load comes onto it. This is mostly caused by small details that have been overlooked. If you order new material, always be sure to order some extra SFPs and fibers; it wouldn't be the first time some gear was dead on arrival right when you needed it during a migration.
Regarding this process, I have 2 questions for my information:
1) Is it advisable to have 2 different Brocade switches for redundancy?
2) How can I link 2 DS200B switches to get 2 x 16 ports
(minus the ports used for linking)?
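On question 2: linking two switches is just cabling one or more ports together as inter-switch links (ISLs); the ports negotiate as E-Ports and the switches merge into one fabric, provided their domain IDs differ and their zoning configs don't conflict. A sketch of how you might verify the link from the Fabric OS CLI on either switch:

```shell
# After cabling one (or two, if trunking is licensed) ports between the switches:
switchshow     # the cabled port(s) should come up as E-Port
islshow        # lists the inter-switch links and their negotiated speed
fabricshow     # both switches should now appear in the same fabric
cfgshow        # check the merged zoning config; a name conflict segments the fabric
```

Note that each ISL port is consumed on both switches, so two 16-port DS200Bs with one ISL give you 30 usable device ports, not 32; and for redundancy it is generally better to run two separate fabrics than one merged one.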