
SDDPCM path selection


trifo (MIS, HU)
May 9, 2002
Hi!

We have ended up with a rather complex SAN environment: a Shark ESS 800 storage server with 8 FC adapters, and 3 FC adapters in our servers. This results in 12 paths showing up in "sddpcm query device" output for every LUN.

Some people have told me that 12 paths is too many for the SDD driver to handle effectively, so I should disable some of them and leave open, say, 4 paths per server.

What do you think about it?

Thanks,

--Trifo
 
That's pcmpath, not sddpcm imho...

pcmpath set device dd path pp offline

will disable a specific path for a device, but you'd need to do that X times for all your LUNs. And I think you'd need to do it again after a reboot.
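For example, something along these lines could script that up. The device and path numbers below are purely hypothetical examples (check your own "pcmpath query device" output for the real ones), and the loop only echoes the commands as a dry run; remove the echo to actually run them:

```shell
# Hypothetical sketch: take paths 6-11 offline for devices 0-95.
# Device and path numbers are examples only - check "pcmpath query device"
# for your actual numbering. "echo" makes this a dry run; remove it to
# really issue the pcmpath commands.
cmds=$(for dev in $(seq 0 95); do
  for p in 6 7 8 9 10 11; do
    echo pcmpath set device "$dev" path "$p" offline
  done
done)
echo "$cmds"
```

(On older AIX without seq, a while loop with a counter does the same job.)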

I'd prefer to "hide" specific LUNs from certain ESS host adapters and/or server FC adapters. You can do that in the ESS specialist, or you could set up zoning to accomplish the same result.

I am assuming you present the LUNs to the server via 4 host adapters to all 3 server FC adapters, hence 12 paths per LUN. If you present each LUN to only 2 host adapters (evening out host adapter usage by alternating between 2 sets of 2 host adapters) and all 3 server FC adapters, you get down to 6 paths per LUN. Or better yet, install a fourth adapter in the server; that makes load balancing a bit easier, since you can then alternate between 2 sets of 2 adapters on the server side as well.


HTH,

p5wizard
 
Hi!

Yes, you are right, that command is pcmpath, and the driver is called sddpcm.

Installing one more FC adapter is not an option.

As far as I know, 12 paths is a limitation built into SDDPCM. What is certain is that we have 8 FCs in the ESS and 3 FCs in the server.

The question was whether leaving all 12 paths open can cause any performance problem. Reading long outputs from pcmpath and similar commands is not a problem for me, but a performance drawback or stability risk is a pain in the ass.

--Trifo
 
Well, you may have LUNs defined for all 8 host adapters on the ESS side and all 3 FC adapters on the server side, but I think you are wasting FC and/or PCI bandwidth as well as load-balancing and failover capability.

Can you show "pcmpath query device 1" and "pcmpath query adapter" output please?


HTH,

p5wizard
 
Here is the story:

[tt]
root@c51 / pcmpath query device 5

DEV#:   5  DEVICE NAME: hdisk5  TYPE: 2105800  ALGORITHM: Load Balance
SERIAL: 11730165
==========================================================================
Path#    Adapter/Path Name    State      Mode     Select  Errors
    0         fscsi0/path0     OPEN    NORMAL     134639       0
    1         fscsi0/path1     OPEN    NORMAL     134708       0
    2         fscsi1/path2     OPEN    NORMAL     133982       0
    3         fscsi1/path3     OPEN    NORMAL     134584       0
    4         fscsi2/path4     OPEN    NORMAL     134259       0
    5         fscsi2/path5     OPEN    NORMAL     134320       0
    6         fscsi0/path6     OPEN    NORMAL     134628       0
    7         fscsi0/path7     OPEN    NORMAL     134362       0
    8         fscsi1/path8     OPEN    NORMAL     134011       0
    9         fscsi1/path9     OPEN    NORMAL     133283       0
   10        fscsi2/path10     OPEN    NORMAL     134218       0
   11        fscsi2/path11     OPEN    NORMAL     134387       0


root@c51 / pcmpath query adapter

Active Adapters: 3

Adpt#    Name    State     Mode      Select  Errors  Paths  Active
    0  fscsi1   NORMAL   ACTIVE   13666596        0     96      96
    1  fscsi0   NORMAL   ACTIVE   13545239        0     96      96
    2  fscsi2   NORMAL   ACTIVE   13539120        0     96      96

[/tt]

But where do you see wasted IO resources?

--Trifo
 
And what does pcmpath query essmap show in the "Connection" and "Port" columns?

And what is the SAN configuration? One director? Multiple switches? Zoning?


HTH,

p5wizard
 
This is the pcmpath query essmap output (for the above disk)

[tt]
Disk    Path    P  Location      adapter  LUN SN    Connection   port
------  ------  -  ------------  -------  --------  -----------  ----
hdisk5  path0      07-08-01[FC]  fscsi0   11730165  R1-B1-H1-ZA     0
hdisk5  path1      07-08-01[FC]  fscsi0   11730165  R1-B3-H1-ZA    80
hdisk5  path2      08-08-01[FC]  fscsi1   11730165  R1-B2-H1-ZA    20
hdisk5  path3      08-08-01[FC]  fscsi1   11730165  R1-B4-H1-ZA    a0
hdisk5  path4      0C-08-01[FC]  fscsi2   11730165  R1-B1-H1-ZA     0
hdisk5  path5      0C-08-01[FC]  fscsi2   11730165  R1-B3-H1-ZA    80
hdisk5  path6      07-08-01[FC]  fscsi0   11730165  R1-B2-H2-ZA    24
hdisk5  path7      07-08-01[FC]  fscsi0   11730165  R1-B4-H2-ZA    a4
hdisk5  path8      08-08-01[FC]  fscsi1   11730165  R1-B3-H2-ZA    84
hdisk5  path9      08-08-01[FC]  fscsi1   11730165  R1-B1-H2-ZA     4
hdisk5  path10  O  0C-08-01[FC]  fscsi2   11730165  R1-B2-H2-ZA    24
hdisk5  path11  O  0C-08-01[FC]  fscsi2   11730165  R1-B4-H2-ZA    a4
[/tt]

We have 2 SAN directors; every server has its own zone containing all of its FC adapters and all FC ports in the ESS. The same zoning configuration is distributed across the two switches.

--Trifo
 
1 adapter is connected to the first director and finds 4 paths to 4 of the 8 ESS ports.

2 adapters are connected to the 2nd director and each finds 4 paths to the other 4 ESS ports (8 subtotal for this director).

So total is 12 paths as you stated.

For any server with 3 FC adapters, the paths via the 2nd director will do two thirds of the IO work (and the ESS ports in question also carry 2/3 of the load), while the other 4 ESS ports carry only 1/3.

If you have more than one server and you alternate the 2-connection/1-connection directors, that should even things out a bit.
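As a back-of-the-envelope check (a sketch, assuming SDD's load balancing spreads I/O evenly over all 12 paths), the per-director share follows directly from the path counts:

```shell
# Sketch: per-director load share when I/O is spread evenly over all paths.
# Assumes the path counts described above (1 adapter via director 1,
# 2 adapters via director 2, 4 ESS ports behind each).
paths_dir1=4    # 1 server adapter x 4 ESS ports behind director 1
paths_dir2=8    # 2 server adapters x 4 ESS ports behind director 2
total=$((paths_dir1 + paths_dir2))
echo "director 1 share: $((100 * paths_dir1 / total))%"   # -> 33%
echo "director 2 share: $((100 * paths_dir2 / total))%"   # -> 66%
```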

Is it better to have fewer paths active? I don't know... But I do believe there's a limit to the total number of active paths, so you might run into another maximum in the future.

In any case, your 12 paths per disk are sharing the load evenly (see the "Select" column in "pcmpath query device"), so you should be okay on that.
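To put a number on "evenly", one could compute the spread between the busiest and quietest path. A sketch, using the Select counts from the "pcmpath query device 5" output posted above (in practice you'd feed in the live command output instead):

```shell
# Sketch: measure how evenly the Select counts are spread across paths.
# The counts below are the 12 Select values from the thread's sample output.
select_counts='134639 134708 133982 134584 134259 134320 134628 134362 134011 133283 134218 134387'
spread=$(echo "$select_counts" | awk '{
  min = $1 + 0; max = $1 + 0
  for (i = 2; i <= NF; i++) {
    if ($i + 0 < min) min = $i + 0
    if ($i + 0 > max) max = $i + 0
  }
  printf "min=%d max=%d spread=%.2f%%", min, max, (max - min) * 100.0 / max
}')
echo "$spread"
```

Here the spread comes out around 1%, which is about as balanced as it gets.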

You can limit the number of paths by restricting the zoning, but that will take some figuring out of which ESS port is where, and it will make the configuration more difficult to maintain...


HTH,

p5wizard
 
From SDD User's Guide:

You can have a maximum of 32 paths per SDD vpath device regardless of the number of LUNs configured. However, configuring more paths than is needed for failover protection might consume too many system resources and degrade system performance. You should use the minimum number of paths necessary to achieve sufficient redundancy in the SAN environment. The recommended number of paths is 2 - 4.
To avoid exceeding the maximum number of paths per SDD vpath device on AIX 5.2 or above, follow the recommendations in Table 12.

Table 12. Recommended maximum paths supported for different numbers of LUNs on AIX 5.2 or above

  Number of LUNs          Maximum paths per vpath
  1 - 600 vpath LUNs      16
  601 - 900 vpath LUNs    8
  901 - 1200 vpath LUNs*  4

Note: * In order to configure 1200 LUNs, APAR IY49825 is required.

I believe the same numbers apply to SDDPCM (maximum number of paths per hdisk).
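A quick way to audit this against the guide's recommendation (a sketch; the threshold of 4 is the guide's recommended figure, and the sample data is an abridged version of the essmap output posted above - in practice you'd pipe in live "pcmpath query essmap" output):

```shell
# Sketch: count paths per hdisk in "pcmpath query essmap"-style output and
# flag any disk above the recommended maximum of 4 paths.
# Sample data abridged (Disk and Path columns only) from the thread.
essmap='hdisk5 path0
hdisk5 path1
hdisk5 path2
hdisk5 path3
hdisk5 path4
hdisk5 path5
hdisk5 path6
hdisk5 path7
hdisk5 path8
hdisk5 path9
hdisk5 path10
hdisk5 path11'
over_limit=$(echo "$essmap" | awk '{ n[$1]++ }
END { for (d in n) if (n[d] > 4) print d, "has", n[d], "paths" }')
echo "$over_limit"
```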

HTH,

p5wizard
 