
AIX LUN issue


nyck (Technical User)
I have a very strange issue with creating LUNs on this AIX server!

Output below:

powermt display dev=all

Pseudo name=hdiskpower1
CLARiiON ID=CK200043600821 [LONIBM2]
Logical device ID=600601605A101200A882025D6C63DB11 [LUN 10]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
0 fscsi0 hdisk5 SP B1 active alive 0 0

Pseudo name=hdiskpower0
CLARiiON ID=CK200043600821 [LONIBM2]
Logical device ID=600601605A101200CA2376BC9A62DB11 [LUN 5]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
0 fscsi0 hdisk2 SP A1 active alive 0 0
0 fscsi0 hdisk3 SP B1 active alive 0 0



The first LUN is hdiskpower0, which as you can see is fine, but when I create the second LUN (hdiskpower1) I only get one path!



I have tried this about five times and always get the same output. Any ideas?



As soon as the LUN is created it fails over. The hdiskpower1 LUN should be on SP A, but as soon as it is allocated to the AIX server it fails over to SP B!



 
First, did you try powermt config to include all the disks in your configuration?



Second, list all defined disks with:
lsdev -Cc disk | grep EMC

You should see four different hdisks (two paths per LUN).

If you see fewer, retry cfgmgr.

Also report your PowerPath version with: powermt version
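
A minimal consolidated sketch of those checks, run as root on the AIX host:

powermt config                 # have PowerPath claim any unclaimed CLARiiON hdisks
lsdev -Cc disk | grep EMC      # expect four EMC hdisks (two paths per LUN)
cfgmgr                         # re-run device discovery if any hdisk is missing
powermt version                # report the installed PowerPath version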
 
Below is what I have just tried:

powermt config
Warning: all licenses for storage systems support are missing or expired.
# powermt display dev=all
Pseudo name=hdiskpower1
CLARiiON ID=CK200043600821 [LONIBM2]
Logical device ID=600601605A101200A4B674342964DB11 [LUN 10]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP B
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
0 fscsi0 hdisk5 SP B1 active alive 0 0

Pseudo name=hdiskpower0
CLARiiON ID=CK200043600821 [LONIBM2]
Logical device ID=600601605A101200CA2376BC9A62DB11 [LUN 5]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
0 fscsi0 hdisk2 SP A1 active alive 0 0
0 fscsi0 hdisk3 SP B1 active alive 0 0



# lsdev -Cc disk |grep EMC
hdisk2 Available 00-08-01 EMC CLARiiON FCP RAID 5 Disk
hdisk3 Available 00-08-01 EMC CLARiiON FCP RAID 5 Disk
hdisk4 Available 00-08-01 EMC CLARiiON FCP RAID 5 Disk
hdisk5 Available 00-08-01 EMC CLARiiON FCP RAID 5 Disk
# powermt version
EMC powermt for PowerPath (c) Version 4.5.2 (build 4)
 
You can try this:

rmdev -dl hdisk4

cfgmgr

powermt config

powermt display dev=all (to look for the new hdisk included in hdiskpower1)
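
A minimal sketch of that sequence, with what to look for at each step (hdisk4 is assumed to be the missing SP A path for hdiskpower1):

rmdev -dl hdisk4          # delete the stale hdisk definition from the ODM
cfgmgr                    # rediscover the device over the fibre channel adapter
powermt config            # let PowerPath claim the rediscovered hdisk
powermt display dev=all   # hdiskpower1 should now list both an SP A and an SP B path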
 
I have just tried the rmdev command and I got the following output:

rmdev -dl hdisk4
Method error (/usr/lib/methods/ucfgdevice):
0514-062 Cannot perform the requested function because the
specified device is busy.

I'm going to log this issue with our SAN support people and see what they come back with. The strange thing is that I was able to add one LUN before I started having issues, and that worked perfectly fine!
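
Before escalating, a quick sketch of non-destructive checks for what might be holding hdisk4 busy (just a suggestion; hdisk4 as in the outputs above):

lspv | grep hdisk4                      # is the disk already assigned to a volume group?
powermt display dev=all | grep hdisk4   # is PowerPath itself still holding the path?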
 
Your disk looks like it is being used by another tool, such as MPIO...

You may also have had a problem during hdiskpower creation; you could try removing the last created hdiskpower:

powermt remove dev=hdiskpower1

rmdev -dl hdisk4
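
A minimal sketch of the full release-and-rebuild order implied above, assuming hdisk4 and hdisk5 are the two paths behind hdiskpower1:

powermt remove dev=hdiskpower1   # release the pseudo device so PowerPath lets go of its paths
rmdev -dl hdisk4                 # remove the underlying hdisks from the ODM
rmdev -dl hdisk5
cfgmgr                           # rediscover both paths
powermt config                   # rebuild hdiskpower1, hopefully with both paths this time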

 
I have started this whole process from scratch again. The first LUN that I allocated to my server is fine, but when I allocate the second LUN I'm only getting one path; at least it's not failing over this time. Any other ideas as to what is going wrong here?

powermt display dev=all
Pseudo name=hdiskpower1
CLARiiON ID=CK200043600821 [LONIBM2]
Logical device ID=600601605A1012002694E60CD464DB11 [LUN 10]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
0 fscsi0 hdisk5 SP B1 active alive 0 0

Pseudo name=hdiskpower0
CLARiiON ID=CK200043600821 [LONIBM2]
Logical device ID=600601605A101200B6F6858CD164DB11 [LUN 5]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
0 fscsi0 hdisk2 SP A1 active alive 0 0
0 fscsi0 hdisk3 SP B1 active alive 0 0
 
If the Navisphere client is installed,

give us the output of the HBA information section of this command:


/usr/lpp/NAVICLI/navicli -h ip_address_of_the_cx getall
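
If your navicli build supports the getall category switches (an assumption on my part; it depends on the Navisphere CLI version), you can limit the report to just the HBA/initiator records:

/usr/lpp/NAVICLI/navicli -h ip_address_of_the_cx getall -hba   # shows whether both host initiators are registered and logged in to both SPs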
 
Hi,
Just a thought: could it be a license issue?
"Warning: all licenses for storage systems support are missing or expired"
Maybe it only allows one LUN on one path when there is more than one LUN.

Secondly, have you tried creating another LUN, leaving LUNs 10 and 5 in place? Does the same thing happen?
 
It looks like the version of PowerPath we were running (4.5.2) had a major bug; see below:

229043 A single LUN configured to an AIX host is not configured correctly
by PowerPath and does not load balance I/O. Both the AIX lquerypv command
and the EMC inq command can only access the LUN through one path.

So we removed PowerPath, installed 4.5.3, and then created a few LUNs, and everything worked great!

Cheers for all the help on this rather annoying issue!
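
For anyone hitting the same thing, a minimal post-upgrade sanity check (device names as in the outputs above):

powermt version           # should now report 4.5.3 or later
powermt display dev=all   # each hdiskpower device should list one SP A path and one SP B path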
 
Good, you were lucky.

I'm trying a migration from 4.4.2 to 4.5.3 and I'm still blocked in the first part of the install; I've been waiting a few days for support to solve my problem.
 
Part of the issue I see here has to do with how you're connected to the CX:
A1 & B1 are (or should be) on the same fabric.

You should be connected to a combination like:
A1 / B0
A0 / B1
A2 / B3

Versions could be an issue; however, there are also several recently identified critical AIX patches, dealing with "fast fail", that you should look into (see the sketch below).

I would also look at "ODM".
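
For the "fast fail" point, a sketch of the fscsi attributes to check (fscsi0 assumed from the outputs above; with -P the chdev change is deferred until the next reboot):

lsattr -El fscsi0                                            # look at fc_err_recov and dyntrk
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P   # enable fast fail and dynamic tracking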
 