
EMC FC4500 Blues


hidden75 (IS-IT--Management), Aug 10, 2005
Hi, my first post. Go easy.

I bought an FC4500, basically from eBay, for educational purposes only. The drives I received in the unit are all 18 GB drives, and I believe they are not the original drives. The problem I'm having is that I get a FLARE error during boot up. Some investigation suggests that, since these are not the original drives, I have no FLARE code.

At this point, what is my next step? I called EMC and they were basically not helpful. The unit isn't under warranty or anything.


Can I put FLARE on myself somehow? And how would I go about getting the management software I need, like Navisphere?

I guess I'm stuck.


Anybody have any suggestions?

Thank you for your help.

Dave
 
Moved past this issue: I finally got FLARE loaded and an FCLI prompt. Now on to finding Navisphere. Anyone know where I can get Navisphere?

Thanks All.
Dave
 
Navisphere is not easily obtained. It is usually distributed with the array, so you might see if the person you bought it from has the CD. The other option is to manage the device through fcli. It's not the nice and easy GUI interface of Navi, but it can be done.
 
Thanks for the reply, Maultier. You have helped me greatly.

I bought it off eBay, and the company I bought it from got it from a third-party liquidation company. No luck there. I called EMC and the unit isn't under warranty. :(

I'm working on using FCLI, but it's really confusing. I'm so used to Navisphere from my old company. Darn.

 
Well, maybe you can get your hands on NaviCLI. It, again, is not as easy to work with as the GUI, but it makes it a little easier to navigate the commands.
 
Is NaviCLI just a small version of Navisphere?

What about any other types of software?

Or is this an EMC-only unit, software-wise?
 
Navisphere CLI is a command line interface that allows you to configure and query the storage array.
I've never investigated the possibility that other vendors' products might manage the array. HP, Dell, and others resell the CLARiiON, and I think they just repackage the Navi software to fit their corporate image.
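If you do track down NaviCLI, a typical query from a host looks something like this (going from memory of later NaviCLI versions, so the exact syntax may differ on a box this old):

navicli -h <sp-ip-address> getagent

That just asks for basic agent/array information; it's a good first test that the CLI can talk to the array at all.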
 
I have set up the LAN access and I can telnet to it, but it only gives me four commands. One of them is setpass; when I enter this command it asks for the old password. Hmm... Password, Administrator, and EMC do not work. I wonder if you need to go into a privileged mode in order to get more access via telnet.

One of the commands is netstat.


The output is:


CLARiiON (spb)

08/10/2005 16:24:12
fcli> help
Notes: command full name/abbreviation - summary

exit/ex - Close a network connection
help/? - list all available commands with summary
netstat/ns - network status and statistics
setlan/sl - configure the lan settings
setpass/pa - set the passwords and their flags

08/10/2005 16:24:13
fcli> netstat

Active Internet connections
Proto Recv-Q Send-Q Local Address Foreign Address (state)
tcp 0 0 192.168.100.120:23 192.168.100.2:2163 ESTABLISHED
tcp 0 0 *:23 *:* LISTEN
tcp 0 0 *:2918 *:* LISTEN

08/10/2005 16:26:12
fcli>


Wonder what port 2918 is for? Probably Navisphere or something.
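Maybe I'll try telnetting straight to that port to see if anything answers (just poking around on my part, not a documented procedure):

telnet 192.168.100.120 2918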

I'm in the process of building two SPS cables for when the SPSs show up. Thanks to you, I have the pinout.

Dave
 
I know of no one that has had success trying to use fcli from this port. You could try typing 'debug' to see if some additional commands present themselves, but normally fcli is worked through the serial port.
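If you have a null-modem cable handy, getting on the serial console from a Linux box is something like this (9600 8N1 is my recollection for these SPs, so treat the settings as a guess and double-check your documentation):

screen /dev/ttyS0 9600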
 
OK, I'm trying to use the FCLI. I have two LUNs and two RAID groups, I guess from the previous owner. However, since all these disks were removed and put back in the wrong slots and enclosures, only one of the disks in the DPE shows UNBOUND. All the other disks, including disks 0, 1, and 2, show removed-but-bound or something. I can't figure out how to reset the disks to a like-new status: no RAID groups, no LUNs. When I use the UNBIND command, it says something about insufficient database drives.

What I'm afraid of is losing my FLARE. I don't want to drive back to the place I got the DPE from to get FLARE again.

But I guess all these drives need to be reset or something. Maybe I should put FLARE on a drive and store it in the closet, just in case I screw something up.


I'm stuck.
 

The database drives hold the CLARiiON configuration information: raid groups, LUNs, etc. If you took a drive from another site, you may have their database information. You need to delete the raid groups and the LUNs. I don't remember the command off the top of my head... chggrp or something. I'd have to log onto a CLARiiON to jog my memory, and I can't do that till tomorrow. You won't lose FLARE by removing raid groups or LUNs via the command line; FLARE is held in a private area on the disk that is unaffected by raid group or LUN changes.
 
If I had the wrong FLARE (i.e., from a different model), would it work?

I'm not sure what is going on, but for some reason I'm getting errors trying to delete the RAID group, unbind the LUN, etc. I can't get any of the drives to show up using the di -l command at all.

Something is messed up; I can't seem to clear out the old RAID groups and LUNs.

I'll do some cutting and pasting tomorrow.


Frustrating... hmm.

Dave
 
OK, here are some captures from my current saga of trying to get this FC4500 up and running. Hoping this may help someone else troubleshoot. I wish I had access to Powerlink so I could do my own troubleshooting.

FLARE MENU:

FLARE Menu


1) Download 5) Start FLARE
2) Re-init I/O 6) Core load
3) Install FLARE 7) Core search
4) Load FLARE 8) Core erase

0) Exit

Enter Option : 4

Initializing back end FIBRE...
Drive # Rev# CodeType SectorSize Status
-----------------------------------------------------------------
0x0 05.32.05 ALPINE 520 VALID
0x1 05.32.05 ALPINE 520 VALID
0x2 05.32.05 ALPINE 520 VALID

Enter disk selection (0 - 2,CR = EXIT) :

All three of the drives in slots 0, 1, and 2 have FLARE.


Chip information:



Enter Option : 18

--------======== Chip Revisions ========--------

Processor: MPC7400 Max Chip revision: 2.09 Processor Speed: 350Mhz
Alpine board rev: 3.0
LRU version: 1.9
FE Hawk rev: 0.2
BE Hawk rev: 0.2
PCI bridge chip rev: 0.6
FE Tach Lite rev: 1.2
BE Tach Lite rev: 1.2
PP Tach Lite rev: 1.2
Lan chip rev: 0.8




Here is the information regarding the SP:

08/11/2005 08:39:08
fcli> sp
SP A LOOP ID 0x0 (0.)
PROM Revision: 2.09 Microcode Revision: 05.32.05
Statistics Logging: DISABLED PEER SP: PRESENT
Disk Write Caching: DISABLED R3 Write Buffering: DISABLED
WRITE CACHE: DISABLED READ CACHE: ENABLED
RAID OPTIMIZED: Mixed LUNs SP TYPE: ALPINE
LUN REMAPPING: DISABLED
A: DP 00% TOTAL 0000 DIRTY 0000
B: TOTAL 0000
U: DP 00% TOTAL 0000
Requests Complete: 1
SPS A: --
SPS B: --

Press any key to continue....

slot : 0 1 2 3 4 5 6 7 8 9 | PSA PSB FAN
DPE-state: REM REM REM REM REM REM REM REM REM REM | OK OK OK OK
Unit/Group : U00 U00 U00 U00 U00 U00 U00 U00 U00 G63 |

08/11/2005 08:40:33
fcli>



Notice that all the drives show as being removed! The DPE currently has 8 drives installed.

Next:

08/11/2005 08:40:47
fcli> ls
Logical Unit Summary:

Raid Dflt. Unit
LUN Group Owner Type Capacity Cache State Frus
--- ----- ----- ------ -------- ----- ------- ------------------
0    0     SP-A  RAID-5 530.9 GB RW-   RDY*    0(DEAD) 1(DEAD) 2(DEAD) 3(DEAD) 4(DEAD) 5(DEAD) 6(DEAD) 7(DEAD) 8(DEAD)
1    1     SP-B  RAID-5 262.0 GB RW-   RDY     10(DEAD) 11(DEAD) 12(DEAD) 13(DEAD) 14(DEAD) 15(DEAD) 16(DEAD) 17(DEAD) 18(DEAD)

08/11/2005 08:41:05
fcli>


This is the current LUN information. I assume it came from the 73 GB drive I copied FLARE from: I copied FLARE to my 18 GB drive, and then to two other 18 GB drives. I cannot unbind these drives and kill the LUNs; I get an error.

Error message:
fcli> unbind
Usage: unbind -h
unbind unit_num <options>
unbind -rg # <options> unbind raid group
unbind -a <options> unbind all units
unbind -va_vlu va vlu <options> unbind VLU in VA
unbind -va_flu va flu <options> unbind FLU in VA

unit_num: logical unit to deconfigure. (0,TBD)
va: virtual array index
vlu: virtual lun index
flu: flare lun index
options: -o override-prompting

08/11/2005 08:41:26
fcli> unbind 0
Insufficient database disks available to unbind

08/11/2005 08:41:29
fcli>




Next: when I try to use the command to zero disks, I get this:

08/11/2005 08:41:50
fcli> zd 0_7

08/11/2005 08:41:53
fcli> RB error full 0x50410101
part category 0x5 code 0x5041 error num 0x41 status 0x1 type 0x1
Unknown Opcode 0x8A ERROR: Category 0x5 Num 0x41 Type 0x1

Sunburst Code: HOST_ZERO_DISK_DB_UPDATE_FAILED
Status recv'd opcode 0x8A status 0x50410101


08/11/2005 08:41:55
fcli>



Next:




08/11/2005 08:40:33
fcli> di -l
Fru Vendor Model Rev. Serial no. Capacity
==== ======== ================ ==== ============ =========

08/11/2005 08:40:47
fcli>



This is supposed to show me all the drives installed in this enclosure and the other enclosures, but no drives show up. I do have 8 drives installed.

Here is the information on a few disks, 0 through 3.

fcli> di 0
Performance Statistics for Disk Module 0_0

Raid Group (hex): 0x00
Number of units: 1
L2 Cache capacity: 0 bytes
State: REMOVED BUT BOUND

SCSI Address: ID 20 LUN 00
Unit Number (hex): 0x00
Unit Type: RAID-5 Group (Individual Access Array)
Stripe Element Size: 64 Sectors
Maximum Rebuild Time: 18 Hours
Sense Key (hex) = 06
Error Code (hex) = 29 Qualifier (hex) = 00
Entire Valid Request Sense Information Bytes (hex)
70 00 06 00 00 00 00 0A
00 00 00 00 29 00 03 00
00 00

Press any key to continue...(or 'q' to Quit)
Performance Statistics for Disk Module 0_1

Raid Group (hex): 0x00
Number of units: 1
L2 Cache capacity: 0 bytes
State: REMOVED BUT BOUND

SCSI Address: ID 20 LUN 00
Unit Number (hex): 0x00
Unit Type: RAID-5 Group (Individual Access Array)
Stripe Element Size: 64 Sectors
Maximum Rebuild Time: 18 Hours
Sense Key (hex) = 06
Error Code (hex) = 29 Qualifier (hex) = 00
Entire Valid Request Sense Information Bytes (hex)
70 00 06 00 00 00 00 0A
00 00 00 00 29 00 03 00
00 00

Press any key to continue...(or 'q' to Quit)


Performance Statistics for Disk Module 0_2

Raid Group (hex): 0x00
Number of units: 1
L2 Cache capacity: 0 bytes
State: REMOVED BUT BOUND

SCSI Address: ID 20 LUN 00
Unit Number (hex): 0x00
Unit Type: RAID-5 Group (Individual Access Array)
Stripe Element Size: 64 Sectors
Maximum Rebuild Time: 18 Hours
Sense Key (hex) = 06
Error Code (hex) = 29 Qualifier (hex) = 00
Entire Valid Request Sense Information Bytes (hex)
70 00 06 00 00 00 00 0A
00 00 00 00 29 00 03 00
00 00

Press any key to continue...(or 'q' to Quit)

Performance Statistics for Disk Module 0_3

Raid Group (hex): 0x00
Number of units: 1
L2 Cache capacity: 0 bytes
State: REMOVED BUT BOUND

SCSI Address: ID 20 LUN 00
Unit Number (hex): 0x00
Unit Type: RAID-5 Group (Individual Access Array)
Stripe Element Size: 64 Sectors
Maximum Rebuild Time: 18 Hours
Sense Key (hex) = 06
Error Code (hex) = 29 Qualifier (hex) = 00
Entire Valid Request Sense Information Bytes (hex)
70 00 06 00 00 00 00 0A
00 00 00 00 29 00 03 00
00 00

Press any key to continue...(or 'q' to Quit)


As you can see, each disk shows REMOVED BUT BOUND. Looking up the sense data, sense key 06 with code 29 appears to be the standard SCSI unit attention for "power on, reset, or bus device reset occurred", which would fit drives the array thinks were pulled.

Is this a firmware issue, a FLARE issue, or a drive issue?



I'm totally lost at this point.



Thanks, Dave


 
I've run into this before with someone else. There was a different command used to remove the raid groups... maybe sg? I'm not sure; do a help and find the command for removing the raid groups. Once the raid groups are gone, unbind the LUNs. Unbinding the LUNs doesn't delete the raid group. I didn't have time today to look into my Alpine (4500); I'll look Monday if you don't get a resolution.
 
Maultier,

I thought possibly the same thing. Here is all the configuration, and even my attempts to remove raid groups 0 and 1, the only two raid groups on here.



----------------------------

fcli> sg -h
setgroup/sg - Create or change the configuration of a RAID GROUP.
Usage: setgroup -h
setgroup raid_group_id <option> <disk-names>

setgroup raid_group_id <option>
<-l> Show raid group layout

setgroup raid_group_id <option> disk-names
<-exp> Start an expansion
<-defrag> Start a defragmentation
<-crg> Create a RAID Group
<-rrg> Remove a RAID Group
<-xl> Increase capacity on expansion
<-r> (0,1,2) Set expansion rate
<-er> (0,1) Remove/Set the explicit removal flag
<-o override-prompting>

08/11/2005 15:45:18
fcli>

fcli> sg 0
SP B LOOP ID 0x1 (1.)
Raid Group (hex): 0x00
State: RAID Group Valid without Explicit Removal
Number of LUNs: 1
LUNs: 0
Unit Type of LUNs: RAID-5 Group (Individual Access Array)
Number of Disks: 9
Disks: 0_0 0_1 0_2 0_3 0_4 0_5 0_6 0_7 0_8

Expansion Rate: 0

Free space: 0x17AB96FF sectors/fru, 1551254 MB total
Largest Contiguous Free space: 0x17AB96FF sectors/fru, 1551254 MB total

08/11/2005 15:45:31
fcli>



08/11/2005 15:45:31
fcli> sg 1
SP B LOOP ID 0x1 (1.)
Raid Group (hex): 0x01
State: RAID Group Valid without Explicit Removal
Number of LUNs: 1
LUNs: 1
Unit Type of LUNs: RAID-5 Group (Individual Access Array)
Number of Disks: 9
Disks: 1_0 1_1 1_2 1_3 1_4 1_5 1_6 1_7 1_8

Expansion Rate: 0

Free space: 0x1BDF3F7F sectors/fru, 1826623 MB total
Largest Contiguous Free space: 0x1BDF3F7F sectors/fru, 1826623 MB total

08/11/2005 15:45:43
fcli>




fcli> sg 1 -l
Layout of raid group 1:

LUN Size(mb) Logical Address Range Physical Addr. Range (per fru)
--- -------- --------------------- ------------------------------
PRI 286 FRU_PRIVATE SPACE 0x0000000 - 0x008EFFF
1 268240 0x0000000 - 0x20BE83FF 0x008F000 - 0x420C07F

08/11/2005 15:45:57
fcli>




08/11/2005 15:45:57
fcli> sg 0 -l
Layout of raid group 0:

LUN Size(mb) Logical Address Range Physical Addr. Range (per fru)
--- -------- --------------------- ------------------------------
PRI 286 FRU_PRIVATE SPACE 0x0000000 - 0x008EFFF
0 543609 0x0000000 - 0x425BC7FF 0x008F000 - 0x85468FF

08/11/2005 15:46:10
fcli>




fcli> sg 0 -rrg
Configuring RAID Group 0

08/11/2005 15:46:53
fcli> RB error full 0x50230101
part category 0x5 code 0x5023 error num 0x23 status 0x1 type 0x1
Raid Group configuration ERROR: Category 0x5 Num 0x23 Type 0x1

Sunburst Code: HOST_REMOVE_RG_ERROR_VALID_LUNS
Status recv'd opcode 0x8D status 0x50230101





08/11/2005 15:47:09
fcli> sg 1 -rrg
Configuring RAID Group 1

08/11/2005 15:47:12
fcli> RB error full 0x50230101
part category 0x5 code 0x5023 error num 0x23 status 0x1 type 0x1
Raid Group configuration ERROR: Category 0x5 Num 0x23 Type 0x1

Sunburst Code: HOST_REMOVE_RG_ERROR_VALID_LUNS
Status recv'd opcode 0x8D status 0x50230101
---------------------------------


Trying to remove the RAID groups gives me these errors, and I have no idea what they mean. It's almost as if it won't let me remove them because there's a valid LUN on each, but I cannot remove the LUNs either; that gives me the weird error I posted in the previous post.

Dave



 
OK... well, I have some time this week. I'll hook up to a 4500 tomorrow and figure out what I did the last time this came up. I think I just went through and unbound 0 through 9 even though they didn't show up on the list. It may have been the sg -rrg command I used... I just don't remember. I don't typically use fcli to manage the array, so I'll have to experiment a bit and let you know.
 
Today I tried the sg -rrg and that didn't work.

I got another 10 x 18 GB drives in the mail today. I put one of them in slot 9, did a di -l, and was able to see it as a valid hard drive, not bound.

I'm still getting weird errors when trying to break the LUNs or RAID groups.

Also, I installed the Navi Agent on the FC host box. The host isn't connected via fibre yet, but it does have connectivity to the SAN via the Ethernet port. How does the Navi Agent connect to the SAN?

Thanks. I hope I can get this all figured out, and I hope nothing is wrong with the SAN DPE.

I just need to kill the LUNs somehow.
 
OK... going through the fcli, here's what I got.
I ran into the same error you did when trying to bind DAE disks 1_5 through 1_9. If I bound 1_5 through 1_8, it would work. Binding 1_9 as an individual disk revealed that that disk had a raid group ID of 118. Once I removed that raid group, I was able to bind all the disks into the RAID 5 group I wanted.
Hope this helps you over the hurdle.
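For the record, the removal itself used the setgroup/sg flags from the help output you posted, something like this (add -o to skip the confirmation prompt):

fcli> sg 118 -rrg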
 
Maultier,


That doesn't make any sense to me. I do not have a DAE connected at this time; I'm working with the drives in the DPE. All the drives in the DPE were probably in a DAE at one time for this system. I copied FLARE from a 73 GB drive, so it copied all the LUN and RAID group information as well, which is why it shows two LUNs and two RAID groups. I am unable to remove the raid groups and unable to remove the LUNs; the errors are above.

Thanks, Dave
 
First off, it doesn't matter whether it's a DAE or DPE. I should have left that detail out; the unbind could just as easily have been on disks 0-9.

First, do an lustat -l.

Note which raid groups are there, and then unbind the groups. E.g., assuming a raid group of 0:

unbind -rg 0

After unbinding all the raid groups you can, do another lustat to see what you have.

Then try binding disks in groups of 5.

If a bind fails, drop one disk and try again.

Use the fcli help commands to figure things out. The debug -m 1 command will give you a bit more detail on some commands.

Have fun.
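Putting that together, a minimal cleanup session might look like this (just a sketch, assuming raid groups 0 and 1, with the -o override-prompting flag taken from the unbind usage Dave posted earlier):

fcli> lustat -l
fcli> unbind -rg 0 -o
fcli> unbind -rg 1 -o
fcli> lustat -l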
 