Clariion FC5600C Questions.

--

After so many years in this industry, I hate being a newbie at anything, but I picked up an FC5600 as a storage shelf, not realizing that it was a whole sight more than just a shelf. What I needed was a dumb shelf with 73G drives that I could simply attach to my Emulex LP8000/Linux box and go. No such luck... It turns out to have a pair of RAID DPE boards, with no docs or support software. The drives themselves are formatted with 520-byte sectors, so they are not much use in a straight loop. So... who out there has documentation and enough software to get me set up?
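(Side note, in case I end up going back to the dumb-shelf plan after all: my understanding is that sg_format from the sg3_utils package can low-level format drives like these back to 512-byte sectors so a plain loop can use them. I have not tried it on these particular drives, and the device name below is only an example:)

Code:
# reformat one drive to 512-byte sectors (slow, and destroys everything on the drive)
sg_format --format --size=512 /dev/sg3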

Some info on the unit: it is a Data General-branded Clariion and was pretty roughly handled. The lock handles on the LCC cards are broken off, and one of the cards is unremovable. The front fan tray was mashed so that the power connectors won't make contact without putting something through the grate and forcing the connector to seat. There were no cables, terminators, or anything else... just the unit, loaded with drives. Both DPEs come up with a 0x0_7F code and hang.

Essentially, I bought the thing for the drives and got a good price, but if I can't use it as an array, it's pretty useless and I should go get my cash back and look for another. Can anyone get me started, or should I just go back to the well?

Reply-to: netwraith@pcrd.net
thenetwraith (There is a picture here, but, you just can't see it!)
 
--

Forgot to add a few things.

One: since I don't have docs, I don't know if or how this thing is terminated. There are no visible terminators with it.

Two: we may have a bit of fun getting the host OS to be much help here. It's a CentOS 3.7 system on a Dell 7150 quad-Itanium box. CentOS is the only decent OS I have managed to get stable on the machine; these are not the McKinley CPUs, so some of the other OS options are not possible. If something else is required, I have an Intel SE7501WV2/ISP1300 system I can use as a mule until the array is ready for the Dell.



Reply-to: netwraith@pcrd.net
thenetwraith (There is a picture here, but, you just can't see it!)
 
The storage processors (you refer to them as DPEs) need code to operate. The code is normally loaded from one of the first three drives. The drives may not have the FLARE code on them at all, or it could just be that the unit has more damage than you think. I'd not waste any time with this unit.
 
--

Thanks for the reply. Yeah, I sort of knew that, so I pulled all the drives and fed them into slot 0 one at a time until one of them started loading FLARE code. I then put drives 2 and 3 in, let the code get copied, and then swapped out the slot 0 drive with one of the appropriate size. So I am beyond the FLARE hurdle. The Linux kernel can see the DGC unit, but no size is reported. Maybe the unit is OK... what do you think?

Here is the kernel report:

Code:
Emulex LightPulse FC SCSI 7.3.3
Copyright(c) 2003-2005 Emulex. All rights reserved.
PCI: Found IRQ 54 for device 04:01.0
PCI: 04:01.0 PCI cache line size set incorrectly (64 bytes) by BIOS/FW.
PCI: 04:01.0 PCI cache line size corrected to 128.
lpfc0:1303:LKe:Link Up Event x1 received Data: x1 x1 x0 x0
lpfc0:1305:LKe:Link Down Event x2 received Data: x2 x20 x4008
lpfc0:1303:LKe:Link Up Event x3 received Data: x3 x1 x0 x4
scsi2 : Emulex LP8000 1Gb PCI Fibre Channel Adapter on PCI bus 04 device 08 irq 54
blk: queue e00000007f5c7bb0, I/O limit 17592186044415Mb (mask 0xffffffffffffffff)
Vendor: DGC Model: Rev: 0511
Type: Direct-Access ANSI SCSI revision: 04
blk: queue e00000007f5c77b0, I/O limit 17592186044415Mb (mask 0xffffffffffffffff)
Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0
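(In case it helps anyone reading along: these are the kinds of checks that should show whether the kernel actually has a capacity for that sdb. sg_readcap is from sg3_utils, blockdev is from util-linux, and the device name just happens to be where the array landed on my box.)

Code:
# ask the device itself for its capacity (likely 0 or an error until a LUN is bound and owned)
sg_readcap /dev/sdb
# what the block layer thinks the size is, in 512-byte sectors
blockdev --getsize /dev/sdb
# the partition-table view; an unbound LUN usually shows no usable geometry
fdisk -l /dev/sdb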




Reply-to: netwraith@pcrd.net
thenetwraith (There is a picture here, but, you just can't see it!)
 
--

OH... forgot to mention... It does respond to FCLI mode...

I am in the process of motoring around in there, but, it is pretty messy... Maybe I can find the "DEFAULT" button and start over..



Reply-to: netwraith@pcrd.net
thenetwraith (There is a picture here, but, you just can't see it!)
 
As much as I am unaccustomed to whining (I really hate it in other people), I am not having any fun here. Perhaps there is too much old PDP/VAX/etc. arcana in my head to figure out what I am doing wrong, but I am just not getting this. Here is what the fcli> prompt has to say about my issue:

Code:
> bind r5 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 -c read -u 1
RB error full 0x50040101
part category 0x5 code 0x5004 error num 0x4 status 0x1 type 0x1
Bind ERROR: Bind Array: Category 0x5 Num 0x4 Type 0x1

Sunburst Code: HOST_BIND_BAD_FRU_CONFIGURATION
Status recv'd opcode 0x81 status 0x50040101
HI_BIND

07/25/2006 20:07:17
fcli> di -l
Fru   Vendor    Model             Rev.  Serial no.    Capacity
====  ========  ================  ====  ============  =========
  0.  SEAGATE   ST173404 CLAR72   3A90  3CE09G01      0x854709A
  1.  SEAGATE   ST173404 CLAR72   3A90  3CE0KZ27      0x854709A
  2.  SEAGATE   ST173404 CLAR72   3A90  3CE0KT68      0x854709A
  3.  SEAGATE   ST173404 CLAR72   3A90  3CE0KLVW      0x854709A
  4.  SEAGATE   ST173404 CLAR72   3A90  3CE0JTZK      0x854709A
  5.  SEAGATE   ST173404 CLAR72   3A90  3CE0KX7M      0x854709A
  6.  SEAGATE   ST173404 CLAR72   3A90  3CE0L39S      0x854709A
  7.  SEAGATE   ST173404 CLAR72   3A90  3CE0KZQ5      0x854709A
  8.  SEAGATE   ST173404 CLAR72   3A90  3CE0TSMW      0x854709A
  9.  SEAGATE   ST173404 CLAR72   DE84  3CE0JG59      0x854709A

07/25/2006 19:06:53
fcli> sethost
Addressing model:                LUN
Target initiated negotiation:    DISABLED
Substitute busy for qfull:       DISABLED
Mode page 8:                     DISABLED
Recovered error log reporting:   DISABLED
Allow non-mirrored cache:        DISABLED
Auto trespass:                   DISABLED
Auto format:                     DISABLED
Raid Optimization:               Mixed LUNs
Raid-3 Write Buffering:          DISABLED
Loop Failover:                   DISABLED
Private Loop Enable:             DISABLED
Discover Mode:                   Auto
Discovery Needed:                NO
Single-Loop Failed:              NO
Fibre-Loop Failed:               NO
The scsi3 option:                ENABLED

Reply-to: netwraith@pcrd.net
thenetwraith (There is a picture here, but, you just can't see it!)
 
--

I found that problem. Using unbind -a (all) was only doing the unbind on the first group/LUN; I had to remove the second and third groups manually. My Dell still can't see the new LUNs, but the DPE is building the RAID5s, and I suspect that, unlike PERCs and DACs, these arrays are not available until they are fully built. Don't know how long that's going to take with ten 73G drives, but patience can be a virtue... (grumble).

I just hate waiting hours to discover whether I was right or missed something else...
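(Once the binds finish, the plan is to kick the 2.4 kernel into re-probing the LUNs rather than rebooting the Dell. This is just the stock /proc/scsi/scsi trick, nothing Clariion-specific; the host/channel/id numbers match the "scsi2, channel 0, id 0" line from the earlier kernel report, and the LUN numbers are only examples:)

Code:
# ask the kernel to probe LUNs 0 and 1 on host 2 (the LP8000), channel 0, id 0
echo "scsi add-single-device 2 0 0 0" > /proc/scsi/scsi
echo "scsi add-single-device 2 0 0 1" > /proc/scsi/scsi
# then see what it found
cat /proc/scsi/scsi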

Reply-to: netwraith@pcrd.net
thenetwraith (There is a picture here, but, you just can't see it!)
 
It looks as though you've been reading some of the other posts and are getting things under control. Thanks for the running commentary. You aren't wasting time with the bind; once that's completed, the Clariion is essentially ready to roll. The only other time I've run across a LUN sizing issue, it had to do with the Clariion type setting. I believe it is normally set to 3, but for an IBM machine I had to change it to 2, I think. It was a lot of years ago, so memory may not be serving me well. If the type isn't the issue, then it'll be an OS issue. Since I'm not that savvy on Linux, I'll leave that to others... perhaps in a Linux forum?
 
--

Well, that part works. Still have a couple of issues to discuss, but things are certainly better than they were.

I could not get the 8+1 RAID5 recognized by my Linux machine. I am not sure why, but I don't think it's Linux that's at fault. (A customer of mine has a Clariion Cx200?? with Enterprise Linux and runs multi-terabyte images; it's pretty much the same driver as CentOS uses, CentOS being built from the Red Hat EL sources.) So, as a result, I am using two 4+1 RAID5s at about 1/4 TB each. After I move some data around and rebuild an older array, I will try the 8+1 again.
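(Back-of-the-envelope on those sizes, assuming the usual ~73 GB for these CLAR72 drives: a 4+1 RAID5 leaves 4 x 73 = ~292 GB usable, which is the "about 1/4 TB" above, and the 8+1 should come out to 8 x 73 = ~584 GB once I can get it recognized.)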

I am not happy with the transfer rate. This is a bit more complicated than I wanted: the Clariion does not have an SPS unit, so it disables write caching, and there does not seem to be an override switch. I would like to override it, since the unit is connected to its own APC Matrix 3000 and that should keep it from dropping cache at inopportune times. I did manage to find a small SPS emulator package; I will need to run it on some computer that can converse with the DPEs through the SPS ports and keep track of the UPS status. I will likely end up connecting these ports through a XYPLEX terminal server and doing the monitoring work on the foundation server (the main FCAL-attached system). I am hoping that will bring up the speed of the unit. I am really shocked when I type a df command and it just sits there for 30 or 40 seconds while the array is doing uncached writes. I guess I am just used to parallel-connected disks...
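(To put actual numbers on the slowness rather than just eyeballing df, the plan is to time a raw sequential write against the mounted LUN before and after the SPS-emulator experiment. Nothing fancy, just dd plus a sync so the page cache doesn't flatter the result; the mount point below is only an example:)

Code:
# write 1 GB and include the flush in the timing, then clean up
time sh -c 'dd if=/dev/zero of=/mnt/clariion/ddtest bs=1024k count=1024 && sync'
rm -f /mnt/clariion/ddtest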

Thanks for the replies and efforts to this point. I am now having a little fun..

Reply-to: netwraith@pcrd.net
thenetwraith (There is a picture here, but, you just can't see it!)
 
Get the SPS attached and the performance will pick up 15-20%. They are available reconditioned, with the cables, for about $200.00. Overriding it by feeding status to the SP's SPS connection may be more trouble than it's worth.

If you have it RAIDed up correctly, use the SP command at the FCLI prompt. The drives should say BND. If they say RDY, the RAID group is un-owned, and that may be your issue with not seeing the LUN.
 