
Flare Install question on fc5600

Status
Not open for further replies.

POPKORN

Technical User
Jan 10, 2005
US
Hello guys,

I have an FC5600 which, in the process of moving the machine, apparently passed through some sort of magnetic field, and the data on all drives is corrupted. However, I have a friend who works for a company that provides support for this kind of product, and he provided me with a .bin file, which is the flare code for my PROM revision.

My question is the following.

How do I get the flare code onto the disk so that I can boot up the system? I think he mentioned something about dd, but I am not sure.
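In case it helps anyone searching later, this is roughly what a dd invocation for that would look like. The file names here are made up for illustration, and the demo writes to a plain file instead of a real device, since writing flare code to the wrong disk is destructive. In real use of= would be the array disk's raw device (something like /dev/rdsk/cXtYdZs0; the exact name is site-specific).

```shell
# Hypothetical sketch: restore a flare image to the start of a target.
# Here TARGET is a plain file standing in for the raw disk device so
# nothing is harmed if you run this as-is.
FLARE_IMG=flare.bin
TARGET=disk.img                              # stand-in for /dev/rdsk/cXtYdZs0

printf 'FLARE-IMAGE-BYTES' > "$FLARE_IMG"    # pretend this is the .bin file
printf '..............................' > "$TARGET"   # pretend disk contents

# bs=512 writes sector-sized blocks; conv=notrunc overwrites in place
# without truncating the target, which is what you want on a raw device.
dd if="$FLARE_IMG" of="$TARGET" bs=512 conv=notrunc

head -c 17 "$TARGET"    # the flare bytes now sit at the start of the target
```

Double-check the target device before running anything like this for real; dd will happily destroy whatever disk you point it at.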

Does anyone have any suggestions?

Thanks in advance
 
From what I can see in the configuration manual, RAID 3 can be mixed, but it isn't recommended. Per the manual:

We do not recommend using RAID 3 in the same storage-system chassis with RAID 5 or RAID 1/0.

and this statement:

Each RAID 3 group requires some dedicated SP memory (6 Mbytes recommended per group). This memory is allocated when you create the group and becomes unavailable for storage-system caching. For top performance, we suggest that you do not use RAID 3 groups with RAID 5, RAID 1/0, or RAID 0 groups, since SP resources (including memory) are best devoted to the RAID 3 groups. RAID 1 mirrored pairs and individual units require less SP attention and therefore work well with RAID 3 groups.
 
EMC only recommends the use of RAID 3 with Flare code 14 and above and only when used in a "Backup to Disk" configuration.
 
Thanks for the input, Comtec, but Popkorn is dealing with an FC5600. Release 14 applies to later-generation CLARiiONs.
 
Update.....


I was able to get all the disks online. It will take some time after the format to verify for bad sectors and such; some of these volumes are so big it will take days, like the 400G RAID 5 volume. That one for sure will take a long time to verify. Anyway, now that I have tested everything in W2K, it's time to let a real OS do its magic. I have mounted some of it on Solaris 9 already, and it seems to be working very well.

# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s7 16G 1.9G 14G 13% /
/proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
fd 0K 0K 0K 0% /dev/fd
swap 1.7G 40K 1.7G 1% /var/run
swap 1.7G 0K 1.7G 0% /tmp
/dev/dsk/c0t1d0s0 8.3G 89M 8.2G 2% /export/home
/dev/dsk/c2t0d0s2 49G 50M 48G 1% /raid0
/dev/dsk/c2t0d1s2 81G 64M 81G 1% /raid5-80
/dev/dsk/c2t0d3s2 147G 64M 145G 1% /raid5-150


I think the issues I was having with Navisphere also had to do with Windows, but I still have to configure Navisphere on this box, which I will try to do tomorrow. Here is some info on it; let me know if it looks right to you.

# netstat -i
Name Mtu Net/Dest Address Ipkts Ierrs Opkts Oerrs Collis Queue
lo0 8232 loopback localhost 10193 0 10193 0 0 0
hme0 1500 XXXXXX XXXX 891063 0 617530 0 0 0
lpfc0 65280 192.168.10.0 192.168.10.50 0 0 10 0 0 0



XXXXXX is substituted for the name, as I don't want my hostname in here.

# ifconfig lpfc0
lpfc0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 65280 index 3
inet 192.168.10.50 netmask ffffff00 broadcast 192.168.10.255
ether x:x:xx:xx:xx:xx

So I should be able to install everything as far as Navisphere goes on this machine, and it should work now that it has an IP on the FC controller, right?

Thanks

POPKORN



 
Now that I have tested everything in W2K, it's time to let a real OS do its magic.

heheheheheh (LOL) I really like this phrase :D

# ifconfig lpfc0

Do you have IP over Fibre Channel? Why do you need that?

Cheers.

 
I do have IP over fibre. The reason for that is that if I later wish to place the FC5600 on a switch, I can have other servers share the storage system. As of right now I don't need that function, as I plan to start using it over Samba. I would really need a very good reason to go the other route, as the machine running Samba has some serious power to it.

POPKORN

P.S. Everyone in IT except MCSEs knows what Windows is really about.

cheers m8
 
The reason for that is that if I later wish to place the FC5600 on a switch, I can have other servers share the storage system.

But you don't need IP over FC in order to share the storage (I'm guessing you are talking about a SAN); you just need the Fabric protocol (just change from FC-AL to FC-SW). Or are you talking about sharing the space via Samba through the Solaris server (I mean served storage, as with a NAS)?

As I remember, the FC5600 is FC storage, not NAS.
 
You are so right about that; it is FC and not NAS. GRRRRRR!!!!

Thanks for the clarification; I almost made a huge mistake. I already have it working in Samba over the network and everything is functional, but I really don't know what kind of performance I should get from this system. On the RAID 0, which has 3 drives, it takes 13 minutes to transfer 6 GB of data from a Windows box over to the CLARiiON by way of Samba. I am just curious whether it could be the Windows hard drive performance, or whether there is anything else I can do to improve performance. I would like to find out how I can test network performance on this setup.

I have tried socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192, but this made it even slower: from 13 minutes to 21 minutes. If I just use TCP_NODELAY by itself, which is the default, it's 13 minutes for the 6 GB transfer.
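For anyone else experimenting with this, those settings live in the [global] section of smb.conf. A sketch using the exact values tried above (note that on 100 Mbit links the Samba defaults often perform as well or better, as the slowdown here suggests, so treat these as experiments, not recommendations):

```ini
[global]
   ; TCP_NODELAY disables Nagle's algorithm; the 8 KB buffer sizes below
   ; are the values tried in the post, shown only as an illustration.
   socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
```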

Can someone please point me in the right direction as far as what I would need to do or read in order to test performance on the CLARiiON?

The same 6 GB transfer from the Windows box over to the RAID 5 takes about 20 minutes, which would make sense, since RAID 0 would be a lot faster.

On the other hand, the same 6 GB file transferred from the RAID 5 over to the RAID 0 on the same CLARiiON system took 25 to 26 minutes. How can that be possible? RAID 5 on read should be faster than the transfer from the Windows box over to the RAID 0.

I would like to determine whether performance is being affected by Samba. I mean, the server that Samba is running on would never be in question; here are the specs.


# /usr/platform/`uname -i`/sbin/prtdiag -v
System Configuration: Sun Microsystems sun4u Sun Ultra 80 UPA/PCI (4 X UltraSPARC-II 448MHz)
System clock frequency: 112 MHz
Memory size: 4096 Megabytes

========================= CPUs =========================

Run Ecache CPU CPU
Brd CPU Module MHz MB Impl. Mask
--- --- ------- ----- ------ ------ ----
0 0 0 448 4.0 US-II 2.0
0 1 1 448 4.0 US-II 2.0
0 2 2 448 4.0 US-II 2.0
0 3 3 448 4.0 US-II 2.0


========================= IO Cards =========================

Bus Freq
Brd Type MHz Slot Name Model
--- ---- ---- ---------- ---------------------------- --------------------
0 PCI 33 On-Board network-SUNW,hme
0 PCI 33 On-Board scsi-glm/disk (block) Symbios,53C875
0 PCI 33 On-Board scsi-glm/disk (block) Symbios,53C875
0 PCI 33 pcia slot 1 lpfc-pci10df,f700/sd (block) LP7000
0 UPA 112 30 AFB, Double Buffered SUNW,540-3623

No failures found in System
===========================

========================= HW Revisions =========================

ASIC Revisions:
---------------
PCI: pci Rev 4
PCI: pci Rev 4
Cheerio: ebus Rev 1

AFB Hardware Configuration:
-----------------------------------
Board rev: 0
FBC version: 0x101df06d
DAC: Brooktree 9070, version 1
3DRAM: Mitsubishi 130a, version 1

System PROM revisions:
----------------------
OBP 3.23.0 1999/06/30 13:53 POST 1.2.7 1999/05/24 17:33


So the machine is not questionable. My guess would be the Samba configuration, or the way I configured the arrays; maybe I should have used both SPs instead of just using B.


I tried to provide as much info as possible. Hell, maybe it's the cheap Emulex LP7000E card it's running on. I really don't know; I am just speculating. If someone could point me in the right direction or help me in any way, that would be nice.

Thanks in advance.

POPKORN

 
Let me understand your infrastructure. You have an FC CLARiiON array connected to the Sun server via one FC channel. You set up Samba in order to "share" space with a Windows server, and you use that space through the LAN, right?

       LAN
|---------------|
+-----+   +-----+
| SUN |   | Win |
+-----+   +-----+
   |
   | FC
+--------+
|Clariion|
+--------+

is this your logical/physical diagram?

OK. FC runs at 100 MB/s. Is your LAN 100 Mbps or 1 Gbps? I will guess a standard LAN running at 100 Mbps. Now, 100 Mbps = 12.5 MB/s. Reduce that by the TCP/IP protocol overhead, approx. 30%... hmmm... maybe the best LAN throughput is around 8 MB/s?
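A quick sanity calculation bears this out (taking 6 GB as 6144 MB and 13 minutes as 780 seconds):

```shell
# Observed rate of the 6 GB / 13 minute Samba transfer.
awk 'BEGIN { printf "observed: %.1f MB/s\n", 6144 / 780 }'
# observed: 7.9 MB/s

# 100 Mbps line rate in MB/s, then minus ~30% TCP/IP + SMB overhead.
awk 'BEGIN { raw = 100 / 8
             printf "line rate: %.1f MB/s, usable: ~%.2f MB/s\n", raw, raw * 0.7 }'
# line rate: 12.5 MB/s, usable: ~8.75 MB/s
```

The observed 7.9 MB/s sits right at the usable ceiling of a 100 Mbps link, so the LAN, not the array, is the bottleneck.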

If this is your configuration, your problem is samba running through the LAN!!

Did you make a CLARiiON-to-CLARiiON copy using the Sun server? You told me that you made a disk-to-disk copy using the Windows box; that actually reads the files over the LAN, puts them in the Windows cache, and writes them back again over the LAN. With Windows, you are using the LAN, not the FC.

On the Sun server you can use "iostat" and "sar -d" while you are making copies in order to see the I/O stats.

Cheers.
 
BTW! It is a good idea to distribute the load between the 2 SPs (note I said "distribute" and NOT "load balance"). Unfortunately, you can't load balance using the 2 SPs, since the ATF software only supports failover. PowerPath gives you failover and load balancing, but I'm not sure that PowerPath supports the FC5600.

Hmmm... but maybe Volume Manager (Veritas) could help: it has the DMP feature, which supports load balancing and failover (DMP = Dynamic MultiPathing; it is enabled by default in VxVM).

Cheers.
 
You are right; the problem is that I am running Samba on a 100 Mbit link. I understand that FC is guaranteed delivery of data, and I was not counting on the TCP/IP overhead. My guess is that I have to move the Sun server off the 100 Mbit LAN and place it on the 1 G LAN. That way more machines can access the server at more or less the same speed without having a significant impact on the network.

The problem was having it on the 100 Mbit network. Once I move this to the GigE network I should see better performance. Even though each client will still be limited to its own 100 Mbit connection, it will be a lot better than having this machine on the 100 Mbit.

There is no cheap way to do this on GigE ;-)

I guess I can get a GigE workgroup switch and place the server there, then put additional GigE Ethernet cards in the few machines that actually need the speed, and run one link to the Cisco switch where it goes out to all the other users.

What do you think?

                  LAN
              40+ stations
            |---------------|
                   |
 GIGE SWITCH       |
|------------------------------------|
   |       |       |      |      |
   |1GE    |1GE    |1GE   |1GE   |1GE
+-----+ +-----+ +----+ +----+ +-----+
| SUN | | Win | |SUN | |SUN | |Linux|
+-----+ +-----+ +----+ +----+ +-----+
SAMBA    IIS    HELIX  DEV   Personal
ORACLE   MSSQL  MEDIA  BOX   PartyBox
APACHE   MEDIA  ENCODERS
   |    ENCODERS
   |
   | FC
+--------+
|Clariion|
+--------+
120 x 18G


I think this would improve transfers back and forth on the media servers through Samba, and it would improve latency for users on the 100 Mbit, as it won't be as congested once the CLARiiON is off the 100 Mbit.

what do you think?

POPKORN
 
It sounds good. Another point of view is to have all servers in a Gbps private LAN and all users in another network running on a 100 Mbps public LAN, so you ensure good bandwidth to the servers and acceptable bandwidth to the users... anyway, FC will be faster than a 1 Gbps + 100 Mbps LAN.

 40+ users               servers
+----------+           +----------+
     | 100Mbps              | 1Gbps
     |                      |
+-----------+          +----------+
| 100Mbps Sw|          | 1Gbps Sw |
+-----------+          +----------+
     |100Mbps               |1Gbps
     +--------+    +--------+
              |    |
         +----------------+
         |SUN server+samba|
         +----------------+
              |    |  both 1GB/s
         +----------------+
         | CLARiiON FC5600|
         +----------------+

A link between the 100 Mbps and 1 Gbps switches would be a good idea, in order to provide a good door between users and servers without affecting the Sun server.

Hope this helps.
 
So if I understand you correctly, the Sun server with the attached FC should have two 1 Gbps network interface cards. One would be connected to a Gbps switch, and all servers would connect through there (meaning all servers need Gbps cards as well), and the second Ethernet card would be connected to a 1 Gbps module on a managed switch that goes out to 48 100 Mbit ports for the users?

If that's what you are saying, then yes, that's a hell of a plan, and I think it's a go on my side if I can find an affordable 1 Gbps managed 8-port or 12-port switch.

I already have the 48-port managed layer 3 switch. It's just a matter of finding a reliable 8- or 12-port Gigabit managed switch.

I was looking into Cisco, but GGGGGGGGGGG, they are wicked expensive. I am looking on eBay to see what I find. If you know where I can get lucky, let me know.

Much appreciated.







MAULTIER, I could never, ever, ever, ever thank you enough for all your advice and all the help you provided me. If you had not helped me, this would have been a lot harder for me to do. I am very grateful to everyone who got involved in this problem, ESPECIALLY you, MAULTIER. I can never thank you enough. If any of you guys who helped me need anything from EMC, CLARiiON, hardware or software, let me know. I work for a huge reseller; let me know if there is anything I can do for you.


Thank you guys

&

Thanks to Tek-Tips.com

This place is not a joke. I plan on returning the favor tenfold; as I got help, by all means I will help others.

I will become a tek-tips camper :)

POPKORN
 