When you say "SAN", what are you referring to? It sounds to me like you're talking about the storage system, which is also connected to the SAN. This is an area of confusion for people just starting out: the SAN is the whole network, and nodes attach to that network. The nodes can be servers and storage systems (disk, tape). The actual transport mechanism is Fibre Channel, which the switches handle.
The WWN is fixed on the host bus adapter (the PCI card in the server). It cannot be changed (easily at least). It's like a MAC address on a regular NIC.
Usually, access to disks on the storage system (which is probably what you call the SAN) is regulated by what is commonly known as LUN masking. In HP terminology it's called Selective Storage Presentation (SSP). This is set on the storage system: logical volumes are assigned to host WWNs, which grants those hosts access to the LUN(s).
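If you want to double-check which WWN the rebuilt server is actually presenting (so you can compare it against the SSP / LUN-masking entries on the storage system), here's a quick Python sketch. It's only a sketch: it assumes wmic.exe is available (XP/2003 and later) and that the HBA driver exposes the standard Fibre Channel WMI classes.

```
import subprocess

# Sketch: list the Fibre Channel HBAs and their node WWNs as Windows sees them,
# so they can be compared with the WWN entries in the SSP / LUN-masking
# configuration on the storage system.
# Assumes wmic.exe exists and the HBA driver exposes the MSFC_* WMI classes.
cmd = [
    "wmic",
    r"/namespace:\\root\wmi",
    "path", "MSFC_FCAdapterHBAAttributes",
    "get", "Manufacturer,Model,DriverName,NodeWWN",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or result.stderr)
```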
Since this is done on the storage system, and the only thing that has changed is the server, I don't think it's an SSP issue. I also don't think it has anything to do with the zoning in the switches; zoning would not be affected by a server rebuild.
Since you say you can see the "SAN" in Compaq Array Manager, I'll assume you have an MSA1000 storage system; it's the only one that can be managed that way. In any case, I'd check which driver you're using for the host bus adapter (under SCSI adapters in Device Manager). Don't rely on the driver Windows picks by default; check the other (working) server to see what it uses.
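To compare the HBA driver between the rebuilt server and the working one, you can query Win32_SCSIController on both. Again only a Python sketch, assuming wmic.exe is present and you have admin rights on the remote box; "GOODNODE" is just a placeholder for the working server's name.

```
import subprocess

# Sketch: report which driver each SCSI/FC controller is using, locally and on
# the known-good node, so the two can be compared side by side.
# "GOODNODE" is a placeholder for the working server's name.
def list_controllers(node=None):
    cmd = ["wmic"]
    if node:
        cmd.append(f"/node:{node}")  # query the remote server over WMI
    cmd += ["path", "Win32_SCSIController", "get", "Name,DriverName"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout or result.stderr

print("Local (rebuilt) server:")
print(list_controllers())
print("Working server:")
print(list_controllers("GOODNODE"))
```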
Also, you mention this is a cluster. You might very well be in a situation where the remaining server has claimed the disks for itself; this is how clustering in Windows (and most other OSes) works, via a SCSI reservation. That is most likely the case, and if so you will have trouble seeing the disks from the newly rebuilt server. Have you tried adding the rebuilt server to the cluster as a new node?
Have you evicted the old server from the cluster? Obviously it's the same physical box, but Windows clustering doesn't know that.
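Here's a rough Python sketch of how I'd check the node membership and evict the stale record from the command line with cluster.exe (which ships with the cluster service). "MYCLUSTER" and "OLDNODE" are placeholders for your cluster and node names; re-adding the rebuilt server is normally done through the Add Nodes wizard in Cluster Administrator.

```
import subprocess

# Sketch: check node membership and evict the stale record for the rebuilt box
# (Windows clustering still thinks the old install is a member).
# "MYCLUSTER" and "OLDNODE" are placeholders for your cluster and node names.
def cluster(*args):
    result = subprocess.run(["cluster.exe", *args], capture_output=True, text=True)
    print(result.stdout or result.stderr)

cluster("/cluster:MYCLUSTER", "node", "/status")             # list nodes and status
cluster("/cluster:MYCLUSTER", "node", "OLDNODE", "/evict")   # drop the stale node
# After the evict, re-add the rebuilt server through the Add Nodes wizard in
# Cluster Administrator.
```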
In Windows Disk Manager, do you see disk volumes as "missing" or "unreadable", or anything like that? That would suggest the server is seeing something, but can't access it because the other server keeps the disks locked.
That is how it looks after a failover in the cluster, but since this is a brand-new build, you might not see the disks in Disk Manager at all, because this server has never "seen" them before.
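If you'd rather check from the command line than the Disk Manager GUI, a quick diskpart listing shows whether the node sees the disks at all. Python sketch; assumes diskpart.exe is available (Windows Server 2003 or later).

```
import subprocess, tempfile, os

# Sketch: list disks and volumes as the rebuilt node sees them, to tell whether
# the shared LUNs are visible at all (even if offline/unreadable because the
# surviving node holds the SCSI reservation).
# Assumes diskpart.exe is available.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("list disk\nlist volume\n")
    script_path = f.name
try:
    result = subprocess.run(["diskpart", "/s", script_path],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)
finally:
    os.remove(script_path)
```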
My course of action would be to:
a) Check the driver on the host bus adapter
b) Check the SSP on the storage system to be sure
c) Try to re-add the node to the cluster
Hope that helps
/charles