The registry entries for the drives still need to exist. The method I find most successful is to take down both nodes, bring up the new node, and verify the drives in Disk Administrator (do not write new signatures; click No when prompted).
On a cluster node, the Clusdisk driver controls access to shared disks by issuing a SCSI Reserve command. If Node A is the only node of a single-node cluster, it will have reserved the shared drives. When Node B, which has not yet joined the cluster, is brought online, it will not be able to access the disks on the shared SCSI bus, so it never gets a chance to write the registry entries for those disks.
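The reserve-and-exclude behavior can be pictured with a toy model. This is purely illustrative (the class and method names are invented for the sketch, not anything Clusdisk exposes): once one initiator holds the reservation, access from any other initiator is rejected until the reservation is released.

```python
# Toy model of the SCSI Reserve semantics Clusdisk relies on.
# All names here are hypothetical; this just models the access rule.
class SharedDisk:
    def __init__(self):
        self.reserved_by = None  # no reservation held yet

    def reserve(self, node: str) -> bool:
        """Succeeds only if the disk is unreserved or already ours."""
        if self.reserved_by in (None, node):
            self.reserved_by = node
            return True
        return False  # reservation conflict

    def read(self, node: str) -> bool:
        """With a reservation held, only the owner gets access."""
        return self.reserved_by in (None, node)

disk = SharedDisk()
disk.reserve("NodeA")       # Node A, the running single-node cluster
print(disk.read("NodeB"))   # → False: Node B cannot touch the shared disk
```

This is why Node B's registry entries never get written: its I/O to the shared bus is rejected outright while Node A holds the reservation.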
The cluster hive contains entries for each cluster disk resource, not the information for the physical disks on which those resources depend. Only a reference to the physical disk, its signature, is recorded. That's why it's important not to write a new signature.
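The signature being referenced is the 4-byte NT disk signature stored in the MBR at offset 0x1B8, read as a little-endian DWORD. A minimal sketch of pulling it out of a raw sector (the sector contents below are fabricated for illustration):

```python
import struct

MBR_SIZE = 512
SIG_OFFSET = 0x1B8  # the 4-byte NT disk signature lives here in the MBR

def disk_signature(mbr: bytes) -> int:
    """Return the NT disk signature from a raw 512-byte MBR sector."""
    if len(mbr) < MBR_SIZE:
        raise ValueError("need the full 512-byte sector")
    # Little-endian DWORD; this is the value the cluster hive records
    # for the disk resource the physical disk backs.
    return struct.unpack_from("<I", mbr, SIG_OFFSET)[0]

# Build a fake MBR for illustration: zeros, a made-up signature,
# and the 0x55AA boot marker in the last two bytes.
mbr = bytearray(MBR_SIZE)
struct.pack_into("<I", mbr, SIG_OFFSET, 0x1234ABCD)
mbr[510:512] = b"\x55\xAA"

print(hex(disk_signature(bytes(mbr))))  # → 0x1234abcd
```

Because the hive stores only this number, anything that rewrites it severs the link between the resource and its disk.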
The entries for the physical disks are created when the server first sees the disk. After presenting a new SAN LUN, this typically requires a rescan and may even require a reboot, depending on the HBA version and driver. That's why I find it useful to bring up the new node while the shared disks are not reserved. If you're rebooting because your HBA driver requires it, you've already done the first part.
While you're there, you might as well verify that all the disks are presented properly and visible from the OS. Also make sure the drive numbers and letters match between nodes. You can do both by opening Disk Administrator.
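Comparing the two nodes' views is just a matter of lining up disk numbers against drive letters. A small sketch, with entirely made-up mappings (on a real node you would transcribe these from Disk Administrator):

```python
# Hypothetical per-node views of the shared disks: disk number -> letter.
node_a = {1: "Q:", 2: "S:", 3: "T:"}
node_b = {1: "Q:", 2: "T:", 3: "S:"}

# Collect every disk whose letter disagrees between the two nodes.
mismatches = {
    disk: (node_a[disk], node_b.get(disk))
    for disk in node_a
    if node_b.get(disk) != node_a[disk]
}
print(mismatches)  # → {2: ('S:', 'T:'), 3: ('T:', 'S:')}
```

Any non-empty result is worth fixing before the new node joins, so that the same disk carries the same letter everywhere.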
If you do open Disk Administrator to verify the disks, do not write a new signature. The signature is stored on the other node and in the cluster hive. If the signature changes, the cluster disk resource will fail to come online and you'll see event ID 1034. It's not the end of the world, but it will involve some rework (changing the signature) to get things working again.
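The failure mode is a strict equality check: the resource identifies its disk only by signature, so a hive value that no longer matches the MBR means no disk is found. A sketch of that logic, with invented values on both sides:

```python
# Illustrative values only: what the cluster hive expects for the
# Physical Disk resource versus what the MBR now actually carries
# after an accidental "write new signature".
expected_from_hive = 0x1234ABCD
found_on_disk = 0x9F00E210

def resource_can_come_online(expected: int, actual: int) -> bool:
    # The disk resource matches disks strictly by signature; any
    # mismatch leaves the resource with no disk to bring online.
    return expected == actual

print(resource_can_come_online(expected_from_hive, found_on_disk))  # → False
```

Restoring agreement, by putting the original signature back on the disk, is the rework the event ID 1034 forces on you.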