
RHEL 4 crashes after installation.

Status: Not open for further replies.

coronel (IS-IT--Management) · Dec 6, 2005
Hi folks, how are you today?
Ok, my situation is the following:
I have an Adaptec SATA II RAID 2820SA controller card with 7 HDDs attached to it; the HDDs have been configured as RAID 0.
The HDDs' model number is ST3750640AS (Seagate Barracuda 7200.10, 750 GB, 7200 rpm).
During the installation process, RHEL can see the RAID configuration without problems, and the array can be partitioned and formatted as normal. When the installation process finishes and the system reboots, I receive the following error message:

/bak1 contains a file system with errors, check forced.
/bak1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
Either the superblock or the partition table is likely to be corrupted.
An error occurred during the file system check.
Dropping you to a shell; the system will reboot when you leave the shell.

Now here is the funny thing: when I use Western Digital 500 GB HDDs (WD5000YS), I do not receive any kind of error.

Please, can you guys help me with this?!
Thanks in advance.
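
For reference, the manual check that error message asks for can be run from the repair shell roughly like this. This is only a sketch; /dev/sda1 as the partition backing /bak1 is an assumption, so use whatever device /etc/fstab actually lists for that mount point.

  # force a full check of the filesystem behind /bak1, auto-answering yes to repairs
  e2fsck -f -y /dev/sda1
  # leave the shell so the system can continue booting
  exit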

Hmmm....

RAID 0 is a striping scheme with no parity. That introduces the risk that a small error on any one disk can effectively disable the whole array. I would first consider whether RAID 5 is an option for you, to increase resilience against a sector or disk failure... you seem pretty savvy, so this is probably too obvious.

It's possible that there is a kernel/module difference between what the installer boots with and what was installed for your system. If you try booting in rescue mode from your install CD, you should be able to see whether the array can be recognized and used there. If that's the case, then something about the modules/driver/kernel on your installed system is inconsistent with what the installer CD uses.

"lsmod" is your friend.

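A quick way to compare: run lsmod in the installer's rescue environment (boot the CD with "linux rescue") and again on the installed system, and look for the controller driver in both. A rough sketch only; aacraid as the Adaptec driver name is my assumption.

  # list loaded modules and look for the Adaptec driver (assumed: aacraid)
  lsmod | grep -i aac
  # see what the kernel reported about the controller and disks at boot
  dmesg | grep -i -e aac -e sd
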
D.E.R. Management - IT Project Management Consulting
 
Umm... stabbing in the dark here, but I believe I read somewhere that it isn't generally a good idea to have all partitions (especially ones like /boot) located on RAID. Not so much because of possible loss of data (as thedaver mentioned), but because Linux might have problems addressing those sections of disk at bootup. Is there any way you can put the base installation on a non-RAID disk and just mount the directories with dynamic data on the RAID disks? For instance, you could have /boot, /, and swap on your main drive, and the other partitions like /var and /home on your RAID drives.
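
A rough /etc/fstab sketch of that layout. Device names and the /bak1 mount point are assumptions based on this thread, not a prescription.

  # system partitions on the plain IDE drive
  /dev/hda1   /boot   ext3   defaults   1 2
  /dev/hda2   /       ext3   defaults   1 1
  /dev/hda3   swap    swap   defaults   0 0
  # data lives on the hardware RAID array
  /dev/sda1   /bak1   ext3   defaults   1 2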
 
lazyrunner50.

I don't quite agree with your point about /boot on a RAID partition... however, that disagreement is based on treating hardware RAID as the good practice; I don't like software RAID much at all.

Technically, hardware RAID doesn't look any different to Linux than a single drive, AS LONG AS you do not need a special driver to interact with the disk/array before you can read /boot or the MBR from the drive.

This is where we probably agree, ...

If your kernel doesn't have the necessary support for your RAID device built in, you are probably going to experience some heartache booting from that drive.
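
On RHEL 4, one way to check which storage driver the installed system is configured to load, and to rebuild the initrd if you change it. This is a sketch; aacraid as the driver name for the Adaptec 2820SA is an assumption.

  # the controller driver should be named in /etc/modprobe.conf, e.g.
  #   alias scsi_hostadapter aacraid
  grep scsi_hostadapter /etc/modprobe.conf
  # after changing it, rebuild the initrd so the driver is loaded at boot
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)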

D.E.R. Management - IT Project Management Consulting
 
Fair enough. Yeah, I agree with you on the use of software RAID. There's really no point unless you set it up like hardware RAID (each partition on a separate physical disk). But that raises the question of why you wouldn't set up hardware RAID in the first place!

Actually, what I'd do is scrap the RAID 0 setup and switch to RAID 5 or RAID 10. While RAID 0 is good for disk access speed, a single drive failure (which with 7 hard drives is a distinct possibility) means losing all your data. By switching to "true RAID" you gain redundancy, and if you choose an appropriate level (RAID 5 and RAID 10 are good examples) you can gain some speed benefits as well. Additionally, you might want to look at setting up LVM; that way you could dynamically add new drives and resize partitions.
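
A minimal LVM sketch on top of the array. Device names, volume names, and sizes are made up for illustration; this assumes the array partition is /dev/sda1.

  pvcreate /dev/sda1                    # mark the array partition for LVM
  vgcreate vg_data /dev/sda1            # create a volume group on it
  lvcreate -L 500G -n lv_bak1 vg_data   # carve out a logical volume
  mkfs.ext3 /dev/vg_data/lv_bak1        # put a filesystem on it
  # later you can grow it, e.g. after adding another physical volume:
  #   vgextend vg_data /dev/sdb1
  #   lvextend -L +200G /dev/vg_data/lv_bak1
  #   (then resize the filesystem, e.g. with ext2online on RHEL 4)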
 
I forgot to mention: my Linux is installed on a different HDD, which is /dev/hda, and the completed RAID is /dev/sda; the RAID is only for storage.
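
In that case it may help to confirm what the installed system actually sees on /dev/sda before the boot-time check runs. A sketch; the grep patterns and aacraid as the driver name are assumptions.

  dmesg | grep -i -e aac -e sda    # did the kernel find the controller and the array?
  fdisk -l /dev/sda                # does the partition table look sane?
  grep bak1 /etc/fstab             # which device and fsck pass are set for /bak1?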
 