
Newer, Larger Hard Drives having More Issues? - Opinions/References?

Status
Not open for further replies.

kjv1611 (joined Jul 9, 2003, US):
I've got a handful of 1 TB hard drives, and a few of them either arrived DOA or have developed issues over time. I haven't taken the time to send them back for warranty repair/replacement, and I'd better do it soon before the warranty runs out! [shocked]

So, anyway, I've been looking at the newer 2TB hard drives from WD, Samsung, Seagate, etc. I think I'd like to get a couple of the Samsung F4 drives, or possibly the WD Green drives, but reading reviews from folks who have bought them makes me a little more hesitant at the moment.

It looks like a LOT of people are having drives delivered DOA of late for various reasons... and of those not DOA, many of them seem to die or have serious issues within weeks or months.

Is anybody else seeing the same thing? Any ideas/thoughts on the matter as to why it might be the case?

It makes me wonder - have we pushed the limit of spindle/mechanical based hard drives too far? Should we stick to smaller drives to avoid data loss... until SSDs can get larger and cheap enough to make the larger sizes a viable option?
 
Never mind if hard drives fail in the short run (infant mortality due to manufacturing defects or bad firmware). I'm more worried about drives that last only two years or 2.5 years. The problem is that you'll never know whether any particular model of drive has a short life ahead of it OR if the particular one you bought is going to have a short life.

It's sort of like cars - you don't know how a new model is going to perform in the long haul until quite a number of them have 100,000 miles on them and their reliability is reported and analyzed.

Bottom line: BACKUP saves the day and removes worry.
 
After reading that, and given what my first comment said, it would be SMART to wait until a drive has been for sale for a number of years (assuming you can get one before it goes off the market). That way, you would have some idea that the model wasn't prone to early death.

But people always want the latest, greatest, and biggest, and that puts you on the painful part of the learning curve. I never need much storage, so I can always be 3 years behind on drive size and technology. But you are still a potential victim of a bad batch of an otherwise very reliable hard drive brand/model (people smoking in the assembly room, sanding the walls before painting, or a bad firmware revision).

So, you just never know.
 
Well, I have two WD 1 TB drives in my NAS and a Seagate 750 GB in my system that actually failed in a RAID, but I just blew away the partition and did a fresh install. These drives have been running for over a year with no issues at the moment, knock on wood. I also just got a 1 TB Samsung to update the HTPC, because I needed something to catch the four streams from a Ceton InfiniTV 4 when it comes in, so I can dump the Comcast DVR. I suspect the Seagate failed in the RAID because it is pretty noisy, but it runs diags clean and has never given me an issue in this box.

As a side note, I find that a lot of SATA drives that seem to fail in a server environment, when pulled from the drive cage and hooked up to a PC directly (not through a hot-swap backplane), actually work without issue. Take that as you will, but I find that a lot of firmware/compatibility issues between the RAID controller, HDD, and backplane contribute to supposedly failed drives that then don't fail diags in a single-drive setup hooked up to a PC with SATA cables.
 
Wow - a hard drive is lasting over a year. Now we're getting somewhere.
 
Well, I've only had them that long, so I don't have any more data to supply on that. But working on other machines, no, I don't see any more failures with large drives than I did with smaller drives from several years ago. Is that a better statement? :)
 
Right - just saying we can't be excited until drives last 5 years without it being a miracle.
 
I guess I'll just wait till 2015, then, and pick up a couple of the 2TB drives... on eBay... used... [wink]
 
5 years!!!

I just put together a NAS with seven 2 TB Hitachi drives (RAID 5) for a firm; it's due for delivery this week. The company that is getting that NAS out-processes hardware after about three years, so knock on wood that the drives will hold up that long...
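For anyone tallying up that array: RAID 5 spends one drive's worth of capacity on parity, so the usable space of a seven-drive array works out as below. A quick sketch (the function name is mine, drive sizes as quoted above):

```python
def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID 5 stripes data across all drives but keeps one drive's
    worth of parity, so usable capacity is (n - 1) * drive size."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_tb

# The seven 2 TB Hitachi drives described above:
print(raid5_usable_tb(7, 2.0))  # → 12.0 TB usable, one drive lost to parity
```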

but I agree, it may be too early to truly tell the hows and whys for them...



Ben
"If it works don't fix it! If it doesn't use a sledgehammer..."
How to ask a question when posting to a professional forum.
Only ask questions with yes/no answers if you want "yes" or "no"
 
My experience has been that many drives I've seen in computers that are 5+ years old are still 100% fine. They are the 60 - 120GB drives of their time period. Of course, I have no idea whether these drives have been on 24/7 or used 3 hours per week. But I see them all the time and they're still percolating.

I wasn't kidding about 5 years - seems like a reasonable amount of time to me.

I doubt many companies are that faithful about replacing hardware after a certain period of time. I usually see the concept of "if it's still working, don't even think about it. If it breaks, get it fixed, but don't think about replacing the whole thing."
 
For comparison with current drives: I'm finally going to retire an IBM Netfinity server later this year. It was installed in Feb 2001 with 30 SCSI drives: two 17 GB in RAID 1, eleven 17 GB in RAID 5, and fourteen 70 GB in RAID 5.
I can't tell install dates from the ServeRAID manager, but I think we may have replaced 8 or 9 of the drives out of the original 30. There was a cluster of replacements a couple of years ago, demonstrating the concept of Mean Time To Failure (figuring 8,760 hours/year, that's a cluster of failures around the 60,000-hour point).
Our overall setup is migrating to VMware and SANs, and as I get copied on the maintenance changes, I'd say we have a drive replacement in the SAN about once a month, on average - but there are a bunch of spindles in the SAN, and they're all the newer technologies.
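As a rough sanity check on that arithmetic (dates taken from the post; this is a sketch, not data from the server itself): 60,000 hours of 24/7 operation from a Feb 2001 install does land the failure cluster "a couple of years ago" relative to this thread.

```python
from datetime import date

# Figures from the post above: 24/7 operation, failures clustering
# near the 60,000-hour mark on a server installed in Feb 2001.
HOURS_PER_YEAR = 8760
install = date(2001, 2, 1)           # assumed install date from the post
failure_cluster_hours = 60_000

years_to_cluster = failure_cluster_hours / HOURS_PER_YEAR
cluster_year = install.year + years_to_cluster

print(round(years_to_cluster, 1))    # → 6.8 years of continuous service
print(int(cluster_year))             # → 2007, i.e. "a couple of years ago"
```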

Fred Wagner

 