
Future of Unix because of Linux

kHz (MIS)
Dec 6, 2004

Unix has been around for over 30 years and runs more mission-critical and high-availability servers than any other OS (though some will argue for mainframes).

There are many Unix variants, but the most successful commercial ones are AIX (IBM), Solaris (Sun), and HP-UX (HP). There are also successful open source variants, namely FreeBSD and Linux.

Linux was first developed in the early '90s, and it has taken a decade for it to reach the data center.

Why are the tentacles of Linux becoming so far-reaching? I would like to see hard numbers, because people mostly tout that Linux is free and therefore saves on the bottom line. Is that argument true? Most large corporations that run Red Hat ES or SuSE (SLES) pay a lot to Red Hat and/or Novell for software support. Then the company has to pay for hardware maintenance on its Dell or HP x86-based servers. Does this really save a company money? If you purchase hardware from IBM or Sun, you don't have to pay for the OS, and I am sure a combined Red Hat/Dell or SLES/HP contract is not significantly less than the IBM or Sun HW/OS support contracts.

Another thing I despair over is what is happening to Unix. I really like working on very large, scalable, parallel machines like the old IBM SP2 complexes, on the old IBM pSeries p670/p690 servers with their LPARs, and on Sun E6900s and even the midrange enterprise Sun E2900s. But they all seem to be going away, replaced by Linux on x86 hardware.

I think Linux is fine for some applications in a business, but I don't think it is the only solution. I work for a very large corporation where Solaris is on its way out, replaced by Linux; HP-UX will no longer be purchased and is giving way to Linux; and AIX is running databases and will see some growth, but most future growth is going to be Linux.

This is not a bashing of Linux, and I won't get into a this-Unix-vs-that-Unix tit-for-tat. What I want to know is: why the push for Linux in businesses? As stated earlier, I don't believe the savings over IBM or Sun are significant.

And AIX is very stable and durable. It has taken on more of IBM's mainframe technology and will incorporate still more of it in AIX v6 when it is released. Linux doesn't have behind it what IBM, Sun, and HP have put into their versions of Unix over the last 15-20 years.

Plus, x86-based hardware isn't anything like the hardware of a p690 or a Sun E6900. I don't believe it has the redundancy or HA qualities of the high-end servers from HP, Sun, and IBM.

I don't think Unix is going anywhere in the next 20 years, because Windows and Unix make up the great majority of installed OSes. And even if a new OS debuted, it took Linux 10 years to begin getting into data centers, so a newcomer would take that long to make inroads, and that would be after lengthy development. Microsoft keeps delaying the release of Vista, and that isn't even a completely new OS. So I think Unix is safe for 20 years (or more).

But what am I going to be relegated to? Linux on cheap Intel hardware? I also don't really like the fact that everyone out there sells themselves as knowing Unix because they run Linux at home on a cobbled-together PC. I have put in over ten years learning the intricacies of AIX on RS/6000 and pSeries hardware and of Solaris on Sun Fire hardware, and I find it hard to accept someone who has toyed with Linux on a home PC selling themselves as a Unix professional.
 
@daFranze: OpenOffice isn't part of the operating system, and it's available for Windows too, AFAIK.

@all: You got a bit off-topic long ago.
Please open a thread with an appropriate topic to continue your discussion.

BTW: At an insurance company I worked for, the clients were running Windows, the Informix database was running on BSD, and the file and mail servers on Linux.
That was long ago; the first news of Informix running on Linux appeared a few years later.

I guess there is some movement from Unix to Linux.
Why did Oracle start supporting Linux?
Why does Sun's Java, which mainly runs on servers, support Linux?

Standard Intel hardware with replication can often be cheaper than otherwise more reliable hardware.

seeking a job as java-programmer in Berlin:
 
Intel hardware is cheaper if you want the low-end variety. But with Linux the sweet spot for performance is 2-4 CPUs, even though it can scale to 32 CPUs. Therefore, if an app needs more processing power, I don't believe Linux is the way to go. And I believe 32-bit Linux has some kind of workaround that allows access to more than 4GB of memory, but you pay for it in performance.
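The workaround I have in mind is PAE (the kernel's highmem support). As a rough sanity check on a 32-bit box, assuming a normal /proc layout, something like this tells you whether the CPU even offers it and how much RAM the kernel sees:

```bash
# Does the CPU advertise PAE (what 32-bit Linux needs to address more than 4GB)?
grep -q '\bpae\b' /proc/cpuinfo && echo "PAE supported" || echo "no PAE"

# How much memory does the running kernel actually see?
grep MemTotal /proc/meminfo
```

Even with PAE, each process is still stuck with a 32-bit address space and the kernel pays extra bookkeeping for the high memory, which is where the performance hit comes from.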

Best OS for the job; but I still don't understand the huge push to Linux. I still don't truly believe the cost is that much less when you figure in support contracts and licensing for enterprise versions. Maybe somebody has numbers on that.
 
I've always said it's the carpenter, not the toolbox. I can buy a Rolls-Royce and drive it into a tree. I would be interested to know how Linux isolates application processes from one another; can a poorly written app bring down the whole server by consuming all the resources? Windows Server 2003 has isolated application pools to prevent this from happening, and that has saved us more than once from bad code.
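To partly answer my own question: the classic per-process answer on a Linux/Unix box is resource limits (ulimit, i.e. setrlimit). A minimal sketch, with the caps and the app name purely illustrative:

```bash
# Cap what a single (possibly badly written) app can consume before launching it.
# The limits apply to this shell and to everything it starts.
ulimit -v 524288      # max virtual memory, in KB (~512MB here)
ulimit -t 60          # max CPU time, in seconds
ulimit -n 256         # max open file descriptors
./possibly-bad-app    # hypothetical application
```

It's coarse compared to 2003's application pools, but it does mean one runaway process can't eat the whole box; whether the app behaves gracefully when it hits a limit is another question.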

If more than 1 goose are geese, why aren't more than 1 moose meese??
 
Note: I have never worked on a mainframe.

That aside, I have never worked on a system that couldn't be brought to its knees by poor code.

Incredibly useless set of facts:
Our company has only two Linux servers, as we are 99% a Windows shop. One runs a ticket-tracking system and a few monitoring apps; the other is just an FTP server that receives EDI transactions from one of our biggest customers and serves as the back-end data store for our sales force software (it requires 14-hour perfect uptime every weekday or we get chargebacks, though we've never received one, so I couldn't tell you how much they are). The last time they went down was during a planned corporate power outage, when we purposely disconnected the generator so we could rearrange the server room. Most of our Windows servers have rebooted since then, either due to critical updates to the OS or to the systems that run on them.

We have one Windows 2003 system that is so good at managing memory and separate pools that a legacy app we are still trying to get rid of manages to lock up IIS within an average of 8-10 hours after startup. Right now the solution is a scheduled job that restarts IIS every 6 hours.

Like I said, we are a Windows house. When systems were standardized we chose to stick with Windows because of the investment we already had in that area. I code/script in C# and VBScript.
When I need to convert 3000 images from CMYK JPEGs to RGB JPEGs, my preference is to write a 30-second Bash script on my Linux machine (something like the sketch below).
When I need to update an unknown number of records in the costing database (MS SQL), I toss it into VBScript or SQL Query Analyzer.
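For the curious, that CMYK-to-RGB job is roughly the following, assuming ImageMagick is installed and the JPEGs sit in the current directory:

```bash
#!/bin/bash
# Convert every CMYK JPEG in the current directory to sRGB,
# writing the results into ./rgb/ so the originals are left untouched.
mkdir -p rgb
for f in *.jpg; do
    convert "$f" -colorspace sRGB "rgb/$f"
done
```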

Tool selection
Choosing one's tools doesn't mean you pick a single toolset and then bang every nail in sight with it. It means you first analyze the problem, then select the tools to do the work. Granted, today's desktop and workstation OSes have a great many tools and you can generally get by without ever leaving the one you chose; it just means a more limited toolset.

"Just" webservers
And an important detail to remember: there are a lot of very large businesses that aren't brick-and-mortar shops, whose webserver(s) effectively are the company. Just because it is common to bash web developers and "blogs" doesn't mean there aren't extremely critical applications (from those companies' point of view) running behind a web interface.

 
kHz said:
But with Linux the sweet spot for performance is 2-4 CPUs, even though it can scale to 32 CPUs.

Not sure I agree with that ... we've got a few Intel x86 boxes with 8 CPUs, and they run like the wind. Much quicker than the 32-CPU Solaris/SPARC T2000 we've got, and about half the price (both machines were purchased within a few months of each other). Plus the Solaris/SPARC box has only 1 floating-point unit. How crazy is that?

--------------------------------------------------
Free Java/J2EE Database Connection Pooling Software
 
"requires 14-hour perfect uptime every weekday"

Is that all? God, you're lucky. We're allowed, with some partners, about 4 hours of downtime per YEAR, unless we give 2 weeks' notice and the work is performed on a Sunday evening.

"Most of our windows servers have rebooted since then, either due to critical updates to the OS or systems that run on them"

Don't you patch/update your Linux stuff? Many of our Windows servers haven't had patch updates for several months. You don't need to apply every patch to every server; you should look at what is released and then decide whether it's required. Hell, we're running an NT4 box that has never had a patch, never had a virus, and has never been hacked. It gets a reboot about once every 6 months.

OSes are generally reliable, be they Windows, Linux, Unix, Solaris, or Mac OS. It's the crap we stick on them that brings them down.

Stu..


Only the truly stupid believe they know everything.
Stu.. 2004
 
"Don't you patch/update your Linux stuff?"
Unlike Windows, Linux only requires a reboot when you update the kernel, and in production environments that is a rare occurrence, perhaps once every year or two.
It's not uncommon for the hardware to fail more often than you need to update the kernel.
The applications may need to be patched every few weeks if they are open to the internet and readily attacked; that just doesn't require a reboot.
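As a rough way to tell whether a box is actually due for one of those rare kernel reboots, assuming an RPM-based distro like Red Hat (the package query would differ on Debian-style systems):

```bash
# Compare the running kernel against the newest installed kernel package.
running=$(uname -r)
newest=$(rpm -q kernel --qf '%{VERSION}-%{RELEASE}\n' | sort -V | tail -1)
case "$running" in
    "$newest"*) echo "already running the newest installed kernel ($running)" ;;
    *)          echo "newer kernel installed ($newest); a reboot is needed to use it" ;;
esac
```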
 
Fair enough.

At the end of the day, I think you need to kill that legacy app and bury it deep underground....

Only the truly stupid believe they know everything.
Stu.. 2004
 
At the end of the day, I think you need to kill that legacy app and bury it deep underground....
Yep, getting there.

Although legacy apps bring up another good point. I did contract work for a company that was still running a VMS server a couple of years ago. They were just starting to switch to Windows 2000 Server as a replacement. That application and the 15-20 that talked to it were allowed a certain number of hours of downtime a year before the EPA started laying on the penalties. I think SQL Slammer (the sa password flaw? Sorry, not a systems admin) cost us tens of thousands just in fines. Apparently the EPA doesn't take "Microsoft is down too" as an excuse.
A lot of production and manufacturing apps seem to be 3-5 years behind mainstream applications, and they need near-perfect uptime in most situations. Oddly enough, the biggest players choose Windows these days. Even OPC (OLE for Process Control), the "open" protocol of the manufacturing world, is designed to use DCOM, though some people have faked it for other systems.

Banks aren't the only places that need incredible uptime; a lot of manufacturing plants and power plants also require near-perfect uptime for certain systems. And I would not be at all surprised if there were a lot more VMS servers still sitting out there.

 
My Linux box serves up login portals for 5 different hotels... it had *better* stay up, or my phone starts ringing off the hook.

It's a rock-solid machine, albeit on older hardware (an old Dell server with dual 300MHz processors and 6x9GB hard drives in a RAID 5 configuration).

I love that box. It just keeps tickin' along.

If I only had 14 hours of uptime a day, I'd be replaced. :S



Just my 2¢

"In order to start solving a problem, one must first identify its owner." --Me
--Greg
 
Tarwn, understood, but there is a big difference between a server running static web pages or a blog and one running critical, database-intensive apps that a company's bottom line depends on. My point is that when one is looking at installed numbers to make decisions, one should look at the latter type and not the former. Let us not compare Mack trucks with bicycles; there are many more bicycles, but what does that prove?

If more than 1 goose are geese, why aren't more than 1 moose meese??
 
We 'had' to switch our e-commerce site to Red Hat from a Win2003 cluster that was running swimmingly, for political reasons. Well, after 6 months of pure hell it still can't handle the load; it degrades so badly we reboot every 12 hours. I was never a 'Microserf', but I very much miss things working the way they are supposed to, and getting proper support and upgrades. Now we are looking for an in-house kernel programmer to try to stabilize it, new app dev has gone to zero... what did we save by doing this?
 
eyeswideclosed: I agree, I'd love to see some statistics that don't count webservers running non-business-critical apps. I agree that there are an awful lot of servers out there that aren't doing much of anything important. Lack of statistics shouldn't be considered supportive of a viewpoint, though.

gbaughma & StuReeves: That 14 hours is a time period every day (6AM EST - 8PM EST, M-F), not a total number of hours or total uptime. Technically that server has near-perfect uptime (except a motherboard that thankfully fried in the middle of the night a year or two ago and a planned power outage). However, since I am only required to have near-perfect uptime during that 14 hour stretch every day, if I ever had to reboot it for some reason I have a 10 hour period of time to do so when no one will notice :)


Personally I think 2003 has improved on the Windows server architecture, but there are still things I would prefer to run on a *nix system. I'm still not comfortable running industrial systems on Windows.

 
Tarwn, yes, but as I like to say, it's the carpenter, not the toolbox; or it's the driver, not the car. I can drive a Mercedes into a tree just as well as a Ford. Bad code can bring even an AS/400 to its knees. We run "industrial strength" apps on Win 2003 with no problems at all, but the code is very carefully written; it has to be, because one bad query can bring it all crashing down. No platform is immune to this: I can write a Cartesian-join query that will blow up a 'nix box as well as anything else.

If more than 1 goose are geese, why aren't more than 1 moose meese??
 
"it still can't handle the load; it degrades so badly we reboot every 12 hours"
This isn't even remotely a Red Hat Linux problem. Unless your site is being hit so hard that the sheer number of connections degrades performance, but then again, that isn't a Linux problem either.

Don't spend your money on a Linux programmer to solve your problems. Hire an e-commerce architect, or an e-commerce and Java performance and capacity-planning consultant. If you aren't running an app server (e.g., WebSphere, WebLogic), then you are probably running straight Apache with PHP against a back-end database, and that is where you should look to solve your problem.
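Before hiring anyone, it's worth capturing some basic numbers during one of those 12-hour degradation cycles. A rough sketch, assuming a typical Linux box with the web server on port 80 (the port is illustrative):

```bash
#!/bin/bash
# Snapshot of load, memory, connection count, and the biggest memory consumers.
date
uptime                                        # load averages
free -m                                       # RAM and swap usage, in MB
echo -n "established connections on :80: "
netstat -ant | grep ':80 ' | grep -c ESTABLISHED
ps -eo pid,rss,pcpu,comm --sort=-rss | head   # top processes by resident memory
```

Run it a few times as the site degrades and the numbers usually point at the layer (connections, memory, or a single process) long before anyone needs to touch the kernel.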
 
I work for the government, and right now I'm working to pull all of our old applications and data off a mainframe and onto Windows Server 2003 boxes. We are rewriting the applications to use ASP.NET with SQL Server 2005 back-end databases. This is the second government entity I've worked for, and both have primarily used Microsoft technologies.

I'm really glad that I don't have to support mainframe stuff (or Linux, for that matter); too complicated ;-)
 
It is quite funny to me that whenever a Windows server has problems, regardless of the cause, large numbers of people attack the operating system vehemently, but if Linux has similar problems those same people say 'It's not Linux, it's your code.' Well, mate, the code is exactly the same, only ported over; it worked fine on Windows and crashes Linux.
 
Then switch back. We had similar problems and had to do so, and we will never go down the open-source road again until it's ready for the big leagues; it's not mature enough yet.
 
What don't you understand about it not being the OS? If the situation were reversed (Linux to Windows), I still wouldn't say it was the OS.

You haven't even indicated what you use as the web server, app server, database, or programming language, or the number of connections at peak, but it ISN'T the OS.

Is it a three-tiered architecture, or is it all sitting on the same server? If it is on the same server, then what DB were you using? You certainly didn't move an MS SQL Server over to Linux. And if you were using MySQL, then you certainly didn't move a Windows-compiled version over to Linux. Same with Apache, if that is your web server: you didn't move a Windows-compiled Apache over to Linux.

So what exactly did you do? Linux didn't cause your problems. You screwed yourself.
 
In another forum I read, there are 2 members who have an ongoing Windows-vs-Linux argument. One member converted to Linux, swears everything is rosy, but still maintains a Windows-only box for the apps not on Linux. The other member is firmly entrenched in Windows. The Windows fellow has tried the various Linux distros and is pretty even-handed in his evaluation of how they work and why they don't work for him. The Linux convert has adopted the attitude that if it is open source, it is worth all the trouble to set up and make it work.

The reason for the background is that the Windows guy has been trying to install one of the latest commercial distros on his test machine, and it flat-out won't install, even though this machine has run a previous version of this particular distro and many other distros that are not commercial and mainstream.

The reply from the Linux convert has finally taken on the form I keep hearing about: "It's not a distro problem, it's your hardware."

 