
AIX UNIX VS IBM xSeries Linux System Servers 2

Status: Not open for further replies.

tms05 (Technical User), Apr 2, 2003, 21 posts, US
We are considering moving from AIX UNIX to xSeries Linux. Does anyone have any insight into the pros and cons? Will this still allow for VBA coding and FTP automation? We were told that the xSeries will run faster and speed up our processing times. We use Transoft as the driver to bridge into Windows. Does anyone have suggestions or comments regarding the differences between the two?
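For concreteness, the FTP automation we do today is simple scripted uploads. Something like this minimal Python sketch would be the Linux equivalent (the host, credentials, and file names below are made-up placeholders, not our real setup):

```python
# Minimal sketch of a scripted FTP upload using Python's standard ftplib.
# Host, credentials, and paths are hypothetical placeholders.
from ftplib import FTP

def upload_report(host, user, password, local_path):
    """Connect, upload one file in binary mode, and disconnect."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(local_path, "rb") as f:
            # STOR stores the file under its basename on the server
            ftp.storbinary("STOR " + local_path.split("/")[-1], f)

if __name__ == "__main__":
    upload_report("ftp.example.com", "batchuser", "secret", "/tmp/daily_report.csv")
```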
 
Nope, but if you already have POWER hardware, have you considered running Linux on it instead of moving to xSeries, or even running Linux apps on AIX?
 
At the company I work for, Linux has a problem with filesystems that go read-only. Suffice it to say, we opened a ticket with Red Hat and they have never given us a solution, so it continues to happen. We have RHEL NFS clients on Xen virtuals, and when the NFS server goes down, the NFS clients on the virtuals get stale NFS handles, although the physical RHEL servers don't exhibit the problem. Again, Red Hat has never provided a solution. If you use SRDF and fail over to another data center, making the secondary devices read-write, then when you return to your primary data center and make those devices read-write and the secondary read-only again, you may get errors like we do. Yet again, in over two years Red Hat has never provided a solution, or even an answer as to why it occurs.
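To at least catch the read-only flips before the applications do, we ended up watching /proc/mounts ourselves. A rough sketch of the idea in Python (the watch list is an example; ours is longer):

```python
# Rough sketch: flag filesystems that have silently gone read-only by
# parsing /proc/mounts. The watch list is an example, not a real config.
WATCHED = {"/", "/var", "/data"}   # mounts that must stay read-write

def readonly_mounts():
    ro = []
    with open("/proc/mounts") as f:
        for line in f:
            # /proc/mounts fields: device, mountpoint, fstype, options, ...
            device, mountpoint, fstype, options = line.split()[:4]
            if mountpoint in WATCHED and "ro" in options.split(","):
                ro.append(mountpoint)
    return ro

for mp in readonly_mounts():
    print(f"WARNING: {mp} is mounted read-only")
```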

You may have better luck with xSeries hardware, but at my company we are using two other vendors and have frequent hardware problems. Some of the server crashes we see could have been caught if the hardware and OS were tightly coupled, as with AIX/POWER and Solaris/SPARC.

I have been trying to get SystemTap (the Linux answer to DTrace) running on RHEL to troubleshoot a problem. It is a manual setup, meaning that if the server is upgraded you have to install the files again by hand. To make matters worse, the procedure is different for RHEL 4, 5, and 6!

Linux is just a kernel, not a complete operating system where userland and kernel are developed together. Consequently, when Red Hat or Novell decides to add some new feature, version problems are likely. Red Hat also likes to change paths, like the nonsense where they want to move everything from /usr/sbin and /usr/bin into one location, which of course will break all of our in-house utilities. Or the scheme Red Hat is cooking up to replace syslog with a new utility that writes binary files you can't grep or awk.
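Part of why the binary-log idea worries us: today, anything that can read text can post-process the logs. A trivial sketch of the kind of in-house filtering that plain-text syslog makes possible (the pattern and path here are just examples):

```python
# Trivial example of in-house log filtering that plain-text syslog allows;
# a binary log format would force everything through a dedicated reader.
import re

PATTERN = re.compile(r"error|fail|panic", re.IGNORECASE)  # example pattern

with open("/var/log/messages") as log:   # classic syslog location
    for line in log:
        if PATTERN.search(line):
            print(line, end="")
```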

Oracle is adding Ksplice to their Unbreakable Linux in an upcoming version, from what I read. Yawn. AIX has had multibos since 2006 or 2007. There is nothing for upgrading Red Hat or SLES in place like Solaris' Live Upgrade or AIX's alternate disk install. The Linux LVM is nothing like the AIX LVM, and in fact, as unwieldy as the Solaris Volume Manager is, it is still better than the Linux LVM.

There are no disk management utilities in Red Hat like rmdev and rmpath on AIX, or luxadm, cfgadm, and devfsadm on Solaris. If you need to remove multiple disks from, or add multiple disks to, a Red Hat server, it can take an hour or longer depending on how many LUNs you have to configure.
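For comparison, the usual manual procedure on RHEL is to poke the SCSI midlayer host by host through sysfs, roughly as in this sketch (run as root; on AIX a single cfgmgr pass does the equivalent):

```python
# Sketch of the manual LUN rescan on Linux: write "- - -" (wildcard
# channel/target/LUN) to each SCSI host's sysfs scan file. Needs root.
import glob

for scan_file in glob.glob("/sys/class/scsi_host/host*/scan"):
    with open(scan_file, "w") as f:
        f.write("- - -")   # rescan all channels, targets, and LUNs
    print(f"rescanned {scan_file}")
```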

AIX and Solaris are years ahead of Red Hat and SLES in features. Red Hat likes to announce they are adding this or that, and as I alluded to earlier, it has often already been in AIX and Solaris for a decade. Red Hat's Cluster Suite, their answer to PowerHA, is a joke. AIX has folded some of the HACMP/XD components into the OS and has more functionality than Red Hat Cluster Suite. GFS on Red Hat is broken, and we have been removing it wherever we can; again, Red Hat has provided no solutions to the GFS problems. We have two- and three-node clusters that are constantly rebooting because of cman or some other cluster problem, and, like a broken record, there are no answers from Red Hat.

Linux is technically inferior to AIX and Solaris. For one thing, AIX and Solaris were engineered, and are still developed, by professional engineers. Some will point out that HP and IBM, among others, commit to the Linux kernel, and while that is true, they all have competing interests; do you really think that leads to a stable kernel?

There isn't one area of Linux that I can point to and say it is better than AIX or Solaris.

You may not have any influence on the decision, much as I have none at my company, but what I have given you above is truthful and accurate and should be considered in any decision.
 
Brilliant post. Thank you, blarneyme.

You get what you pay for.

Linux is great - when it works; if it doesn't, you can only hope for a fix.

Proprietary operating systems: if something is broken, the vendor will fix it as soon as they can once you let them know you've found the problem - it is in their best interest.
 
I failed to mention that in one incident we opened with Red Hat, they did identify what the problem was, but it lingered for a LONG time because they released it to "the community" and hoped someone would fix it for them for free. After months, they finally decided they would have to provide a fix themselves. That doesn't happen with IBM or Sun (when they were around), and wouldn't happen with Oracle.

Many companies cite cost as the reason to move to Linux, but that is a red herring. Take Red Hat's Satellite server: you have to pay for management and provisioning licenses for your servers, whether they are physical or virtual. With thousands of servers like we have, that adds up to hundreds of thousands of dollars per year for support. And I invite you to re-read our experiences with Red Hat support.
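To make "hundreds of thousands" concrete, here is the back-of-envelope math; every figure below is a hypothetical placeholder, not a real Red Hat price, since list prices change year to year:

```python
# Back-of-envelope only: all figures are hypothetical placeholders, not
# real Red Hat prices. The point is how per-server fees scale with fleet size.
servers = 2000        # hypothetical fleet size
subscription = 80.0   # hypothetical RHEL subscription ($/server/yr)
management = 100.0    # hypothetical Satellite management ($/server/yr)
provisioning = 50.0   # hypothetical provisioning module ($/server/yr)

annual = servers * (subscription + management + provisioning)
print(f"annual support/management cost: ${annual:,.0f}")  # $460,000 here
```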

And remember, RHEL 6 doesn't support Xen, so we have to move to KVM. Red Hat's solution for managing KVM is RHEV-M. Red Hat is not an innovator or a developer; they scour SourceForge or find something to buy and use it. All companies do this, but realize that RHEV-M is a WINDOWS-ONLY product developed in .NET! There is NO command line for RHEV-M. Red Hat is trying (and has had a couple of failures - surprise) to port the .NET software to Java so it can run on Linux. They told us they will work on adding command line utilities after they get it ported. Not to mention, managing those KVM virtuals through RHEV-M will cost you!

That is no joke. Just the cold, hard facts you should consider if you want to tackle the painful beast.

On our SRDF problem, Red Hat told us they didn't have an EMC array for testing. You would laugh if it weren't serious business - aren't they in the business of supporting their customers? IBM has Sun storage, HP has IBM storage, Sun had IBM storage. They do that to support their customers. Not Red Hat. They'll just say they don't have some piece of software or hardware, and that it isn't in their contract to support anyway.

As DukeSSD mentioned, you get what you pay for. Consider that IBM spends $7-10 billion per year on R&D (depending on the year), Microsoft spends $10+ billion per year, and even Apple spends a billion per year. Red Hat spends less than $100 million.
 
Thus, of course, Linus didn't sit down in a vacuum and suddenly type in the Linux source code. He had my book, was running MINIX, and undoubtedly knew the history (since it is in my book). But the code was his. The proof of this is that he messed the design up. -- Andrew Tanenbaum

That quote alone should end any arguments about wanting to use Linux.
 
We run SUSE and Ubuntu Server on IBM xSeries hardware. We have more than 20 servers, running everything from SAP to a major website. We used to run mostly on pSeries/AIX.

There is no doubt that pSeries/AIX is more advanced in many ways.

BUT

We get much better I/O throughput on our xSeries boxes. They are also much cheaper, which means we can buy more of them instead of shoe-horning everything onto a pSeries box running multiple LPARs.

The amount we pay for yearly maintenance on our pSeries boxes is often enough to outright buy several pretty capable xSeries boxes.

So, on balance, for us it was worth switching to xSeries/Linux from pSeries/AIX.
 
My company, a Fortune 500, is using RHEL. They only stick the name "Enterprise" in there to make it sound good, but it is anything but enterprise-ready.

Support is abysmal. We have had one case open for 1.5 years. Another case we have open has actually been on Red Hat's books for almost 7 years and they haven't done anything about the problem.

Filesystems go read-only. No disk management utilities. No way to upgrade servers to a new version. Their clustering solution is a joke; it is broken, and they have never provided a fix.

We don't have xSeries hardware, but we purchased from another vendor, and that gear is constantly having problems.

Then you get to Red Hat's solution for KVM management, which was to purchase a Windows-only product developed in .NET and then try to port it to Java - complete with no command-line utility. Yeah, real enterprise-ready.

RHEL, and Linux in general, is just sad, broken crapware.
 
