
Enterprise Supercomputer?


bobcat

IS-IT--Management
May 15, 2001
53
US
This may not be the best forum for this question, but I don't think there's really a dedicated one on the site for it.

I hear about universities, government agencies, etc. chaining together a bunch of computers to make one big supercomputer. Is there such a thing for business use? I mean, we have a lot of desktops that are more powerful than what the typical Joe Employee uses. Is there a program or something that can siphon off those extra CPU cycles and plop them on, I don't know, an HP NetServer running SQL & Baan? There would seem to be a market for it, since it's really expensive to upgrade a server but a lot cheaper to get a slightly faster processor when you're buying desktops. Besides, as long as you keep a decent flow of new desktops coming in, you might not ever have to buy a bigger server.

Maybe this'd work for RAM as well?

Just a thought.
Todd
 
For certain mathematically intensive applications, this kind of distributed computing already exists; for instance, SETI@Home and Intel's peer-to-peer program.
What you may be referring to is the Beowulf Linux clustering project. That does not let general-use PCs donate CPU cycles to a distributed supercomputer, because it requires a specialised networking backbone between hundreds or thousands of identical PCs, as well as a specialised Linux kernel.
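
For a flavour of what those cluster nodes actually run, here is a minimal sketch of the scatter/compute/reduce pattern in the Beowulf message-passing style. I'm using the mpi4py Python bindings purely for illustration (real Beowulf codes are usually C or Fortran on MPI, and the sum-of-squares workload is just a stand-in):

    # Launch one copy per node, e.g.: mpirun -n 4 python sketch.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    if rank == 0:
        # The root node carves the problem into one chunk per node.
        chunks = [list(range(i, 100000, size)) for i in range(size)]
    else:
        chunks = None

    chunk = comm.scatter(chunks, root=0)   # each node receives its slice
    partial = sum(x * x for x in chunk)    # purely local work, no chatter
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print("sum of squares:", total)

Note the shape: one scatter, a long stretch of independent compute, one reduce. Communication happens only at the edges, which is exactly the profile the niche applications below share.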

Again, it tends to be compute-intensive applications where data can be doled out to individual computers (or groups of computers) and worked on semi-independently. Typical apps are geophysics, pharmaceuticals and research physics.
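
To make "doled out and worked on semi-independently" concrete, here is a toy version of the SETI@Home-style work-unit pattern, simulated in a single Python process with a thread pool (the unit boundaries and the sum-of-squares "analysis" are my own illustrative stand-ins, not anyone's real protocol):

    from concurrent.futures import ThreadPoolExecutor, as_completed

    # The coordinator carves the job into independent work units up front.
    work_units = [(lo, lo + 1000) for lo in range(0, 10000, 1000)]

    def process_unit(unit):
        lo, hi = unit
        # Stand-in for the real analysis: no unit needs data from any
        # other unit, which is exactly what makes the pattern scale.
        return sum(i * i for i in range(lo, hi))

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(process_unit, u) for u in work_units]
        total = sum(f.result() for f in as_completed(futures))

    print("combined result:", total)

Swap the thread pool for idle desktops polling a server and you have the SETI@Home model; results can come back in any order because the units never talk to each other.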

For most business use this kind of cycle sharing is not useful. Networks within most businesses are currently too slow and unreliable to make use of what would have to be very complex time-sharing algorithms. OLTP applications, again, are simply not suited to this kind of treatment because, even with a kind of super-Napster distributed database, the amount of processing power needed to coordinate it all would negate any cost savings motivating its development.
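
A quick back-of-envelope calculation shows why. Assume (purely for illustration) a 5 ms OLTP transaction and a fixed 2 ms network/coordination cost on every hand-off to a remote desktop:

    def effective_speedup(n_workers, compute_ms, network_ms):
        # Compute parallelises perfectly, but the per-transaction
        # coordination cost does not shrink with more workers.
        serial = compute_ms
        parallel = compute_ms / n_workers + network_ms
        return serial / parallel

    for n in (1, 2, 10, 100):
        print(n, "workers ->", round(effective_speedup(n, 5.0, 2.0), 2), "x")

The speedup approaches 5/2 = 2.5x no matter how many desktops you add, and with one remote worker the hand-off actually makes things slower (0.71x). The coordination cost, not the compute, sets the ceiling.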

So until 10Gb networks to the desktop become a reality, this kind of distributed supercomputer will remain restricted to a few niche areas.
Someone somewhere will be working on cycle hijacking, but I doubt they will be willing to share their insights here.

Ian

"IF" is not a word it's a way of life
 
Wow, thanks for the quick and very informative response.

Todd
 