
MySQL Overhead

Status
Not open for further replies.

Guest_imported

New member
Jan 1, 1970
0
0
0
We're looking for ways to store small amounts of data in standalone units running Linux. Processor speed and memory in these devices are limited real estate, which is why we're considering something like MySQL.

What I'd like to know is whether there are any white papers, benchmarking information, or just practical points of view from experienced users regarding its overhead and stability.

Most of the data stored by the devices will involve simple writes and limited queries, and will be limited to about 100,000 rows per device.

If you have any comments I'm all ears.

Thanks,

-john
 
I've seen MySQL take as little as 2 to 4 MB of RAM to run the daemon, but I have not spent any time tweaking it for a low-memory situation. I believe there are choices about how you compile it that will help.
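To give a concrete idea of the compile/configure choices mentioned above, here is a sketch of a low-memory my.cnf, loosely based on the "my-small.cnf" template that ships with MySQL. Exact option names and defaults vary by version, so treat this as a starting point rather than a recipe:

```ini
[mysqld]
skip-networking      # Unix-socket access only; no TCP listener
key_buffer=16K       # tiny shared index cache
table_cache=4        # few simultaneously open tables
sort_buffer=64K      # per-connection; keep small
net_buffer_length=2K
thread_stack=64K
max_connections=5    # an embedded device rarely needs many sessions
```

The per-connection buffers (sort_buffer, net_buffer_length, thread_stack) are the ones that multiply with load, so those matter most on a memory-starved box.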

From practical experience, I have compiled Apache, MySQL and PHP to work on a 486 with 8 MB RAM, under FreeBSD. It took about 3 hours to compile, but it worked. I used a Pentium 90 with 16 MB RAM as my development server for a complete e-commerce application, including shopping cart, data entry, banner ad management, reporting, etc... (also under FreeBSD) and the machine ran like a top. I noticed absolutely no hesitation even with 2500 products in the database, making some fairly complicated queries. The only reason I stopped using it was my 1 GB SCSI drive finally failed.

I know there are commercial databases meant to run under a more limited environment, but among the open source SQL databases, I would say MySQL is probably your best bet.

At one time I was researching a similar concept and compiled a list of low-memory database applications. If you want I will try to dig it up for you.

What exactly are your processor/RAM limitations?
 
I have a question related to the overhead. I have a customer (I only support the infrastructure) who wants to upgrade his MySQL server. They are running Linux with a single 600 MHz processor and 1 GB of RAM. When I run 'top', it shows each spawned mysqld PID taking up 152 MB of RAM. It is an e-commerce system, and each user request opens another PID and another 152 MB of RAM. They have used all the RAM in the box because of so many users and mysqld PIDs. They would like to expand the system, because the web front end is getting 404 errors when it cannot open any more MySQL sessions. The CPU is only running at 20 to 30%.

My guess is that more RAM would allow more sessions, but my experience with Btrieve showed me that there are usually limits to the number of spawns. How do I accurately size and scale the server AND tweak MySQL to accept more connections? Can I cluster across multiple servers? Add more RAM?

Please help ASAP.
 
It seems to me that there are some database/application design issues here:

Do you mean to tell me that everyone who accesses the e-commerce system via the web opens a new MySQL (database) user session? Why? And every single user process takes up 152 MB? If so, that seems awfully wasteful; something is wrong here. I've never seen MySQL as a whole take up more than 30 or 40 MB of RAM. Note also that on Linux, top lists each mysqld thread as a separate process and charges the full shared memory footprint to every one of them, so those 152 MB figures almost certainly overlap rather than add up.

How many hits per hour/day are they getting to this site? What server-side programming language are you using? Can you give us an outline of the database design?

Why not just run all ecommerce sessions under one database user? That's how my e-commerce software worked and I never saw anything remotely like the trouble you're having.

My understanding is that MySQL is not so great at supporting many users, but that it is extremely quick at serving one user many times over. This makes it perfect for a web application because there need be only one user (the webserver/scripting process) making many quick requests.
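The pattern described above (a small, fixed set of long-lived database connections serving many quick web requests, instead of one session per visitor) can be sketched as follows. This is an illustrative toy pool, not any particular library's API, and the connect function is a stand-in for a real MySQL client call:

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: requests check connections out and
    return them, instead of opening a fresh session every time."""

    def __init__(self, connect, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())   # open once, reuse thereafter

    def acquire(self):
        return self._pool.get()         # blocks if all connections busy

    def release(self, conn):
        self._pool.put(conn)

# Usage with a stand-in connect function (a real one would call the
# MySQL client library):
opened = []
def fake_connect():
    opened.append(object())
    return opened[-1]

pool = ConnectionPool(fake_connect, size=2)
for _ in range(100):                    # simulate 100 web requests
    c = pool.acquire()
    pool.release(c)
print(len(opened))                      # prints 2: only 2 sessions ever opened
```

A hundred requests reuse two connections; memory cost stays flat no matter how busy the site gets, which is the point of the one-database-user design.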

If the e-commerce application is written in Perl, then there might be a scalability problem: for every request, Perl forks a new process, and maybe it also starts a new database connection for every request (I have no experience using Perl to access MySQL). The way around this (I'm assuming Apache) is to use mod_perl or FastCGI, which can run all Perl requests in a single persistent process.

I recommend PHP for a server-side programming language, and I recommend FreeBSD over Linux for stability and performance. As far as hardware scalability, of course you can use a quad-processor machine, 4 GB RAM, lots of swap space, and RAID arrays to speed things up. I doubt you need a cluster yet, and that brings up a whole new series of headaches with application/database design. If a multiprocessor Intel machine is not enough (I can't imagine why), your client should consider a Sun or Alpha multiprocessor box.
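On the "tweak MySQL to accept more connections" question: the session cap is max_connections, and the memory cost of raising it is roughly the per-connection buffers times the number of simultaneous sessions. A hedged sketch of the relevant my.cnf knobs follows; these option names are from the MySQL 3.23/4.0 era (newer versions rename several with a _size suffix), so check your version's documentation:

```ini
[mysqld]
max_connections=200      # raise the session cap (the old default was 100)
sort_buffer=256K         # per-connection buffers: each session may
record_buffer=256K       # allocate its own copy, so keep them modest
thread_stack=128K
key_buffer=64M           # shared across all connections; can be generous
wait_timeout=60          # reap idle sessions sooner
```

The trade-off: shared buffers (key_buffer) are paid once, per-connection buffers are paid per session, so scaling connection count means shrinking the latter, not just adding RAM.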
 