
8hr mysql limit?


svar (Programmer), Aug 12, 2001
In the MySQL manual, section 4.12.16 ("MySQL-Win32 compared to Unix MySQL"), it says:
"MySQL uses a blocking read for each connection. This means that: A connection will not be disconnected automatically after 8 hours, as happens with the Unix version of MySQL."
This is of interest because I have an application running continuously (on Linux) and filling a MySQL database. If the connection hangs up, that probably means I will get corrupted tables, among other things.

I wonder if this is still true with recent MySQL releases, and whether there is a way to disable it on Unix.

Thanks,
svar
 
What kind of application is it? Most applications that access a database make queries or groups of queries and close the connection when done with that query or group. I think this is probably a simple misunderstanding. It is not necessary (or efficient) for an application to maintain a continuous connection to the database. This is why, in the MySQL C API, the PHP API, or the Perl DBI layer, you see references to mysql_connect, mysql_pconnect, etc.

In your case, your application could use mysql_pconnect() each time it makes a query, which means that when your application accesses the database, it will leave that connection open waiting for the next connection call. If mysql_pconnect() finds an already-open connection, it will use that connection. This way, even if a persistent connection times out, your application will simply open a new one.
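
If you are in Perl rather than PHP, the closest DBI analogue I know of is DBI->connect_cached(), which returns a cached handle when a live one with the same arguments already exists and opens a fresh one otherwise. A minimal sketch (database name, user, and password are placeholders):

use strict;
use DBI;

sub fetch_max {
    my ($table) = @_;
    # connect_cached() reuses a live cached handle with identical
    # arguments, or transparently opens a new one if the old one died
    my $dbh = DBI->connect_cached('DBI:mysql:mydb', 'user', 'pass',
                                  { RaiseError => 1, AutoCommit => 1 });
    my ($max) = $dbh->selectrow_array("SELECT MAX(id) FROM $table");
    return $max;
}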
 
Thanks. It is still not clear to me. From your reply I gather that:
1) the 8-hour limit holds
2) mysql_pconnect may be better, but I doubt it will be safe. My application transfers data stored in compressed (not ordinary zip/gzip) files on a remote server into tables on my machine. To this end, the application opens both a telnet and an FTP connection to the remote server, runs a decompression utility on the server, gets back the result as an ASCII file, parses it, and stores it in the appropriate database table. The list of files grows every 2 minutes, and the files are cyclically rewritten every 4 months or so. I am starting with about a 3-month delay (that is, there is already 3 months of data in there), but the whole process is faster than the 2 minutes it takes for a new file to appear.

I am worried about the limit because this is not just queries being run; there are open database connections, and if a disconnect occurs while the application is writing data to a table (and probably even if it does not), the tables will be corrupted, and myisamchk is a pain because there are just too many tables in the database.
Are my worries correct? Any suggestions?


P.S. I had a SCSI termination error on my disk, so I moved everything to another disk, but I get errors when starting MySQL:


010821 12:49:02  mysqld started
/usr/sbin/mysqld: File './quality4-bin.index' not found (Errcode: 13)
010821 12:49:03  Could not use quality4-bin for logging (error 13)
010821 12:49:03  /usr/sbin/mysqld: Can't find file: './mysql/host.frm' (errno: 13)
010821 12:49:03  mysqld ended
010821 14:23:46  mysqld started
/usr/sbin/mysqld: File './quality4-bin.index' not found (Errcode: 13)
010821 14:23:46  Could not use quality4-bin for logging (error 13)
010821 14:23:46  /usr/sbin/mysqld: Can't find file: './mysql/host.frm' (errno: 13)
010821 14:23:46  mysqld ended
Before that I had a different error (I believe the difference is that there was a socket file on the original disk which I had not yet deleted):

010817 19:08:58 mysqld started
010817 19:08:58 Can't start server : Bind on unix socket: Permission denied
010817 19:08:58 Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
010817 19:08:58 Aborting

010817 19:08:58 mysqld ended

Any ideas what is wrong?
Thanks, svar


 
Either you don't understand what I am saying, or I don't understand what you are doing.

Are you telling me that you have a single INSERT query that takes over 8 hours to process?

Good... I thought not ;-).

Now, I know it's possible to just open a connection and send many queries to MySQL, but I'm sure your application runs in some sort of loop, where it connects to the remote server, downloads data, then runs a query to insert that data.

In that case, the standard method is to open the MySQL connection when your data is prepared for insert, run the INSERT query, then close the connection. Thus for each iteration of your loop, your total connection time is probably only a few milliseconds. MySQL is very efficient at opening and closing connections.

You can use persistent connections, which will optimize your application a little more, since each loop can use an existing connection if one is available. I suppose your concern with persistent connections is that a connection might stay open across many loops, approach 8 hours of "live" time, and close in the middle of the next query. I doubt that MySQL works that way, though; I think it's more likely that older connections are "retired" after a few minutes. You might want to subscribe to the MySQL mailing list and ask that question there.
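
For what it's worth, I believe the 8-hour figure is simply the default of the server's wait_timeout variable (28800 seconds), so the idle timeout itself should be tunable in my.cnf rather than hard-wired. Something like (untested; adjust to taste):

[mysqld]
set-variable = wait_timeout=28800    # idle seconds before the server drops a connection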

However, just to be safe, use the standard mysql_connect(). I don't think you will have any performance problems using connects and disconnects in your loop, and it's definitely safer than leaving a connection open indefinitely.
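
In Perl DBI terms, the shape I'm suggesting is roughly this (the parse step, table, and columns are made-up placeholders):

use strict;
use DBI;

while (my $record = get_next_record()) {   # placeholder for your telnet/ftp/parse step
    # open a connection only for the duration of this one insert...
    my $dbh = DBI->connect('DBI:mysql:mydb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 1 });
    my $sth = $dbh->prepare('INSERT INTO cdr_table (numindex, value) VALUES (?, ?)');
    $sth->execute($record->{numindex}, $record->{value});
    # ...and close it right away, so nothing stays open between files
    $dbh->disconnect();
}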
 
Thanks.
Yes, it is clear now. I was under the impression that opening the connection once and leaving it open was in some sense desirable. I now understand this was probably very dumb.
Still, I have a lot of stability/corruption problems (though I am only now running with the connection opened only when needed). For example, on doing mysqldump I get error 127. Not all the time, but often enough to make the process a pain.

I have found that if DBI dies for some reason, it is almost certain that at least some tables are corrupted. In most cases myisamchk is able to recover only part of the records (from 50 to 80-90%). For recovery, the perl script makes a system call to mysqldump every N files parsed. Unfortunately, this gives some problems too. Sometimes mysqldump does not complete, and sometimes, even though it completes, one cannot subsequently recover all the data because mysql ... < backup.sql complains about duplicate primary keys. This appears to be traceable to the creation of the files /var/lib/mysql/host-name-bin.xxx. These files appear to contain SQL statements like those produced by mysqldump. Not only can these files get quite big, taking a lot of space in /var/lib/mysql (a mount point in a logical volume group), but I also found that removing the last one was essential to get rid of the duplicate-primary-key message. Removing these files (at least the last one) allowed the database tables to be recreated from the mysqldump backups.

Another issue is the temporary files in TMPDIR. According to the manual, one can control where these go by editing TMPDIR in safe_mysqld; I have not found any such line in safe_mysqld.

Thanks again,
svar
 
No, the configuration file you want is called "my.cnf". With a default install of MySQL, you are not yet using a configuration file; MySQL runs with defaults unless you place "my.cnf" in /etc. Look for these files in your mysql directory or in your original source directory:

my-huge.cnf
my-large.cnf
my-medium.cnf
my-small.cnf

Each of these is a sample configuration file for a different server size. When you find the one you want, just copy it to /etc/my.cnf and edit it there for your specific needs. Then restart safe_mysqld and test MySQL.
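
For example (the source path is whatever your distribution used; /usr/share/mysql is typical for an RPM install):

cp /usr/share/mysql/my-large.cnf /etc/my.cnf
# edit /etc/my.cnf for your needs, then restart the server
safe_mysqld &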

If you have command-line access to your machine, I recommend you learn how to administer MySQL from the command line, including the usage of mysqldump, myisamchk, etc...
 
Thanks a lot, this appears to solve everything. Indeed I had an /etc/my.cnf corresponding to a small system, even though I have lots of RAM. So it appears that the 16M set for, say, mysqldump was insufficient (the dump files were larger than this, hence the corruption, crashes, etc.). With hindsight it makes sense, but it would have been far from obvious without your help.

Incidentally, I am using the command line for these commands, but to automate them I also have them in a perl script as system calls, e.g.

system("mysqldump -u user --password=my_pass --quick MYDATABASE > backup.sql");

Thanks once again for your invaluable help
svar
 
Maybe I was too quick to rejoice. I now got a "MySQL client run out of memory" error, apparently when trying to get the MAXIMUM value of a field in a table with some 36117 records. Here is the printout:

tablename=CDR_OUT_49_1 relcode=16 anum=980 bnum=4311359 nata=Inl natb=Nat opc=4129 dpc=5136 value=6.65590608950844 duration=26.9 dest=49 type=OUT
tablename=CDR_OUT_49_1 rindex=36117 dbh=
Out of memory (Needed 8164 bytes)
DBD::mysql::st execute failed: MySQL client run out of memory at cdrpro19.pl line 1169, <IN> line 23511338.
Database handle destroyed without explicit disconnect, <IN> line 23511338.
svar@quality4:~/CW2 >

This happened in the following perl routine. Note that I print the value of $dbh to make sure it is undefined, so that there are no dangling connections.

sub get_count {
    my $tablename = shift;
    my $rindex    = shift;
    my $file      = shift;
    my $fileindex = shift;
    print "tablename=$tablename rindex=$$rindex dbh=$dbh\n";
    my $dbh1 = DBI->connect('DBI:mysql:CD2', 'svar', 'mypass',
                            { RaiseError => 1, AutoCommit => 1 })
        || die "Database connection not made: $DBI::errstr\n";
    my $sth2 = $dbh1->prepare("SELECT MAX(numindex) FROM $tablename")
        or die "Cannot prepare SELECT MAX FROM tablename=$tablename statement for file=$file index=$fileindex err=$DBI::errstr\n";
    unless ($DBI::err) {
        $sth2->execute();
        $$rindex = $sth2->fetchrow_array();
        $$rindex++;
        print "executed sth2 1 index=$$rindex for $tablename\n";
    }
    &close_database($dbh1);    # external sub that disconnects the handle
}    # end of S/R get_count
1;
Any hints?
Thanks, svar
 
I am going to look at $dbh->{Kids} and $dbh->{ActiveKids} to see if I have leaks there. Probably this is the problem...

svar
 
Hi svar,

when you post your code, try putting it inside the TGML code tags (click on the "Process TGML" link below). It makes it much easier to read.

It might just be that your query or combination of queries used too much memory, period. You might need a more powerful computer.

Also, your error message says "MySQL client run out of memory at cdrpro19.pl line 1169, <IN> line 23511338." Line 23511338?! What are you working with that has 23 million lines? Am I missing something here?
 
Thanks again.
Well, the only thing I can think of is that the file I am reading may be large; that is possible, but I do not see why it affects MySQL. Does it mean that perl gobbles up too much memory reading this file and does not leave enough for MySQL? There are no queries except getting the maximum value of a field in a table, and inserts. Each such insert or maximum-fetch now opens a DBI connection, prepares the statement, executes it, and closes the statement handle.

Otherwise the PC is:

RAM: 2 GB
CPU: 2 x PIII 1000 MHz
Disks: 3 x 37 GB SCSI

in other words, not bad at all.
thanks again, svar
 
Yeah, I agree... it doesn't make sense. The queries themselves should be no problem for MySQL.

Except I still have to focus on your error message "<IN> line 23511338". Is <IN> your filehandle? Are you evaluating that file one line at a time, or are you reading the whole file into an array? Is that how many lines are in the text file you are processing? How big is the text file (#lines, #MB)?

Have you tried running "top" or some other CPU/memory resource monitor to see what's happening during processing?

Here is an extreme method of debugging, if you really keep having this problem: instead of running your queries through MySQL, print each one out and log it to another text file. See if your script can do that without a memory problem. Then read your logged queries and make sure they look right.
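
A bare-bones sketch of that idea (build_next_insert() is a hypothetical stand-in for whatever code currently assembles your queries):

use strict;

# append each query to a log file instead of executing it, to see
# whether the memory problem is in the parsing or in the MySQL client
open(my $log, '>>', 'queries.log') or die "Cannot open query log: $!\n";
while (my $sql = build_next_insert()) {   # hypothetical parse/build step
    print $log "$sql\n";
}
close($log);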

If you still don't see any problems, let me know.
 
Thanks, will try.
Regarding <IN>: yes, it is the filehandle I am reading the data from. Its size is variable, but it can be as large as 28 MB and 1.5 million lines. Not your ordinary file, but true.

As I understand it, my options, apart from splitting the file, are either

while (<IN>) { ...

or

my @tmp = <IN>;
foreach my $line (@tmp) { ...

I am using the first method, which should be best when the file is large, since it never holds the whole file in memory.

Maybe I will try splitting the file into smaller chunks. I/O operations are expensive, however.


svar
 
I did some basic cleanup (mostly getting rid of a primary-key field and calling $sth->finish() on perl statement handles), and things look better. However, I get a screen hang (not even Alt+Ctrl+Backspace or Alt+Ctrl+Delete will kill it), and upon reboot I get the message "Hardware Monitor found an error: enter power setup menu for details", without any helpful hints (I had disabled power management).

None of this is relevant to MySQL, but what is relevant is that the periodic dumps went fine UNTIL the size of the dump.sql files crossed the 16M limit. Then they became much smaller (I was not doing a WHERE dump), so it appears that my.cnf does not allow the output of mysqldump to exceed 16M.

Incidentally, the corruption starts exactly at the place where the mysqldump file is truncated.

Am I doing something wrong in my.cnf? Or is the size of my db somehow still limited to 16M?

I am attaching my.cnf:

# Example mysql config file for very large systems.
#
# This is for large system with memory of 1G-2G where the system runs mainly
# MySQL.
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /var/lib/mysql) or
# ~/.my.cnf to set user-specific options.
#
# In this file you can use all the long options that a program supports.
# If you want to know which options a program supports, run it
# with the --help option.

# The following options will be passed to all MySQL clients
[client]
#password = your_password
port = 3306
socket = /var/lib/mysql/mysql.sock

# Here follow entries for some specific programs

# The MySQL server
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
set-variable = key_buffer=384M
set-variable = max_allowed_packet=1M
set-variable = table_cache=512
set-variable = sort_buffer=2M
set-variable = record_buffer=2M
set-variable = thread_cache=8
set-variable = thread_concurrency=4 # Try number of CPU's*2
set-variable = myisam_sort_buffer_size=64M
log-bin
server-id = 1

# Uncomment the following if you are using BDB tables
#set-variable = bdb_cache_size=384M
#set-variable = bdb_max_lock=100000

# Point the following paths to different dedicated disks
#tmpdir = /tmp/
#log-update = /path-to-dedicated-directory/hostname

[mysqldump]
quick
set-variable = max_allowed_packet=256M

[mysql]
no-auto-rehash
#safe-updates # Remove the comment character if you are not familiar with SQL

[isamchk]
set-variable = key_buffer=256M
set-variable = sort_buffer=256M
set-variable = read_buffer=2M
set-variable = write_buffer=2M

[myisamchk]
set-variable = key_buffer=256M
set-variable = sort_buffer=256M
set-variable = read_buffer=2M
set-variable = write_buffer=2M

[mysqlhotcopy]
interactive-timeout
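
One thing I notice in the file above: [mysqldump] is allowed max_allowed_packet=256M, but the [mysqld] server section still has 1M, and I believe the effective limit is whichever side is smaller. If the truncation is packet-related, perhaps the server-side value needs raising as well, e.g.:

[mysqld]
set-variable = max_allowed_packet=256M   # guess: match the mysqldump setting on the server side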
 
Just a quick note to say thanks: problem solved. The problem was that I needed to pass the parameter

aic7xxx=seltime:0

to lilo for my SCSI adapter (AHA29160), as timeouts were responsible for the problem and the corruption.
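
In case it helps anyone else: the parameter goes on an append line in /etc/lilo.conf (a sketch; the image and label lines will differ on your system), after which you rerun /sbin/lilo:

image=/boot/vmlinuz
    label=linux
    append="aic7xxx=seltime:0"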

thanks again, svar
 
I have been unsuccessfully trying to establish a connection to MySQL. The message from the SQLException is "Bad Handshake". Does anyone know of a cause other than using the wrong driver for this type of exception?
 