
Query (SELECT/INSERT/UPDATE) slows when table grows


ranaaich (Programmer), Jul 18, 2005
Hi All,

I have a MySQL database whose tables are very simple. An example table looks like this:

CREATE TABLE myTable (
    token VARCHAR(255) NOT NULL PRIMARY KEY,
    token_count INT DEFAULT 1,
    frequency INT
) TYPE=InnoDB;

A Java program populates this table. The program reads from files (in batch mode) and inserts into myTable with the following logic:
(1) Check whether the string to be inserted is already present in myTable (uses SELECT).
(2) If it is not present, insert it into myTable (uses INSERT).
(3) If it is present, update the row, adding the current frequency to the stored one (uses UPDATE).

After processing each file, the program issues a COMMIT.
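
In SQL terms, the per-token logic is roughly this (a sketch only; 'some_token' and 17 are made-up placeholder values):

-- 1) check whether the token is already present
SELECT frequency FROM myTable WHERE token = 'some_token';

-- 2) if no row comes back, insert it
INSERT INTO myTable (token, frequency) VALUES ('some_token', 17);

-- 3) if a row comes back, add the current frequency to the stored one
UPDATE myTable SET frequency = frequency + 17 WHERE token = 'some_token';

-- once per file, after all its lines are processed
COMMIT;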

This goes well for some time, but after the table grows past 100,000 rows the process slows down considerably. The strings I'm handling are very long, and I'm not sure how well MySQL behaves with a very long string as the primary key.

My SELECT and UPDATE statements are all keyed on the primary key (token in this case), so the question of adding an index shouldn't arise. I tried adding an index on the token column anyway, but it degraded performance. I have also set query_cache_size to 1MB, but that hasn't improved things either.

A sample file has roughly 30,000 to 50,000 lines, each line holding a string, a space, and a frequency.

After the first few files, the Java program takes a long time to process a single file, and the per-file time keeps climbing from there (as the table grows, it slows down).

Any suggestions on how to improve performance would be greatly appreciated.

Thanks in advance.

 
What is taking all the time? Without more information, I would guess it's the query, and I would recommend indexing the table.

You could also simplify your processing with "REPLACE INTO" (non-standard SQL), though that would benefit from the index as well.
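
For example (a sketch with made-up values; note the semantics: REPLACE INTO deletes and re-inserts the row, so it overwrites the stored frequency rather than adding to it):

REPLACE INTO myTable (token, frequency) VALUES ('some_token', 17);

-- On MySQL 4.1 or later, INSERT ... ON DUPLICATE KEY UPDATE can
-- accumulate the frequency instead of overwriting it:
INSERT INTO myTable (token, frequency) VALUES ('some_token', 17)
ON DUPLICATE KEY UPDATE frequency = frequency + 17;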
 
I have tried that too. I created an index on the primary key, and that slowed the process down.

I found that the UPDATE for some tokens (looked up by primary key) takes almost 100 ms once the table grows past 120,000 rows.

Do you think a VARCHAR(255) primary key could be the problem?
If so, should I change the table design? Maybe something like the sketch below?
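
For instance (just an idea, not something I've tried; uk_token is a made-up index name): a short surrogate primary key keeps InnoDB's secondary indexes small, since every secondary index entry carries a copy of the primary key.

CREATE TABLE myTable (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- short surrogate key
    token VARCHAR(255) NOT NULL,
    token_count INT DEFAULT 1,
    frequency INT,
    UNIQUE KEY uk_token (token)  -- still guarantees one row per token
) TYPE=InnoDB;

Lookups by token would still go through the unique index, but the clustered index and its secondary entries would no longer be keyed on a 255-character string.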
 
You must have something bizarre going on with your system; perhaps it's not a database issue at all. A co-worker of mine maintains a system that polls about 3500 network devices every 5 minutes and writes that data into a round-robin DB kept in MySQL. The table has on the order of 300,000 rows in it and sustains about 30 insert/update/delete transactions a second.

I know MySQL can keep up with that amount of data, so you'll probably need to do some profiling on the database to see what's slowing it down. I'm afraid I'm not enough of an expert to advise you on how.
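
A couple of generic starting points, though (standard MySQL statements, nothing specific to your setup; 'some_token' is a placeholder):

-- does the lookup actually use the primary key, or scan the table?
EXPLAIN SELECT frequency FROM myTable WHERE token = 'some_token';

-- what is the server busy with while a file is being processed?
SHOW PROCESSLIST;

Enabling the slow query log (log-slow-queries in my.cnf) is probably also worth a look.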
 
Thanks for your help and advice, and sorry for the late reply. I was busy upgrading to the latest MySQL version (4.1.14) and getting the latest JDBC connector, and my program's performance has improved dramatically. I was using a 4.0.* version, which was quite outdated. Now everything is fine and the queries run smoothly even as the table grows!
 