
How to insert 4GB of binary data into a longblob column.

crazyODBC (Programmer), Jun 7, 2007
I am using unixODBC as the driver manager and the MySQL ODBC driver to connect to a MySQL server; the platform is Linux and I am coding in C++. I created a table with a longblob column and want to use ODBC function calls to insert 4GB of data into the MySQL database. At first I got an error saying the data size exceeded max_allowed_packet, so I raised max_allowed_packet to around 1GB (the maximum that variable accepts). Then I started sending 1GB of data to the server, and it complained "Out of memory (Needed 500000048 bytes)". It seems I can only send about 500MB of data at a time; otherwise the MySQL server raises that out-of-memory error. I am not sure how to get around this problem. Could anyone help? Thanks in advance.
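
For what it's worth, ODBC has a data-at-execution mechanism (SQLBindParameter with SQL_DATA_AT_EXEC, then SQLParamData/SQLPutData) for feeding a long parameter to the driver in pieces instead of binding one giant buffer. A rough sketch, not tested against MyODBC 3.51: the table name blobs and the file handling are made up, some drivers want SQL_LEN_DATA_AT_EXEC(total) instead of SQL_DATA_AT_EXEC, and whether the driver actually forwards the pieces to the server incrementally or just buffers them client-side is driver-dependent.
Code:
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

// Stream a file into a longblob column with data-at-execution.
bool insertBlob(SQLHDBC dbc, const char *path)
{
    SQLHSTMT stmt;
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLPrepare(stmt, (SQLCHAR *)"INSERT INTO blobs (data) VALUES (?)", SQL_NTS);

    // Tell the driver the parameter value will be supplied at execution time.
    SQLLEN lenInd = SQL_DATA_AT_EXEC;
    SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_BINARY, SQL_LONGVARBINARY,
                     0, 0, (SQLPOINTER)1 /* app token */, 0, &lenInd);

    if (SQLExecute(stmt) != SQL_NEED_DATA) {
        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        return false;
    }

    SQLPOINTER token;
    SQLRETURN rc = SQLParamData(stmt, &token);      // driver asks for the data
    if (rc == SQL_NEED_DATA) {
        FILE *fp = fopen(path, "rb");
        static char buf[1 << 20];                   // feed it 1MB at a time
        size_t n;
        while (fp && (n = fread(buf, 1, sizeof buf, fp)) > 0)
            SQLPutData(stmt, buf, (SQLLEN)n);
        if (fp) fclose(fp);
        rc = SQLParamData(stmt, &token);            // signal end of data
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    return SQL_SUCCEEDED(rc);
}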
 
Hi

Absolutely no idea, but what about creating a bit more swap space for the duration of that operation?
Code:
dd if=/dev/zero of=/tmp/extra-swap bs=1M count=1024
mkswap /tmp/extra-swap
swapon /tmp/extra-swap

# do the huge insert

swapoff /tmp/extra-swap
rm /tmp/extra-swap

Feherke.
 
First, be careful about the real maximum size of your BLOB -- according to the manual, that maximum size is OS-dependent.

When I began researching this question, it seemed reasonable to me to expect MySQL to support a mechanism for "chunking" data to and from BLOB fields. Other RDBMSes, I understand, support this. However, it looks as though that's a feature still in development in MySQL.

Have you considered breaking your data into upload-sized chunks, storing the chunks in individual rows in MySQL, with each group of chunk records sharing a common ID and each chunk record in a group carrying a serial number? Your code could then reassemble the data as needed.
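
Something along these lines, purely as an illustration; the blob_chunks table, the 256MB chunk size and the assumption that the data sits in one big buffer are all invented for the example.
Code:
#include <sql.h>
#include <sqlext.h>

// One row per chunk: blob_chunks (blob_id INT, seq INT, chunk LONGBLOB).
void storeChunked(SQLHDBC dbc, SQLINTEGER blobId, const char *data, size_t total)
{
    const size_t CHUNK = 256UL * 1024 * 1024;   // stay well under max_allowed_packet
    SQLHSTMT stmt;
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLPrepare(stmt,
        (SQLCHAR *)"INSERT INTO blob_chunks (blob_id, seq, chunk) VALUES (?, ?, ?)",
        SQL_NTS);

    SQLINTEGER seq = 0;
    for (size_t off = 0; off < total; off += CHUNK, ++seq) {
        size_t len = (total - off < CHUNK) ? total - off : CHUNK;
        SQLLEN lenInd = (SQLLEN)len;
        SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                         0, 0, &blobId, 0, NULL);
        SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                         0, 0, &seq, 0, NULL);
        SQLBindParameter(stmt, 3, SQL_PARAM_INPUT, SQL_C_BINARY, SQL_LONGVARBINARY,
                         len, 0, (SQLPOINTER)(data + off), len, &lenInd);
        SQLExecute(stmt);                       // one chunk-record per iteration
    }
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
}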



Want the best answers? Ask the best questions! TANSTAAFL!
 
Assume I break the 4GB into 512MB chunks; there will then be 8 pieces (rows) sharing a common ID. So the problem of storing the 4GB seems solved. Retrieving the data by that common ID could be another problem, though, because it means pulling the 4GB back across multiple rows, which MySQL will certainly complain about. And 4GB is only the maximum: the user can decide to store 2.5GB, 3.5GB or any other size, so the number of chunks per ID will differ. The serial number for each chunk is not visible to the user; only the application knows it exists. Assume the program constantly starts and stops (object constructed and destructed). There would have to be a file that maps an ID to its related serial numbers (or perhaps another table), so that the program can load this information the next time it starts up. Such a painful amount of extra work. I'm not sure whether other databases have this kind of size limit.
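
For the read-back side you shouldn't need a separate mapping file or table: the serial number column itself records the order, and fetching the rows for one ID ordered by that column (or simply counting them) tells the program how many chunks there are. A sketch, again using the invented blob_chunks table from above, writing the result straight to a file so only one buffer's worth sits in memory at a time:
Code:
#include <sql.h>
#include <sqlext.h>
#include <cstdio>
#include <vector>

void loadChunked(SQLHDBC dbc, SQLINTEGER blobId, const char *outPath)
{
    SQLHSTMT stmt;
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLPrepare(stmt,
        (SQLCHAR *)"SELECT chunk FROM blob_chunks WHERE blob_id = ? ORDER BY seq",
        SQL_NTS);
    SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                     0, 0, &blobId, 0, NULL);
    SQLExecute(stmt);

    FILE *out = fopen(outPath, "wb");
    std::vector<char> buf(1 << 20);             // drain each chunk 1MB at a time
    while (SQLFetch(stmt) != SQL_NO_DATA) {
        SQLLEN got;
        SQLRETURN rc;
        // SQLGetData can be called repeatedly to read one column in pieces.
        while ((rc = SQLGetData(stmt, 1, SQL_C_BINARY, buf.data(),
                                (SQLLEN)buf.size(), &got)) != SQL_NO_DATA) {
            if (!SQL_SUCCEEDED(rc) || got == SQL_NULL_DATA)
                break;
            SQLLEN n = (got == SQL_NO_TOTAL || got > (SQLLEN)buf.size())
                           ? (SQLLEN)buf.size() : got;
            fwrite(buf.data(), 1, (size_t)n, out);
        }
    }
    fclose(out);
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
}

One caveat: depending on driver settings, the client library may still buffer each fetched row in full, so keeping individual chunks modest is probably wise.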
 
Just out of curiosity...why store such a large object in the DB? Is it possible to store the object on a file server and enter the location of the object into the DB as a path to the object? I've seen someone suggest that in this forum before as a more practical solution.
 
I thought of that too, but that is what my boss wants, so I just have to follow his direction. He might want every transaction to go through the database.
 
Could anyone further assist me with how to clear the buffer in the ODBC driver? The reason I ask is that I have a SELECT statement and an INSERT statement which are prepared once and executed multiple times. I use the SELECT to check whether a record already exists and to retrieve data; the INSERT is used to put 512MB of data into the database. But I found that replacing an existing 512MB of data with another 512MB of new data generates errors. So what I try to do is check whether a match already exists in the database and, if so, delete it first before storing the new data. But I still get the following error messages (on the second execution of the SELECT and INSERT):

Out of memory (Needed 536870960 bytes)
tcMrdDbIF (0:Test_01): Error excuting SQL select statement
tcMrdDbIF (0:Test_01): SQL Record Diagnostics: Return Result : 0, SQL_ERROR
tcMrdDbIF (0:Test_01): SQL Record Diagnostics: SQL Native Error : 2008
tcMrdDbIF (0:Test_01): SQL Record Diagnostics: SQL State : HYT00
tcMrdDbIF (0:Test_01): SQL Record Diagnostics: SQL Error Message : [MySQL][ODBC 3.51 Driver][mysqld-5.0.27-standard]MySQL client ran out of memory
tcMrdDbIF (0:Test_01): Error storing data into MRD database
tcMrdDbIF (0:Test_01): SQL Record Diagnostics: Return Result : 0, SQL_ERROR
tcMrdDbIF (0:Test_01): SQL Record Diagnostics: SQL Native Error : 4001
tcMrdDbIF (0:Test_01): SQL Record Diagnostics: SQL State : HY000
tcMrdDbIF (0:Test_01): SQL Record Diagnostics: SQL Error Message : [MySQL][ODBC 3.51 Driver][mysqld-5.0.27-standard]General driver defined error

So I was wondering whether it might be that the buffer stored in the driver didn't get cleaned up after the previous SELECT and INSERT calls.
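
Hard to say without seeing the code, but "clearing the buffer" between executions usually means closing the cursor and dropping the bindings on the prepared statement handles, roughly like this (hSelect and hInsert stand for whatever handles your code keeps):
Code:
#include <sql.h>
#include <sqlext.h>

// After you have finished fetching from the prepared SELECT:
void finishSelect(SQLHSTMT hSelect)
{
    SQLCloseCursor(hSelect);                 // or SQLFreeStmt(hSelect, SQL_CLOSE)
    SQLFreeStmt(hSelect, SQL_UNBIND);        // release bound column buffers
}

// After each execution of the prepared INSERT:
void finishInsert(SQLHSTMT hInsert)
{
    SQLFreeStmt(hInsert, SQL_RESET_PARAMS);  // release bound parameter buffers
}

Also, the "MySQL client ran out of memory" message normally comes from the client library storing the entire result set; if I remember right, the MyODBC 3.51 connection options include a "Don't cache result (forward only cursors)" flag that makes it stream rows instead, which may matter more here than any leftover buffer.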
 
I appreciate that this is an old thread, but I have a similar issue, which is how I stumbled upon it.

Have you (the OP) tried storing the file in the filesystem and then inserting it into the database with a LOAD DATA INFILE query?

On the way back, you would use a SELECT ... INTO OUTFILE and then deliver the file (I assume this is the ultimate intention) using some sequential read/write method like readfile or fpassthru.

Alternatively, rather than storing the file in sequential rows, could you not simply write to the same row/column through successive UPDATEs? On the way back, I think you'd still need the above method, or some kind of SELECT with SUBSTRING and sequential writes to a file handle.
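
The successive-UPDATE idea would look roughly like the statements below (table and column names invented). Each appended piece still has to fit inside max_allowed_packet, and the server rebuilds the whole value on every append, so it may get slow near 4GB; SUBSTRING works for the read-back because its position and length arguments apply to binary strings too.
Code:
// First piece:   INSERT INTO blobs (id, data) VALUES (?, ?)
// Later pieces:  append with CONCAT()
static const char *APPEND_SQL =
    "UPDATE blobs SET data = CONCAT(data, ?) WHERE id = ?";

// Read back slice by slice; SUBSTRING(str, pos, len) uses a 1-based pos.
static const char *READ_SLICE_SQL =
    "SELECT SUBSTRING(data, ?, ?) FROM blobs WHERE id = ?";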

 