
How to import from a DBF bigger than 1.8 GB


Rajesh Karunakaran

Programmer
Sep 29, 2016
Dear all,

I have a table at a client's place which has grown to around 1.8 GB. When I try to import from it into the new database structure, I get a "size too big" error, for obvious reasons. Is there any way / workaround by which I can import from it?

Rajesh
 
Well, as I'm sure you know, simply importing it into a new database won't solve the size problem. Are you perhaps planning to reduce the number of fields or the size of the fields at the same time?

You could try copying it to two smaller DBFs, reducing the size of those two files (by cutting down the number / size of fields), and then recombining them.

Or perhaps copy it to a CSV file and work on that to reduce its size.
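
For example, something along these lines (only a rough sketch; the field and file names are placeholders to adapt to your own structure):

    USE bigtable
    * Split the fields across two smaller DBFs (keep the key in both so they can be recombined)
    COPY TO part1 FIELDS id, field1, field2, field3
    COPY TO part2 FIELDS id, field4, field5, field6

    * Or export the lot to a CSV file and work on that instead
    COPY TO bigexport TYPE CSV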

In the longer term, you may need to redesign the entire table to avoid the problem arising again.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Dear Mike,

I think I had not explained myself well, so your question is understandable.

I have a new version of the table, in fact with more fields than the one I am trying to import from. Usually I simply do a ZAP on the destination table and import from the source.

I know I will have to redesign the table by normalising it, breaking specific-purpose fields out into separate tables, etc. But I cannot do that straight away. That's why I was wondering if there is any way to get the import done somehow!

Rajesh

 
Well, you surely can't import all of the existing DBF into a table with more fields, which would hit the 2GB limit even earlier than the original table does.

You could, however, import any subset of the 1.8GB table into new DBFs with subsets of the fields.

I wonder why your first step must be to import everything into a larger set of fields. You can always design your normalized final target tables and import data into them step by step. Maybe you need to explain the problems you face with such an approach, but in the migrations I have done, I often used the original source data for multiple target tables, reading the original table several times as the origin of the data, instead of first designing an intermediate table to accumulate all the data in.

And even if you have to, you could work portion-wise.
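
Something along these lines, say (purely illustrative; the field names and the WHERE clause are just examples of picking one field subset and one portion at a time):

    * Pull only the fields one target table needs, one portion at a time
    SELECT id, custname, custaddress ;
        FROM bigtable ;
        WHERE YEAR(orderdate) >= 2015 ;
        INTO TABLE customers_part1

    * Repeat with other field subsets / other portions for the other target tables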

As we recently talked about VFPA, you know that even in its current version it allows the FPT file to outgrow the 2GB limit. That could help, perhaps, if you make use of it and change fixed-length char fields, and also any varchar fields you might have, into memo fields.

Well, of course, you could also work with SQL Server or any other database for the intermediate steps, to have more freedom with the intermediate tables you need for the overall transformation process of your data normalization; VFPA isn't the only alternative allowing larger data sets. I don't know why you would need even larger intermediate tables, though. Just notice that the theoretical concept of normalizing data in steps, starting from a denormalized form, is something you only need to do conceptually, not for real. You can then make the necessary transformations from where you are now to the final normalized data structure without intermediate steps that blow up the need for storage space.

Chriss
 
Another way would be to import it in two halves, the first 50% into tableA, the last 50% into tableB
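
A rough sketch of the idea (assuming the DBF can be opened on its own; the names are placeholders):

    USE bigtable
    nHalf = INT(RECCOUNT() / 2)

    * First half of the records
    GO TOP
    COPY TO tableA NEXT nHalf

    * The record pointer is now on the last record copied, so move on one
    SKIP
    COPY TO tableB REST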


Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

There is no place like G28 X0 Y0 Z0
 
Rajesh,

I understand that this is an immediate problem, and you don't have time just now to think about normalising the data, or moving to a back end like SQL Server, or taking other similar drastic action. But there are surely still things you can do to reduce the size of the table.

Does the table contain any particularly large character fields? If so, are you sure you need the full width of those fields in every case? Would it pay to make some or all of them into memo fields? (And don't forget that PACK MEMO can eliminate any bloat in the memo file.) Those actions would not require any changes to any of your code.
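
For example, something like this (only a sketch; take a backup and open the table exclusively first, and the field name is just a placeholder):

    USE bigtable EXCLUSIVE
    * Turn an oversized character column into a memo field
    ALTER TABLE bigtable ALTER COLUMN remarks M
    * Squeeze any bloat out of the associated memo (FPT) file
    PACK MEMO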

Or how about splitting the table vertically? In other words, you make it into two tables, with some of the fields in one table and the rest in another, then use SET RELATION to establish a one-to-one relationship between them.
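
In code, that might look roughly like this (just a sketch; "id" stands for whatever unique key the table has, with an index on it in the second table):

    * part1.dbf holds some of the fields, part2.dbf the rest; both carry the key "id"
    USE part2 IN 0 ORDER TAG id
    USE part1 IN 0
    SELECT part1
    SET RELATION TO id INTO part2

    * Fields from both halves can now be used together
    BROWSE FIELDS part1.id, part1.custname, part2.remarks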

You also mentioned that the new table will contain more fields than the existing one. If that's so, I can't see how that would work. If the existing table is near the 2GB limit, surely the new one would go over that limit?

Mike



__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
I think I didn't understand your situation. Is the new table structure, with more fields, for a new version of your software that needs additional fields? If so, for the sake of simplicity I'd suggest splitting horizontally, i.e. having two tables into which you import 50% of the data each.

I like Mike's idea of a vertical split with a relation, but a horizontal split of the data is easier to cope with in terms of not needing code changes. At least, that's true if you can manage to split up the data so you normally only need one of the two tables: the one holding the most recent and most used data, with data that's old and rarely used going into the second table. If it doesn't work out to partition the data that way, you can use UNION queries instead. That needs new code, but is perhaps more manageable than doing the normalization into a new table structure.
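
For the places where you do need both tables at once, such a UNION could look roughly like this (a sketch with made-up names; both tables share the same structure):

    * recent.dbf holds the newer data, archive.dbf the old, rarely used data
    SELECT * FROM recent ;
        UNION ALL ;
        SELECT * FROM archive ;
        INTO CURSOR crsAll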

The other logical vertical split is at the new fields, i.e. you keep the old table structure and add the new fields to a second table that only shares the primary key as a foreign key. That's taking Mike's idea, but very concretely splitting off the new fields. The advantage is that old records stay as they were; you don't have data for them in the new fields anyway. So only when you add new data or modify old data would you add a secondary record in the second table.

All of this is under the assumption that the new fields are for new functionality of your new software version, so you should also know very well what you added in code to take care of these new fields. Only that part of the software would need a further update to join the secondary record in, where the new fields are used. Old parts of the software not using the new fields can stay as they are, and also don't need to change from the old usage of the table to a UNION query.
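
The join for the new parts of the code could look roughly like this (names are placeholders only):

    * oldtable.dbf keeps the original structure;
    * newfields.dbf has just the primary key plus the new columns
    SELECT o.*, n.newfield1, n.newfield2 ;
        FROM oldtable o ;
        LEFT JOIN newfields n ON o.id = n.id ;
        INTO CURSOR crsFull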

Chriss
 
Dear Mike, Chriss, Griff,

Thank you all.

Yes, I don't have time now to normalize or reconstruct the whole database structure. The new version, with many changes, is already in place. But, yes, I won't simply use the new structure and import data from the current huge table into it. Instead, I am going through the fields and, wherever possible, reducing the field sizes (as Mike first suggested). I just found that it has come down to 1.4 GB.
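
For example, a change of that kind can be made in code roughly like this (just a sketch with placeholder names; exclusive access and a backup are needed first):

    USE bigtable EXCLUSIVE
    * Shrink an oversized character field to the width actually needed
    ALTER TABLE bigtable ALTER COLUMN custname C(40)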

Rajesh
 
Getting it down from 1.8 to 1.4 sounds like a good result. Of course, you will need to keep an eye on it for the future. If the table starts to grow again, you will eventually reach the limit. But with luck you will now have time to think about a longer-term solution.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 

Hi,

VFPA 10.1 supports files over 2 gigabytes:

Maximum size of a file: 2048 TB.
Maximum # of records per table file: 1 billion
Maximum size of a table file (or cursor): 1 billion * RECSIZE(), if RECSIZE()=65500 (maximum), it is 65 TB.
Maximum block of an FPT file: 2 billion.
Maximum size of an FPT file: 2 billion * SET("BLOCKSIZE"), if SET("BLOCKSIZE")=64 (default setting), it is 128 GB.
Maximum size of an index file: 4 GB (double that of VFP 9.0).


Best regards,

Pieter
 