
Why Are My Memo Files 2GB?


jjjt

Technical User
We have a problem where at times our users experience an error saying: "There Is Not Enough Disk Space For....."
The message ends with the name of a memo file.

As it happens, there is plenty of space for the memo file (over 90GB free), but the memo file itself is just over 2GB. The corresponding table has 8701 records, with 13 memo fields amongst the other fields.

After a PACK the memo file goes back to being smaller than the table.

Why is the memo file getting so large? I take it there is a 2GB limit on memo files, which is causing the original error?
 

Take a look at thread1252-990207

Mike Gagnon

If you want to get the best response to a question, please check out FAQ184-2483 first.
 
But how can they be getting to 2GB in the first place?

When I did a PACK, the Memo File went from over 2GB in size to 326,436KB. I assume that the data stored in the Memo File represents 326,436KB; what was the rest taken up with?

 
jjjt

The memo file format is somewhat inefficient, and memo files suffer from 'bloat' unless the contents of each memo remain fairly static and relatively small.

If you are using them to store binary info (images for example) they can become very big indeed.

There is no easy way around this, unless you use some kind of third party software or try to minimise mods and adds to the memo fields.

Why not take a look at the contents and see if there is an alternative way of holding the data - separate files, perhaps, with a 'pointer' to the location instead?
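Something along these lines, perhaps - just a sketch, and the table, field and folder names are all invented (STRTOFILE()/FILETOSTR() need VFP 6 or later):

[code]
* Sketch only - names are invented for illustration.
* Write the memo text out to its own file...
lcNoteFile = SYS(2015) + ".txt"        && SYS(2015) gives a unique name
STRTOFILE(notes, "c:\notes\" + lcNoteFile)
* ...and keep just the 'pointer' in the table.
REPLACE notefile WITH lcNoteFile
* Read it back when needed:
lcText = FILETOSTR("c:\notes\" + notefile)
[/code]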

Regards

Griff
Keep [Smile]ing
 
These memo fields only hold text. No pictures or anything like that.

This would mean over 1.5GB of "bloat", which would make it incredibly inefficient!

Surely it cannot be right?
 
One thought - we never had this problem before we started to use v8 of VFP. Could this be it?

 
Hi jjjt...

That doesn't sound right, does it!

A few questions:

How many records, how many memo fields per record, what is the average size of each memo field, and does your code keep modifying the memos?

I can't speak for VFP 8, I'm on 5/6 waiting for the retail version of 9 to upgrade!

Regards

Griff
Keep [Smile]ing
 
The table we are looking at has 171,000 records and there are 13 memo fields per record, though the vast majority of them contain no data. The average size will vary depending on how the user is using the memo fields and can range from a couple of words to a page of notes - I would say that in most cases, data will be no more than 20 words, and, as I say, this would only be on a small subset of the 171,000 records.

The code does modify a number of the memos, and we're looking into that now. In theory, we don't edit anything enough to introduce the amount of bloat we are seeing. However, if, each time a memo is edited, a small amount of bloat is introduced, perhaps our code edits a memo field within a loop that runs too long and this magnifies the "bloat" to the proportions we are seeing.
 
Let me test that theory for a moment; I'll make a quick dbf/dbt and mod the memo a few times...

Regards

Griff
Keep [Smile]ing
 
I made a single record dbf, and looped through it 10,000 times replacing the memo field with a space, then a long string, then a single space again.

I've done this a few times... now my .fpt file is 61MB

Still just one record and a very short memo... but a LOT of wasted space!
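For what it's worth, the test was roughly this - a sketch from memory rather than the exact code:

[code]
* Sketch - reproduces the bloat with a single record.
CREATE TABLE bloated (memo1 M)
APPEND BLANK
FOR lnI = 1 TO 10000
    REPLACE memo1 WITH " "
    REPLACE memo1 WITH REPLICATE("x", 1000)
    REPLACE memo1 WITH " "
ENDFOR
* Each write that doesn't fit in the old blocks gets fresh blocks
* added at the end of the .fpt - the old ones aren't reused.
[/code]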

Regards

Griff
Keep [Smile]ing
 

Memo fields are a bit like Excel workbooks. Create a workbook, add one line to the first worksheet, and you are left with a "big empty box" containing one line of text. It might be time to re-think the use of memo fields.

Mike Gagnon

If you want to get the best response to a question, please check out FAQ184-2483 first.
 
We've been experimenting and getting exactly the same results Griff has seen - carrying out a lot of edits on Memo fields drastically increases the size of the Memo file.

Thanks to everybody for their help, we've learnt some stuff about memo fields...
 
I THINK it might be worth varying your 'block size' setting and recreating your table with the memo fields. It'll be wasteful initially - but worth it after a few edits!
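Something like this, perhaps (a sketch - the table name is invented, and note that SET BLOCKSIZE only affects memo files created after it is issued, hence the COPY TO):

[code]
* Rebuild the table so the .fpt is written with a new block size.
USE mytable EXCLUSIVE
SET BLOCKSIZE TO 33     && 1-32 = multiples of 512 bytes; 33+ = that many bytes
COPY TO mytable2        && the new .fpt is built with the new block size
USE
* ...check mytable2, then swap the files over.
[/code]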

Regards

Griff
Keep [Smile]ing
 
It may also help to reduce the memo file by PACKing the table once in a while.
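e.g. something like this (table name invented; PACK needs the table opened exclusively, and PACK MEMO will rewrite just the .fpt if that's all you need):

[code]
USE mytable EXCLUSIVE
PACK       && rewrites the .dbf and .fpt, dropping the dead memo blocks
USE
[/code]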

hope this helps. peace! [peace]

kilroy [trooper]
philippines

"If the automobile had followed the same development cycle as the computer, a Rolls-Royce would today cost $100, get one million miles to the gallon, and explode once a year, killing everyone inside."
 
A PACK does reduce the size of the Memo file by a huge amount - we have a reindex and pack utility in our app, and recommend that users run it regularly, but more often than not they seem to ignore the advice. Horses and water and all that... In any event, we know now that a PACK will stop the error from occurring again for some time at least.

With regards to the 'block size', we did consider changing it yesterday, but it's set at 64. From what I understand, this should be comparatively efficient - more efficient than having it set at 0. Is that right, or have I misunderstood? Given that it would be a pretty big job for us to get all our users' tables recreated, we decided that, on balance, it would be best to leave the block size at 64 and continue looking at other options.

At the moment, whilst it seems pretty clear that editing memos can drastically affect the size of the Memo file, I am still struggling to envisage our users editing enough to create a Memo file that is 2GB in size. As I say, we followed Griff's lead and wrote a quick prg to expose the behaviour, but that edited 13 memo fields several times against 190,000 records. We had to run the prg several times before our Memo file hit the 2GB limit.

I just can't see our users manually editing on that scale, so the line we're going down at the moment is that there is a mistake somewhere in our code that programmatically edits the Memo file repeatedly...

 
I didn't try it, but I suspect that even replacing the value in a memo field with the same value (without actually changing it) may well lead to this level of redundancy/wastage if the value is bigger than the block size... the .fpt format is far from perfect!
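A quick way to test that suspicion might be something like this (sketch only - table and field names invented):

[code]
USE mytable EXCLUSIVE
=ADIR(laBefore, "mytable.fpt")
REPLACE memo1 WITH memo1         && 'replace' with the identical value
FLUSH
=ADIR(laAfter, "mytable.fpt")
? laBefore(1, 2), laAfter(1, 2)  && column 2 of ADIR() is the file size
[/code]

If the .fpt grows between the two sizes, the rewrite is taking fresh blocks even though nothing changed.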

Regards

Griff
Keep [Smile]ing
 
That's our suspicion, too. If we get anything concrete, I'll let you know.
 
That suspicion is correct. It's explained in my article that I referred to earlier in this thread.

Craig Berntson
MCSD, Visual FoxPro MVP, Author, "CrysDev: A Developer's Guide to Integrating Crystal Reports"
 
jjjt,

I suggest you normalize your database.

Bren
System Analyst - Philippines
 
