
Live table back up 2

Hi Keith,

You can't copy a file while it's open; in other words, COPY FILE won't work on an open table in VFP. But there's nothing stopping you from doing something like this:

SELECT * FROM My_Open_Table INTO TABLE My_Backup_Copy

Keep in mind that this will only copy the data. It won't copy any indexes, and it won't preserve long field names or other DBC properties. There are ways of doing that, if that's what you want.
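A sketch of what that might look like with the index recreation added; the tag and field name cust_id below are illustrative, not from the original table:

```
* Copy only the data of the open table into a new free table
SELECT * FROM My_Open_Table INTO TABLE My_Backup_Copy

* SELECT ... INTO TABLE copies no indexes, so recreate
* any tags you need (tag/field names are illustrative)
SELECT My_Backup_Copy
INDEX ON cust_id TAG cust_id
USE IN My_Backup_Copy
```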

Does this help at all?

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro tips, advice, training, consultancy
Custom software for your business
 
I am creating the copy table, creating a zip file from it and then I need to delete the created table. How do I get round the 'file in use' scenario?
Code:
DELETE FILE Z:\BAK\ART_DAY.DBF
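One way round the 'file in use' error, as a sketch: close the work area the backup table is still open in before deleting the file. The alias name ART_DAY is an assumption here.

```
* A table can't be deleted while it is open in a work area,
* so close it first (the alias name is assumed)
IF USED("ART_DAY")
   USE IN ART_DAY
ENDIF
DELETE FILE Z:\BAK\ART_DAY.DBF
```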

Keith
 


FWIW, you can copy a file that is open by using the Windows API:

DECLARE INTEGER CopyFile IN kernel32 ;
    STRING lpExistingFileName, ;
    STRING lpNewFileName, ;
    INTEGER bFailIfExists

= CopyFile(DBF(), "tablecopy.dbf", 0)
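The Win32 CopyFile function returns zero on failure, so it is worth checking the result; a sketch:

```
* CopyFile returns 0 on failure, non-zero on success
IF CopyFile(DBF(), "tablecopy.dbf", 0) = 0
   * Zero means the copy failed, e.g. because the
   * file is opened exclusively elsewhere
   MESSAGEBOX("Backup copy failed", 16, "Backup")
ENDIF
```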


but Mike's SELECT ... INTO TABLE is probably the better option here, because a raw file copy keeps the table's backlink pointer to its DBC, which leads to backlink issues.

But be aware that it will truncate any long field names.



nigel
 
SELECT ... INTO TABLE does the job very well.
Long field names are not an issue: coming from a DOS background, I never use field names with more than 8 characters :)

Keith
 
Chris,

try it.

So long as you don't have the table open EXCLUSIVE, the CopyFile API will copy it.

nigel
 
Chris,

Using an application with all, or even only some, tables opened EXCLUSIVE doesn't sound sound. It would only work in a single-user app, or during maintenance for ALTER TABLE, PACK or REINDEX. And EXCLUSIVE is only needed for those maintenance tasks.

But for a hot backup it surely won't work anyway: neither COPY FILE nor the API CopyFile gets access. Exclusive means exclusive; no one else can even read the file.

Bye, Olaf.
 
Olaf

I use EXCLUSIVE in single user apps so that maintenance is inbuilt, automatic, and shielded from the user.

In such situations SHARED tables would therefore be a disadvantage, and automatic maintenance has a greater priority for me than facilitating a live backup.

FAQ184-2483
Chris
PDFcommander.com
motrac.co.uk
 
Chris,

For a single-user app that's a valid approach, as you also keep out virus scanners. Virus-scanning a DBF doesn't make much sense anyway: as long as it's really a DBF, there are few bytes a virus could occupy without being detected as either a header defect or invalid data.

I also think an external live backup always lacks one thing: you never know about pending write operations in buffers. Doing the backup from inside the application itself is easier in that respect, as you can do it when everything has been saved from table and view buffers.

I'd rather recommend creating a backup database via CREATE DATABASE ... and using COPY TO ... DATABASE backup instead of SELECT ... INTO TABLE, as it can do two things: a) copy with long field names, and b) via the WITH CDX option, also save the indexes. Yes, you can recreate indexes, but you need to know which ones exist, so you'd need to store at least some extra metadata about the tables; the index expressions are stored in the CDX file itself, and if that is corrupt, you can't get at them to repair the CDX.
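A sketch of that approach; the database, table and path names are illustrative:

```
* Create (or open) a backup database and copy the table into it,
* keeping long field names and the structural CDX
CREATE DATABASE Z:\BAK\backup
USE mydata!mytable IN 0 SHARED ALIAS mytable
SELECT mytable
COPY TO Z:\BAK\mytable DATABASE backup WITH CDX
USE IN mytable
```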

That said, it's of course a good idea to store some metadata about the database, tables, views, indexes, etc. One easy way to do this is to create a program able to reproduce the database structure: GENDBC generates such a program.
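GENDBC ships with VFP in the Tools folder; assuming a default installation layout (the paths and database name here are assumptions), it can be run against the current database like this:

```
* Generate a program that can recreate the current database's
* structure, including tables, indexes and relations
OPEN DATABASE mydata
DO (HOME() + "Tools\Gendbc\gendbc.prg") WITH "mydata_struct.prg"
```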

@audiopro: I wouldn't be too proud of only using short names and DOS-style tables just because of coming from way back and being used to it. VFP tables and DBCs have great features, and you throw away several good things if you don't make use of them. Even if you use short names in DBC tables and make use of referential integrity, default values and such, the one valid reason for long names is that they better self-document what the field is storing.

I, for example, am a big fan of long, descriptive table and field names. The best project documentation tends to be the program itself, as it's the only thing you can't avoid changing when you change the code or add a feature. You can of course also document what your short names mean, but you need to be very diligent to keep that in sync with the data and code.

Bye, Olaf.
 
Sorry, my keyboard seems to have swallowed quite a few keystrokes; quite a lot for a single post.

Bye, Olaf.
 
Thanks Olaf, I didn't mean to dismiss long field names as useless, but I do very little with VFP these days and feel it is better to stick with what I know in order to keep these long-established apps running.

My client asked me to add a few extra features to an app which I wrote many years ago and which has been in daily use ever since. I needed up-to-date copies of his data, and the files could not be emailed as they were over the ISP's limit. I set up an FTP link so he could put the data onto the web server and I could get it from there, and I hit on the idea that it would be useful to be able to back up to the web server in addition to his scheduled backups each evening.

After discussing it with him, we have decided to do it as a stand alone app which will back up all the tables, indexes and .fpt files.

I know I have moved the goal posts, so, with the restriction of live backups removed: is COPY FILE for the .dbf, .cdx and .fpt files a reliable method, or is there a better way to do it?

Keith
 
Well, there are no hidden bits or alternate streams in DBF files, so COPY FILE is fine for copying FoxPro data as a backup. It's just that "live", during working hours, means things can change due to inserts and updates while you copy. For example, if you copy the .cdx file after the .dbf, there might be rows missing from the CDX in the backup, and that is hard to detect in a restore. So it's better not to do a live backup this way, but to use an approach like COPY TO, which creates a copy of DBF, CDX and FPT at once and so at least keeps these three in sync.

There is still the similar problem of parent/child tables. When backing up parent tables first, you might get records in the backup child tables that don't reference a parent record in the backup parent table, if those records were added during the backup.

That's what's so difficult about a live backup: you can do it from inside the app rather than from outside. Inside, you could, for example, hold all data changes via a setting during the backup; from outside you can't see what's pending in buffers. That is even a problem for backups in multi-user environments, as you can only know and see the buffers of one of the clients.
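From inside the app, the pending-buffers problem can at least be reduced by committing and flushing before taking the copy; a sketch, with mytable as an illustrative alias:

```
* Commit any buffered changes for the alias, then flush
* VFP's write cache to disk before taking the copy
IF TABLEUPDATE(2, .T., "mytable")
   FLUSH
   SELECT mytable
   COPY TO Z:\BAK\mytable WITH CDX
ELSE
   * Commit failed (e.g. update conflict); don't back up stale data
   MESSAGEBOX("Could not commit pending changes", 48, "Backup")
ENDIF
```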

To round it up, I may point you to a piece of software named "Live backup", written in FoxPro. It doesn't solve the problems I pointed out, but you may use it as a starting point anyway:


Bye, Olaf.
 