
Re-issuing an application - Checking for table updates.


AndrewMozley

Programmer
There is an application which mainly consists of a startup program, several function libraries, a menu system, a set of forms and reports, and a database made up of several tables.

The startup program, forms, function libraries and menus are bound together into an executable (say) myapp.exe. The report layouts are not bound into myapp.exe, but issued as a folder (say) appreports which is available on the path at run-time. The reason for this is that several of the reports can be customised by the user; these modified report layouts are available in a folder locreports which is also available in the path at run time.

Re-issuing the system.

As time goes by, errors are discovered in the system and new facilities are added. The system is then re-issued to users (in other locations): myapp.exe and the appreports folder are replaced.

We also need to ensure that the layout of the user's tables is up to date with the release. Maybe new fields have been added to some of the tables, or the length of some fields has changed. At present this is handled by re-issuing the updated .dbc database container together with a set of skeletal tables; a menu option, updatedata.scx, then compares the layout of the new skeletal tables with the user's tables and updates the user's tables as necessary. Perhaps this process could instead be handled by checking the user's data files against the database container?
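
Roughly, the field comparison that updatedata.scx performs looks like this (a simplified sketch only, not the actual code; Model stands for a skeletal table and LiveData for the user's table, both already open):

Code:
*  Simplified sketch of a structure comparison; 'Model' is the skeletal
*  table, 'LiveData' the user's table, both already open.
LOCAL laModel[1], laLive[1], lnI, lnPos
AFIELDS(laModel, "Model")        && columns: name, type, width, decimals, ...
AFIELDS(laLive, "LiveData")
FOR lnI = 1 TO ALEN(laModel, 1)
   *  Search column 1 (the field name); the 8 says 'Return the row number'
   lnPos = ASCAN(laLive, laModel[lnI, 1], 1, -1, 1, 8)
   DO CASE
   CASE lnPos = 0
      ? "Field missing in user table: " + laModel[lnI, 1]
   CASE laLive[lnPos, 2] <> laModel[lnI, 2] ;
         OR laLive[lnPos, 3] <> laModel[lnI, 3]
      ? "Type or width differs: " + laModel[lnI, 1]
   ENDCASE
ENDFOR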

One requirement which I need to cope with is handling the case where the indexes in any of the .cdx files have changed: a new index may have been added, or an existing key may have changed. In the present case a filter, !Deleted(), has been added to an existing index.

What would be the best way of handling such changes to an application at the time it is re-issued to the user? I wonder whether the database container (.dbc, .dct, .dcx) contains information which can be read to help apply the changes to each relevant table in the user's data folder.

Some sample code would be very helpful.

Thanks. Andrew
 
Open the DBC as a table: USE yourdatabase.dbc
Then you can see what the DBC stores about the tables: long field names (which are not stored in the DBF headers), index definitions (partly redundant with what is in the CDX header) and some more things.
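
A quick sketch of that (the database and alias names are placeholders):

Code:
CLOSE DATABASES ALL
USE myapp.dbc ALIAS dbcmeta SHARED   && the DBC is itself a DBF
*  Tables, fields and index tags are rows in here:
SELECT objectname, objecttype FROM dbcmeta ;
   WHERE objecttype IN ("Table", "Field", "Index")
USE IN dbcmeta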

In short, you have data about the tables: meta-data. The exact details are not important, but if you replace a DBC/DCT/DCX with the set of files from your developer database, that does not give you a valid version update of the DBC. You get problems with the DBFs.

You can only replace the DBC/DCT/DCX files after you have altered the DBFs (and their FPT and CDX files and whatever else belongs to them). And the irony of it is, if you do that, you have already updated the DBC, etc. So that's one way of updating your application database: running an SQL script of all the necessary ALTER TABLE statements to change table structures, accompanied perhaps by a few CREATE TABLE statements for new tables, some INDEX commands, and so on.
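
Such a script could look like this; all table, field and index names here are invented for illustration (the commands need exclusive access to the tables):

Code:
*  Sketch of an upgrade script with invented names.
OPEN DATABASE myapp.dbc
ALTER TABLE customer ADD COLUMN email C(60)      && new field
CREATE TABLE auditlog (logdate T, username C(20), action C(40))
USE customer EXCLUSIVE
INDEX ON UPPER(email) TAG email FOR !DELETED()   && filtered index
USE
CLOSE DATABASES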

The crux is that you usually have done all these steps without such commands: you opened your tables and modified them interactively, so you have no record of what script should run.

The way to get DBC and DBFs in sync is to overwrite everything with your development database. Well, that's a joke, obviously, because then you overwrite the application data the users created through their use of the application. That's the actual hard problem: you can't solve a database update just by copying and overwriting files.

This is like a heart operation. What's often described as the simple solution is to add a temporary folder with the new DBC and empty tables, then append all data into it from the old tables. That still has some issues. If you remove a field, for example, the append may still work, silently dropping that one field's data while matching the rest by field name. Changes can involve complicated structural rearrangements which are not solved just by appending. You're effectively asking for a data migration, merely staying within the same database system.
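
To make the append approach concrete (a sketch; folder and table names are invented):

Code:
*  'newdata' holds the new empty database, 'olddata' the user's tables.
OPEN DATABASE newdata\myapp.dbc
USE newdata\mytable EXCLUSIVE
*  APPEND FROM matches fields by name; a dropped or renamed field
*  silently loses its data here.
APPEND FROM olddata\mytable.dbf
USE
CLOSE DATABASES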

Every time you make a change in development without noting it down precisely so it can be repeated on the live system, you add to your so-called technical debt: you stack up changes whose exact nature you can easily forget, yet you still have to apply them to other people's data, which is not the same as your data. To describe one possible change in detail: you can't add a field for the customer by copying your DBF. That does add the field, true, but at the same time it overwrites their data with yours. You have to apply the same modification, adding that field, to the user's database table.

You're asking for a tool that can pay off all your technical debt, aren't you? Well, the one I know is the Stonefield Database Toolkit. It can take one DBC and another and create a script of SQL that performs the alterations, but will it work for every situation? I have to disappoint you: it isn't magic either. You always need to plan ahead, or else you will work twice as hard to resolve the technical debt you have accumulated. What I can guarantee is that this tool will find structural differences between tables of the same name in DBC version 1 and version 2, and that can be used to adjust them. That's not the hard problem, though. It becomes more problematic if, for example, three tables now hold data that previously lived in two. So you can get good help from tools, but either you are lucky and your changes stay within the bounds of what is easily recognisable, or you only have a partial solution.

Even in the simple case where one new field is added: what has to be put into this new field? Will it be null for all existing records and only be filled in when new data is entered? Or do all old records get a default value there, or a value computed from other fields of the table? Those things should be on your mind whenever you make a change, at the moment you decide on it for your development database, because logically you will have to make the same changes to the live database of the application in production.

Just some thoughts on that. Your situation might not be complicated at all, and the approach of adding a new empty database with empty tables and copying all data over from the old tables by APPEND may solve it. I just warn you that many things may seem to work, while problems are only detected much later, when the new database already contains lots of new data important to your customers but old data is missing. And then you have to do patchwork.

Chriss
 
By the way, the Stonefield Database Toolkit could have done the planning ahead for you, had you already had it while you developed your application. It can implement database events and triggers that record which fields you added or removed, and it can produce a database upgrade script that reflects each structural change you made, in the order you made them, which can also be important. It doesn't even forget to record changes done to views.

As said already, the toolkit can also compare two DBC versions and report what changed, but then it isn't able to see in which order the changes were made and has to treat them as if they were done in parallel.

Chriss
 
Thank you for your replies, Chriss. I was aware of the Stonefield Database Toolkit, although it was not used in the development of this application.
 
Well, then you have the two major options. You can create a parallel directory with an empty database; to create that, you can copy your development database and ZAP every table. Or you get the Toolkit and let it create an update script from your development database and a database as it is in production.

By all means test the upgrade process. Then you have to figure out whether there are essential differences from your working development database, not only structural differences in tables (there likely won't be any), but also because the upgrade process does not fill data into new fields or new tables which are essential for the application to work. You might detect some problems by testing the application itself with the upgraded data, but some missing pieces can lurk in seldom-used forms or features. One good check is whether columns are entirely empty in the upgraded database while they are not empty in the up-to-date development database.
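
That check can be scripted, for example like this (a sketch; the table to check is assumed to be open as the current alias):

Code:
*  Report columns that are entirely empty in the current table.
LOCAL laFields[1], lnI, lnNonEmpty
AFIELDS(laFields)
FOR lnI = 1 TO ALEN(laFields, 1)
   COUNT FOR !EMPTY(EVALUATE(laFields[lnI, 1])) TO lnNonEmpty
   IF lnNonEmpty = 0
      ? "Column " + laFields[lnI, 1] + " is entirely empty"
   ENDIF
ENDFOR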

Otherwise, an upgrade can simply be the overwriting of files, by the way. In very simple situations, where the actual application is only an EXE and perhaps a config file in the form of an INI, XML or DBF file, it's mainly just overwriting the EXE.

You talk of a locreports folder; that would, of course, need to be left untouched. In general you have to protect whatever should not be overwritten, for example by moving it elsewhere, upgrading, and then putting it back.

That's the case when you mirror a working application directory to the installation. The mirror source will have an empty locreports directory, so the mirror operation would delete all the modified reports the customer made in their locreports subfolder. So copy that folder to a directory parallel to the installation directory, or into a folder in TEMP; then mirror the application and move the locreports directory back to where it was.
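
If you mirror with ROBOCOPY, excluding the directory achieves the same protection without the move-aside dance (a sketch with invented paths; directories excluded with /XD are not purged by /MIR):

Code:
*  Mirror the application folder but leave locreports alone.
RUN robocopy "\\server\deploy\myapp" "C:\myapp" /MIR /XD locreports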

Instead of mirroring you could choose a "copy if newer" option; then nothing is deleted from the installation directory just because it is missing from the update source directory. If anything does need to be purged, you can do that as an aftermath of the update.

Of course, the option to overwrite what's newer does not work for the database. I assume the database is in a separate directory somewhere the users have network access to, and how to upgrade that was already the topic of my previous answer.

And then, your update could be done by a setup.exe, if you use InstallShield or another such product. If you saved and kept the project used to create the initial setup, it has options to create a setup with an update or "repair" option. You can "teach" it what to do, as a repair/update operation, when the application is already installed. The details differ by what you use to create a setup.

Chriss
 
Hi Andrew

I put a table in each application which holds parameters, one of which is the database version. It's the same table in all my apps, xxxPM.dbf, where xxx is an acronym for the app (FCT for Finecosts, WRK for a workshop app etc.).

The first record, physically, holds the database version, a simple string that needs to match the version within the application .exe. So if the .exe is version 1.60 P then the database needs to be 1.60 too. The sub-version P allows me to make mods to the exe without updating the structure of the database, and the numbering scheme also stops people with old versions of the exe from accessing the data (exe ver 1.40 cannot access a database of version 1.60).

When the .exe opens the table and finds the database version out of date, I present a form showing the progress of updating the database: adding tables, fields and indexes, processing records, all that good stuff, from one version to the next until it is fully up to date.

The code to do this looks like this:

Code:
SET EXCLUSIVE ON
CLOSE DATA ALL
OPEN DATABASE (CHANGE_DIR+SYSTEMNAME+'.DBC') EXCLUSIVE
THISFORM.MESSAGE.VALUE = "Updating Database Structure from "+TRIM(m.EXISTING)+" to "+m.VERSION
IF EMPTY(m.EXISTING)
	m.EXISTING = "1.01"
	=UPDATE_STPM(m.EXISTING)
ENDIF
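* Each IF block below applies exactly one version step and then bumps
* m.EXISTING, so an old database runs through every step in sequence.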
IF m.EXISTING = "1.01"
	THISFORM.EDIT1.VALUE = THISFORM.EDIT1.VALUE + "Updating structure "+ALLTRIM(STR(VAL(m.EXISTING)+.01,5,2))+CHR(13)
	USE (m.DATALOCATION+"BILLDEFA")
	IF TYPE("BILLDEFA.INVOICE") = "U"
		USE
		ALTER TABLE BILLDEFA ADD COLUMN INVOICE C(2) NOT NULL
		SELECT BILLDEFA
		GO TOP
		IF EOF()
			APPEND BLANK
		ENDIF
		REPLACE INVOICE WITH "01"
	ENDIF
	USE
	m.EXISTING = ALLTRIM(STR(VAL(m.EXISTING)+.01,5,2))
	=UPDATE_STPM(m.EXISTING)
ENDIF
IF m.EXISTING = "1.02"
	THISFORM.EDIT1.VALUE = THISFORM.EDIT1.VALUE + "Updating structure "+ALLTRIM(STR(VAL(m.EXISTING)+.01,5,2))+CHR(13)
	USE (m.DATALOCATION+"INVOICE")
	IF TYPE("INVOICE.FLAG") = "C"
		ALTER TABLE INVOICE DROP COLUMN FLAG
		ALTER TABLE INVOICE ADD COLUMN FLAG L NOT NULL
	ENDIF
	USE
	m.EXISTING = ALLTRIM(STR(VAL(m.EXISTING)+.01,5,2))
	=UPDATE_STPM(m.EXISTING)
ENDIF
IF m.EXISTING = "1.03"
	THISFORM.EDIT1.VALUE = THISFORM.EDIT1.VALUE + "Updating structure "+ALLTRIM(STR(VAL(m.EXISTING)+.01,5,2))+CHR(13)
	IF !MYFILE(CHANGE_DIR+'BANKDETL.DBF')
		CREATE TABLE 'BANKDETL.DBF' NAME 'BANKDETL' (DATE D NOT NULL, ;
			CHEQUE_NO	C(10) 	NOT NULL, ;
			DESCRIP 	C(30) 	NOT NULL, ;
			AMOUNT  	N(14,2)	NOT NULL, ;
			C_D		C(1) 	NOT NULL, ;
			STATUS 		C(1) 	NOT NULL, ;
			FLAG 		L 	NOT NULL)
		***** Create each index for BANKDETL *****
		INDEX ON DTOS(DATE)+CHEQUE_NO DESCENDING TAG BANKD01 COLLATE 'MACHINE'
	ENDIF
	USE
	m.EXISTING = ALLTRIM(STR(VAL(m.EXISTING)+.01,5,2))
	=UPDATE_STPM(m.EXISTING)
ENDIF

When the database is initially created the version is set to 1.00, and it is then updated through all the intervening versions to the version of the .exe.


Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

There is no place like G28 X0 Y0 Z0
 
I think Griff's approach is good. As the scripts needed for the versions could be much longer, I'd put them into a memo field next to their version number and EXECSCRIPT() them; that would make the code slimmer, too. But the general principle is what I gave as the best alternative: have SQL altering the tables, plus anything else needed. And the best moment to write these statements is whenever you do something manually to your development version, because then you have the focus and know what you are doing or have just done.
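
For example, with a hypothetical table UPGRADES (VERSION C(10), SCRIPT M), holding one script per version step and stored in version order:

Code:
*  Run all stored upgrade scripts newer than the current version.
USE upgrades ALIAS upg
SCAN FOR ALLTRIM(upg.version) > ALLTRIM(m.EXISTING)
   =EXECSCRIPT(upg.script)       && the stored ALTER TABLE etc.
   m.EXISTING = ALLTRIM(upg.version)
   =UPDATE_STPM(m.EXISTING)      && record progress, as in Griff's code
ENDSCAN
USE IN upg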

Chriss.
 
I like it because I can see how I did things earlier, and of course it's self-validating: I add/drop/alter fields using this routine in development, so there is no chance I miss something in the production environment, no manual changes that I might miss/forget/misspell...



Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

There is no place like G28 X0 Y0 Z0
 
Thank you for your help, Martin - I have been working rather slowly . . . thanks particularly for your guidance on checking whether the filters are correct (the same as those in a set of model tables) for each table in the live database. That was my original question.

SYS(2021) seems to be the way to check index filter expressions.

For any particular table, the application already has a function which updates the fields in the table to match those of the model. The function below updates the index(es) of a table to match those of the model table, including the filter:

Code:
*  FixOneTableIndex()  Check the indexes of the currently open table 'LiveData'
*                      and make them match those of the currently open table 'Model'
LOCAL lChanged, lIdx, lFound, lKeyModel, lTagModel, lFiltModel, lFiltLive, ;
      lTag, lExpr, aTags
WITH Thisform
   DIMENSION aTags[1]
   SELECT LiveData
   ATAGINFO(aTags)                && collect the live table's existing tags
   lIdx = 1
   lChanged = 0
   DO WHILE .T.
      lKeyModel = KEY(lIdx, "Model")
      lTagModel = TAG(lIdx, "Model")
      IF EMPTY(lKeyModel)
         EXIT                     && no more index tags in the model
      ENDIF
      lTag = lTagModel

      *  Search column 1 (the tag name); the 8 says 'Return the row number'.
      lFound = ASCAN(aTags, lTagModel, 1, -1, 1, 8)

      lFiltModel = SYS(2021, lIdx, "Model")
      lExpr = "INDEX ON " + lKeyModel + " TAG " + lTag
      IF !EMPTY(lFiltModel)
         lExpr = lExpr + " FOR " + lFiltModel
      ENDIF
      IF lFound = 0               && tag is missing from the live table: create it
         SELECT LiveData
         &lExpr
         .sProgress = .sProgress + "New tag " + lTag + ;
               " created for key '" + lKeyModel + "'" + CHR(13) + CHR(10)
         lChanged = lChanged + 1
         lIdx = lIdx + 1
         LOOP
      ENDIF

      *  The tag exists in the live table; check that its filter
      *  is identical to the one in the model.
      lFiltLive = SYS(2021, lFound, "LiveData")
      IF !(lFiltModel == lFiltLive)
         SELECT LiveData
         &lExpr                   && recreate the tag with the model's filter
         .sProgress = .sProgress + "Index tag " + lTag + " updated." + CHR(13)
         lChanged = lChanged + 1
      ENDIF
      lIdx = lIdx + 1
   ENDDO
ENDWITH
RETURN lChanged > 0

It can almost certainly be improved!

Thank you for your help. Andrew

 
So you have filtered indexes, where the filter condition differs between the development and the live database? I'm not sure whether the Stonefield Database Toolkit would synchronise that, though filtered indexes are not a new feature.

But regarding table indexes you can always (see the sketch further below):

1. Copy a table including its CDX to another table name.
2. ZAP that copy.
3. Take the now emptied CDX file and rename it for the original table.
4. USE that empty CDX and REINDEX to rebuild the CDX content.

The DBC has records of indexes too, which would need to be changed, but only when a tag name changes or you introduce new indexes or delete tags. A filter condition on an index tag isn't stored in the DBC, so a changed filter condition lives only in the header part of the CDX file, which is exactly what survives the ZAP.

And this works regardless of warnings about REINDEX failing when you have a broken CDX header: you don't have one, since you took the header from a healthy development table. You should of course test that this works, but then you can also be sure it works for whatever data is in the live database. Any ALTER TABLE has to be done first, of course.
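
A sketch of the four steps (paths and the table name are invented; everything needs exclusive access, and SET SAFETY OFF avoids the ZAP prompt):

Code:
*  'devdata' holds the healthy development table, 'livedata' the user's.
SET SAFETY OFF
USE devdata\mytable EXCLUSIVE
COPY TO livedata\tmpstruct WITH CDX      && 1. copy table including CDX
USE livedata\tmpstruct EXCLUSIVE
ZAP                                      && 2. empty the copy; the tag
USE                                      &&    definitions stay in the CDX
DELETE FILE livedata\mytable.cdx         && discard the outdated CDX
RENAME livedata\tmpstruct.cdx TO livedata\mytable.cdx   && 3.
USE livedata\mytable EXCLUSIVE
REINDEX                                  && 4. rebuild tags from live data
USE
DELETE FILE livedata\tmpstruct.dbf       && clean up the temporary copy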

There's just one thing to avoid: an index on a non-deterministic expression, DATETIME() for example. A rebuild will not retain the original DATETIME() value from the creation of a record, but that's a problem not only with REINDEX; it also occurs if you rebuild indexes with INDEX ON commands. So such non-deterministic indexes need to be avoided anyway.

Chriss
 