I imagine this is rather about creating tables per customer or per month, a scheme you wouldn't need to make that complicated for any reason. Because of file size limits? Perhaps in the long run, but not per month. And storing data per customer, client, or tenant is also just one aspect of data privacy.
Anyway, if something like that is done in thousands of places, a function doing it, of course, doesn't remove the need to place 1000 calls to the new function. So the burden of this task is much more the overall consequence of never upgrading the VFP version for 20 years and not regularly enhancing and refactoring code to make use of new features. Edit: I assume such a system is worked on all the time; I don't assume it was kept as is just because it started in 1999 with VFP6, and when VFP7 came in 2001, development simply continued with VFP6. But that alone means not profiting from newer features.
There's a term for it: technical debt. That's more about bad code design, independent of the version of the programming language used, but language updates of course play into it, as new language versions bring new features that lead to shorter, easier-to-maintain code. But at some point, your technical debt either becomes a lot of work that could have been a relaxed side task in the past, or even the end of your business.
Assumption: the structural copies of tables vary so much in how they are to be indexed that he didn't bother writing a function. But COPY STRUCTURE ... WITH CDX (or COPY TO ... WITH CDX) actually copies all the indexes you already defined once on the original table, so there is no need for individual code when that is the only goal. That means you don't even need individual INDEX ON commands, and you can shrink down much code and replace it with a single line, even without writing a function.

I'd still write a function, as it would provide a way to centralize the organization of files in folders in just one PRG or even a class. Letting a class be responsible for when, how, and where tables are copied also means you will only need to change that in one place in the future. A big strategy change could be simply maintaining a directory of empty tables you copy per month; the only parts of the names that change are then the target directory containing the year and month, or directories with client names. Then your code doesn't even need to copy table structures on the fly and just in time. While just-in-time solutions can be fine for some problems, if that leads to doing something in 1000 places in code, that's not really a good design, is it?
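As a minimal sketch of such a centralized helper (the function and parameter names are my own invention, not from any existing code base), the whole structural copy including all index tags boils down to one COPY STRUCTURE with the WITH CDX clause:

```
* Hypothetical helper, names are assumptions.
* Creates a structural copy of a table including all of its
* index tags, so no individual INDEX ON commands are needed.
FUNCTION CreateTableCopy
   LPARAMETERS tcSourceTable, tcTargetTable

   LOCAL lnSelect
   lnSelect = SELECT()

   * Open the source table in a private work area.
   USE (tcSourceTable) IN 0 AGAIN ALIAS SourceCursor SHARED
   SELECT SourceCursor

   * WITH CDX copies all index tags already defined on the source
   * table into the new table's structural CDX file.
   COPY STRUCTURE TO (tcTargetTable) WITH CDX

   USE IN SourceCursor
   SELECT (lnSelect)

   RETURN FILE(FORCEEXT(tcTargetTable, "dbf"))
ENDFUNC
```

Since all the folder logic then lives in this one PRG (or a method of a class), a later change of the partitioning strategy only touches this single place.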
No matter what the detailed reasons are, the idea is not just to centralize a "macro" of 5-10 lines into a function, but to consider why you would need something like that scattered across 1000 places in code at all, and whether it can't be centralized and even done at application start, once per month, for example, for the whole database or folder of tables and not just a single table. A directory of empty indexed tables you only use as copy templates for each client/tenant/time period will be much easier to maintain, and your code handling this data will just need to CD into the right directory to work on that data "partition".
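A minimal sketch of that startup routine, assuming a hypothetical layout of YYYYMM folders under a data root (the procedure name, parameters, and folder scheme are all assumptions for illustration):

```
* Hypothetical sketch, folder layout and names are assumptions.
* Run once at application start: if this month's data folder does
* not exist yet, create it by copying a directory of empty,
* already indexed template tables (DBF/FPT/CDX files).
PROCEDURE EnsureMonthPartition
   LPARAMETERS tcDataRoot, tcTemplateDir

   LOCAL lcMonthDir
   lcMonthDir = ADDBS(tcDataRoot) + TRANSFORM(YEAR(DATE())) + ;
                PADL(MONTH(DATE()), 2, "0")

   IF NOT DIRECTORY(lcMonthDir)
      MD (lcMonthDir)

      * Copy every template file into the new month folder.
      LOCAL laFiles[1], lnCount, lnI
      lnCount = ADIR(laFiles, ADDBS(tcTemplateDir) + "*.*")
      FOR lnI = 1 TO lnCount
         COPY FILE (ADDBS(tcTemplateDir) + laFiles[lnI, 1]) ;
            TO (ADDBS(lcMonthDir) + laFiles[lnI, 1])
      ENDFOR
   ENDIF

   * Code working on this "partition" then only needs to CD here.
   CD (lcMonthDir)
ENDPROC
```

With this approach no table structure is ever copied on the fly; the 1000 scattered call sites disappear in favor of one call at startup.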
Bye, Olaf.
Olaf Doschke Software Engineering