
File is in use by another user while opening a database


KALP1

Programmer
Aug 26, 2016
When my 2 users simultaneously use 1 option which issues OPEN DATABASE xyz.dbc, then 1 user gets the error "file is in use by another user".
What might be causing this error, as I am not opening any dbf file of that database exclusively? SET EXCLUSIVE is also OFF.
 
Hi, I had the same error on a multithreaded project.
SET TABLEVALIDATE TO 6 and it will be fine.


Marco Plaza
@vfp2nofox
 
I would perhaps conclude to only set the suppression bit on and the reccount validation bit off when opening a static table, as a DBC is.

So set TABLEVALIDATE to 14 when opening databases and set it to 11 for normal use of tables, unless you're sure the suppression of "file is in use" errors is okay for some reason. In general, you have a problem when ignoring it and still modifying the data. In the simplest case you don't really validate the reccount when not locking it, or you get a "false positive" when VFP reads the reccount and compares it to the file length at the wrong moment, while another user is currently adding to the DBF.

On the other hand, we didn't have table validation at all before VFP 8, and you might turn it off overall, which would mean setting it to 4. Actually 0, but with 0 you get error 108.
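
To make that concrete, here is a minimal sketch of the pattern; xyz.dbc is the database name from your question, everything else is just illustration and the right values depend on which bits you really want, as described above:

* Minimal sketch: relax TABLEVALIDATE only around OPEN DATABASE.
lnOldSetting = SET("Tablevalidate")
SET TABLEVALIDATE TO 14      && suppression bit on, reccount validation on open off (see above)
OPEN DATABASE xyz.dbc SHARED
SET TABLEVALIDATE TO 11      && stricter setting again for normal use of tables
* ... work with the tables of the database ...
SET TABLEVALIDATE TO (lnOldSetting)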

Bye, Olaf.
 
Thanks Olaf, Danfreeman, mplaza for getting my problem solved.
 
Olaf, if I don't close old databases and open a new one, will there be some problem with it?
 
What are you referring to? Where did I talk about old and new databases?
 
If you don't close databases and give the command to open the same one again, there will not be any problem with it.
 
That's true, I only got errors after adding CLOSE DATABASES. But closing it doesn't make it old, and when using ParallelFox and exiting these additional processes you also close databases, even just by exiting. The problem occurs because of the frequent opening of databases that were closed just before. It doesn't need the CLOSE DATABASES command, it needs the closing and opening of databases. You can't get around it using multithreading or multiprocessing. A new process or thread needs a database opened, and it closes databases and any other files when exiting; it can't have a database already open just because the same code already ran. You have the init overhead with each new ParallelFox process, that should be clear.

If you exit my test code instead of doing an endless loop and then repeatedly start this code as an EXE, you get the same effect of opening and closing dbc files at high speed and frequency, also without the explicit CLOSE DATABASES command. Since at least Vista we do have latency and timing problems with the file system; for example, google how many people have had problems with PACK of data. Same phenomenon.
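
Roughly, the effect can be reproduced with something like this; it is not my original test code, just a sketch, and xyz.dbc again is only the name from your question. Run it as an EXE from two or more processes at the same time:

* Open and close the same DBC at high frequency; under this load the
* "File is in use by another user" error can show up sporadically.
FOR lnI = 1 TO 10000
	OPEN DATABASE xyz.dbc SHARED
	CLOSE DATABASES ALL
ENDFOR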

If you would like to keep databases open, your new parallel processes would need to stay running and shouldn't be spawned over and over again. The overhead of starting a new VFP process for each record of a parent table acts rather like a brake on the acceleration you aim for. It would perhaps be best to spawn just as many processes as you have CPU cores (including hyperthreading cores, which can mean 2x the physical number of cores); you don't win anything from running more parallel threads or processes.

Then let each process handle not just one record; that doesn't make a big gain in overall performance. Chunk your data into blocks you assign to each process. Measure running time. If each new parallel process only runs for split seconds, its initialisation time surely has a bad impact on the overall processing, and all you get is much more complicated code, not a performance gain. The overhead is not only about OPEN DATABASE, it's about creating a process, another instance of the runtime, allocating memory, opening or even creating a foxuser.dbf etc. - all things done before even the first line of your code runs.
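
As a sketch of the chunking idea (the units table is a placeholder, and the hand-off to a worker is only indicated as a comment, since it depends on how you call ParallelFox):

* One block of records per worker instead of one process per record.
* NUMBER_OF_PROCESSORS is a standard Windows environment variable and
* already includes hyperthreading cores.
lnWorkers = MAX(1, VAL(GETENV("NUMBER_OF_PROCESSORS")))
USE units IN 0 SHARED        && hypothetical parent table
lnTotal = RECCOUNT("units")
lnChunk = CEILING(lnTotal / lnWorkers)
FOR lnI = 1 TO lnWorkers
	lnFrom = (lnI - 1) * lnChunk + 1
	lnTo   = MIN(lnI * lnChunk, lnTotal)
	* Hand records lnFrom..lnTo to worker lnI here (ParallelFox call omitted on purpose).
ENDFOR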

Bye, Olaf.
 
Very true Olaf, it was my experience that parallel processing took more time than serial processing. Now I know the reason behind it. In the documentation it was specified to use as many workers as the number of CPU cores. I will check out different scenarios as per your analysis. Does ParallelFox also slow down the main thread which initiated it when the number of workers is more than the CPU cores? Sorry if I am going away from the topic.

Well, I used ParallelFox differently, to initiate some processing in the background and allow the user to continue his work. When processing completes, the user is notified.
 
KALP1 said:
When the program to consolidate units is run, which scans a list of units (each may share the same or a different database depending on the architecture), say 100 of them, this error is caused. Each scan in turn calls assign.prg, which opens databases. Since I am scanning records, the time lag between 2 calls to assign.prg is in microseconds.

From that description I concluded you are using ParallelFox to create many new processes.

Why the need to consolidate data? Needing to do such a thing points out you have an unusual situation: data split across multiple databases, offline/distributed data. How should a parallel process help with that? Such data consolidations shouldn't be needed live. If you have data that should be the same for all business units in the world, it is better kept in one central place or synced. And then it is better off in SQL Server than in DBCs.

Bye, Olaf.
 
The requirement for multiple databases arose as some units had large data which was adversely affecting processing speed in the small units.
 
When you hit the limits of FoxPro, both in performance and table size, it's time to go to MSSQL Server. While VFP access is fast for small amounts of data, it's not so when table sizes grow. Therefore the wish for >2GB dbfs is quite useless anyway. When I did the move to SQL Server for a database growing to 14GB overall, the migration caused a performance increase of about double speed, without much effort in learning special MSSQL performance secrets. Merely normal tables with a few indexes and straightforward queries via CursorAdapter, no gazillions of stored procs or views, so the whole application logic stayed in VFP.
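
Just to illustrate how plain that access can stay, a minimal CursorAdapter sketch; the connection string, table and column names are made up:

lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=myserver;Database=mydb;Trusted_Connection=yes")
IF lnHandle > 0
	loCA = CREATEOBJECT("CursorAdapter")
	loCA.DataSourceType = "ODBC"
	loCA.DataSource     = lnHandle
	loCA.SelectCmd      = "SELECT custid, custname FROM customers WHERE region = 'EU'"
	loCA.Alias          = "crsCustomers"
	IF loCA.CursorFill()
		* crsCustomers now behaves like any local cursor
		SELECT crsCustomers
		? RECCOUNT()
	ENDIF
	=SQLDISCONNECT(lnHandle)
ENDIF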

Bye, Olaf.
 
Olaf,

I am thinking seriously of moving to MSSQL Server. Is it possible that certain modules of my application use SQL Server as backend while others have a VFP database? Gradually I will shift all my modules, but that would take time. So I want to start shifting to SQL Server one module at a time. Can you guide me on how to proceed further?
 
KALP1,

whether you can move module-wise depends on the relations between the data, because you can't make a query joining DBF and MSSQL table data. You can of course query MSSQL data into cursors and then join DBF data to it, thanks to VFP's capability to query cursors (or rather because cursors really are in-memory DBFs), but that often enough means a worst case of joining data: you don't profit from the join optimization of only transferring the data an inner join would find, for example.
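
For example (table and column names are made up, orders sits in MSSQL, customers is a local DBF):

lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=myserver;Database=mydb;Trusted_Connection=yes")
dFrom = DATE() - 30
* Pull the remote side into a cursor first ...
lnOK = SQLEXEC(lnHandle, "SELECT orderid, custid, total FROM orders WHERE orderdate >= ?dFrom", "crsOrders")
=SQLDISCONNECT(lnHandle)
* ... then join the local DBF to it with VFP's own SQL engine. Note that all matching
* order rows were already transferred; the join itself does not limit the transfer.
SELECT c.custname, o.orderid, o.total ;
	FROM crsOrders o ;
	INNER JOIN customers c ON c.custid = o.custid ;
	INTO CURSOR crsResult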

More generally, I can't really help you with experiences about partial migrations, I didn't do it that way. You can have a fast start (in regard to using MSSQL Server) by adding DBCs via a linked server, but that wouldn't profit from MSSQL at all; you just make DBF access much more complicated this way, routing your queries through MSSQL, which in turn uses DBCs/DBFs via the VFP ODBC driver or OLE DB provider.

The database and application I moved to MSSQL was using a 3-tier architecture, i.e. it was querying DBFs, not USEing them or having them in data environments, even before the move to MSSQL. In the normal case for VFP applications you would need to start replacing the USE of a DBF, or having it in the data environment, with a query SELECT * FROM mssqltable and thus read in all the data. This obviously does not scale well.
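
In code the difference is small but decisive (mssqltable as in the example above, custid is a made-up filter column):

lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=myserver;Database=mydb;Trusted_Connection=yes")
* Drop-in replacement for USE: reads the whole table - simple, but does not scale.
lnOK = SQLEXEC(lnHandle, "SELECT * FROM mssqltable", "crsAll")
* Scalable variant: fetch only what the current form or report really needs.
lnCustId = 4711
lnOK = SQLEXEC(lnHandle, "SELECT * FROM mssqltable WHERE custid = ?lnCustId", "crsOne")
=SQLDISCONNECT(lnHandle)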

So without the right architecture of data access in the first place, migration means a major rewrite. Whether that can be done step by step depends on whether you can really partition your application into modules working independently, each on its own data. If you say, yes, I have a large ERP system with e.g. materials management, production planning, demand planning etc., you have an overlap of table uses, which would need special handling (i.e. interfaces or syncing of data), but you could take it step by step.

One problem is, the critical part of your current application and database landscape will be the one with the most data and the most traffic, and that's not where you would start doing an MSSQL conversion without any previous experience.

Bye, Olaf.
 
Thanks Olaf,
I will drop the idea of partial shifting. By partial I just meant: would my old modules still work while some modules use SQL Server? If I want to shift to SQL Server (not partially), then how do I start?
 
Looking back at the situation you had described (many business units, each with the same database, differing sizes of their local data), I recommended using MSSQL instead of consolidation of the commonly used data, i.e. stem (master) data. You may make a distinction between the data you use globally and other data only needed locally; that would be a partitioning of the data that makes sense: you separate only the globally used data and put it in MSSQL.

A customer I will leave in January 2017 has not only this part but all data of a research & development module in one on-premise SQL Server database at headquarters (not the Azure cloud), and has the regions use a terminal server to work with the application and data at headquarters. So there is no need for replication and/or consolidation of data. I would still prefer replication with each region, but their internal test results stated it wouldn't work. I strongly assume, without having full insight, that they failed on a mixture of false assumptions/interpolations of scaling needs, bad setup, and missing knowledge about the database design. For example, many parts of the data are indeed not needed globally; even at headquarters some of it is only relevant to a single user, his projects, his formulas. And so a setup of a general full database replication was replicating way too much data. They also didn't make use of SSL-tunneled fast internet connections but of private data lines, which are much more expensive for the same bandwidth and only have the advantage of guaranteed bandwidth, but are not safer than SSL tunnels can also be.

Anyway, what data do you consolidate and why? If you answer that question (for yourself) in detail, you know which part of the database should be centralized first. You have to decide whether it would be okay if each local unit has slower access to this data when you centralize it, but still fast or even faster access to the remaining on-premise data. How fast does this data change, for example? If it's your product inventory, do you really need the knowledge of Europe's inventory in the USA? If it's the products you sell, would it be ok to have a single central products table and have each unit download and merge it into its local data only once a day? Such ideas will lower the need for replication bandwidth a lot better than blindly replicating all data, for example.
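
A rough sketch of that once-a-day download and merge idea; the server, table and column names (products, localproducts etc.) are invented for illustration:

lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=hqserver;Database=central;Trusted_Connection=yes")
IF lnHandle > 0
	lnOK = SQLEXEC(lnHandle, "SELECT productid, prodname, price FROM products", "crsProducts")
	=SQLDISCONNECT(lnHandle)
	USE localproducts IN 0 SHARED
	SELECT crsProducts
	SCAN
		* Merge into the local DBF: update existing rows, insert new ones.
		SELECT localproducts
		LOCATE FOR productid = crsProducts.productid
		IF FOUND()
			REPLACE prodname WITH crsProducts.prodname, price WITH crsProducts.price
		ELSE
			INSERT INTO localproducts (productid, prodname, price) ;
				VALUES (crsProducts.productid, crsProducts.prodname, crsProducts.price)
		ENDIF
	ENDSCAN
	USE IN localproducts
ENDIF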

What you should see from these examples is that much broader and more general thoughts on which parts of the data are needed where have to come before going into technical details. Whether the construct I have to maintain now pays off (one main business unit having the full data locally and regional units connecting via terminal server) also depends on how large the business units are. The regional units put together still only make up a small fraction, <10% of total users, so the central unit has most of the data access. This wouldn't work out if all units were similarly sized.

Bye, Olaf.
 