
How to speed up Access after a BE/FE split?

Status
Not open for further replies.

MrDeveloper
Programmer · Aug 1, 2004 · US
Hi,
I have recently split my application into a Front-End MDE app and a Back-End MDB database. The forms are now distributed to users' PCs on the network. Each desktop shortcut points to the local copy of Access, the network copy of the security (MDW) file, and the local copy of the forms (MDE).

The system loads OK but now runs terribly slowly (up to 9 minutes to bring back a record in some instances!). Prior to splitting, record access took a matter of seconds when it was an all-in-one MDB accessed entirely from the server by networked users.

I have trawled the web for hints/tips and have come up with the following 5 improvements (below). For the benefit of other users experiencing similar problems, I will post to this thread whether any of them helped when I try them.

However, before I try to solve the problem, does anyone have any further suggestions? I am concerned the following may only make minor improvements to the delays.

Any further help/suggestions would be really appreciated.

Thanks in advance,

List so far:
1. Shorten the back-end database (MDB) file-name to 5 characters.
2. Move it near to the root of the server.
3. Move the back-end MDB to a faster server.
4. Turn 'Track Name AutoCorrect info' off
5. Set subdatasheet name property of tables to [none]
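For anyone wanting to script items 4 and 5 rather than click through the menus, a rough sketch (untested here) is below. It needs a DAO reference, and the option/property names are my understanding of the standard Access ones - worth double-checking on your version:

```vba
Public Sub TuneForSpeed()
    Dim tdf As DAO.TableDef

    ' Item 4: equivalent of Tools => Options => Track Name AutoCorrect info, off
    Application.SetOption "Track Name AutoCorrect Info", False
    Application.SetOption "Perform Name AutoCorrect", False

    ' Item 5: set every table's Subdatasheet Name property to [None]
    For Each tdf In CurrentDb.TableDefs
        If Left$(tdf.Name, 4) <> "MSys" Then    ' skip system tables
            On Error Resume Next                ' property may not exist yet
            tdf.Properties("SubdatasheetName") = "[None]"
            If Err.Number <> 0 Then
                Err.Clear
                tdf.Properties.Append tdf.CreateProperty("SubdatasheetName", dbText, "[None]")
            End If
            On Error GoTo 0
        End If
    Next tdf
End Sub
```

Run it once against the back-end (and the front-end if it holds local tables).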

Anything else I could add to this list when I try fixing the system?
 
You seem to have done two things at once - split the application and turn it into an executable. If you just split it (ie not create MDEs) do you get the same problem? I wouldn't compile things until everything else is working. I'm not saying you did this but you can get problems where people compile (in this case the queries) on a small test database and then run the same code on a large database. The optimisations are all wrong but the DBMS can't do anything about it.

 
Hi,
I did indeed compile it as well as split it; prior to the split it was an uncompiled (MDB) system with forms, queries and tables all in one file sitting on the server.

The reasoning behind compiling was for security of the forms and to prevent people modifying their copy of the forms and/or queries once it had been distributed to clients.

I have not tried splitting without compiling, but if this could make a difference to speed then I shall try it along with the suggestions above. I thought MDEs were faster to load though?

Is this generally the 'done thing' when splitting to BE/FE (i.e. maintaining both FE and BE as MDBs rather than compiled MDEs)?
Would be really helpful to hear what approach others have taken...

Many thanks,
 
I lock down the Front End using User/Group security so the forms etc can't be changed by the users, and I secure the code so they can't see that either - Yet I can get into any copy I choose by logging in as Administrator.


However,
What Mike was saying is, don't do the two 'at the same time'.
He's not saying don't do the two at all.

Split FE/BE
Then do more testing
and once you're happy with that - then compile if you want to.

I develop from day one in a split FE/BE so it's never an issue.


'ope-that-'elps.




G LS
spsinkNOJUNK@yahoo.co.uk
Remove the NOJUNK to use.
 
Thank you both for your replies.

Will now try the suggestion and re-install it as uncompiled back-end and front-end and test the record-retrieval after this.

Will post to the thread to describe which change helped the most as I have a feeling this is a common problem experienced by developers of networked applications.

Many thanks,

MrD
 
And some more food for thought...

When you used one DB instead of the FE/BE, were you running the DB locally or on the server? If locally, then there is no comparison.

Next, Access is pretty ugly when it comes to networks. When you create a form based on a table, Access will retrieve ALL records for the table. This can create a lot of network traffic, especially if you have large tables.

Consider restricting the amount of data brought across the network when you load / open a table. Strategies could include...
- Reduce the RecordSource for the form to a minimum such as the primary key, and perhaps a few key fields. Then use the OnCurrent event to retrieve the rest of the information for the specific record.
- Add combo or list boxes to search for the required information on the form. Instead of retrieving the entire table, the end user has to select a specific record or a subset of records. A trick here is to try to find and fail to find a record when the form is opened or loaded. For example, the primary key = 0. This loads the form with no records.
- Tables with static information such as States and Provinces can be stored within the FE.
- Avoid the use of objects such as graphics and memo fields within the database, or hide them on a second page on the form.
- Indexing is important. The first field in a SELECT query should be indexed. Commonly used fields in WHERE clauses should be indexed.
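To make the "fail to find" trick concrete, here is a minimal sketch with made-up names - the form's saved RecordSource deliberately matches nothing, so the form opens with no records, and the search combo then swaps in a one-record source:

```vba
' The form's saved RecordSource finds no rows, so it opens instantly:
'   SELECT * FROM tblOrders WHERE OrderID = 0

Private Sub cboFindOrder_AfterUpdate()
    ' Fetch only the record the user selected from the combo
    Me.RecordSource = "SELECT * FROM tblOrders WHERE OrderID = " & Me.cboFindOrder
End Sub
```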

There is more I am sure.
 
The first field in a SELECT query should be indexed
?????
 
Hi PHV - late night again?

Am I wrong about the first field / column?

Past life...
We had a registration database with a QBE (query by example). A common way of checking for a previous entry was to query for the phone number. First few times the query was run, it took forever ... because the phone number field was not indexed. Once the field was indexed the queries ran as expected.

Your thoughts??
 
It was because the phone number field was in the WHERE clause, not because it was the first in the select list.

Hope This Helps, PH.
Want to get great answers to your Tek-Tips questions? Have a look at FAQ219-2884 or FAQ222-2244
 
Time for another cup of coffee -- dahh.

So you are saying there is no difference between

SELECT ContactID, LastName, FirstName...

Where ContactID is indexed
vs

SELECT LastName, FirstName, ContactID
Where LastName is not indexed ??

...Probably a moot point in many circumstances anyway since the first retrieved columns may often be included in the WHERE clause.

Take care...
 
Another clause of interest is the ORDER BY.
In this case the field rank is important for the indexing strategy (when the expected result set is large).


Hope This Helps, PH.
 
These are interesting points about index ranking - will bear this in mind when I write queries in future. In this instance the table has indexes on any fields brought back by the queries, and these fields are at the top of the query select list.

willir - thanks for your list of hints. The delays have been experienced on queries that only brought back specific records, so that factor is limited to a degree. No memos or graphics are present.

An update: The delays in bringing back records have now been reduced from 9mins to 2mins after reducing the length of the file-name and placing it at the root of the server. (no I'm not kidding - it was taking 9mins! This is over a cross-country ADSL link though).
2 minutes is still unacceptable so I aim to implement further suggestions.

Has anyone put in the 'always-open-connection' table in their network applications? I have a rough guide on how to put it in place but how does one close it at the end of the session? Is it really a useful tool in fighting network delays?
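The rough guide I have sketches it something like this (table name illustrative) - presumably the close just goes in whatever runs at shutdown?

```vba
' Module-level recordset held open for the whole session so Access
' doesn't repeatedly create and release the back-end .ldb lock file.
Dim mrstKeepOpen As DAO.Recordset

Public Sub OpenPersistentConnection()    ' call from the startup form's Load event
    Set mrstKeepOpen = CurrentDb.OpenRecordset("tblKeepOpen", dbOpenSnapshot)
End Sub

Public Sub ClosePersistentConnection()   ' call when the last form closes
    If Not mrstKeepOpen Is Nothing Then
        mrstKeepOpen.Close
        Set mrstKeepOpen = Nothing
    End If
End Sub
```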

Thanks in advance,
 
I missed the answer to one of the questions above:

Was this formerly being run as a monolithic Access application locally by a single user, or was it really being run multiuser from a file server across a WAN connection?

If the latter, was it really a lot faster?

This seems like a really good candidate for a web application or some other true client/server application. Crunching the data near the data ought to really improve performance.
 
hi dilettante,
The history of the application is that it was run as a single MDB on the file-server with multiple users across the network (LAN and WAN) accessing it via a network drive. I believe this meant both forms and data were being sent across the network to each client PC (about 8-10 simultaneous users).

There was a slight delay like this (i.e. bringing back an individual record over the network by typing in its ID would take about 2-3 seconds, and searching for a record by name would be 30 secs +/-).

After splitting it out with a compiled MDE on the client and linked tables to the MDB database on the server, these timings have increased dramatically into several minutes.

Hopefully with the above changes to be tried soon, Access will be able to cope in a true client/server environment over an ADSL WAN.

Will post the results shortly!
MrD.
 
Well...
- Consider a terminal server / Citrix server solution - this is the best solution in my opinion, and you may gain other advantages.
- Apparently domain aggregates are slower with FE/BE. Consider changing code that uses DCount, DLookup, DSum...
- Replication is also a possibility, but there is some administrative overhead; make sure you back up and test.
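For the domain aggregate point, the usual replacement is a targeted query - something along these lines (table/field names made up):

```vba
' Sketch: replacing  n = DCount("*", "tblOrders", "CustID = " & lngID)
' with a query the Jet engine can satisfy from the CustID index
Dim rst As DAO.Recordset
Set rst = CurrentDb.OpenRecordset( _
    "SELECT Count(*) AS n FROM tblOrders WHERE CustID = " & lngID, dbOpenSnapshot)
n = rst!n
rst.Close
```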

My experience with Access databases over the WAN has not been pleasant. Incredibly poor performance, as you described.
 
If Access needs to be used as a fat front-end, about the only fix is probably to use a back-end database that can do the heavy crunching near the data. That means MSDE, SQL Server, etc.

Otherwise a Terminal Server approach might be used though this can get expensive.

I still cannot imagine how things would be faster over a WAN using a monolithic Access application, but I won't argue with the results you are seeing. Be aware that Access does not support client/server processing in a simple split database, though it can serve as the client to a "real" (hate to use that word) database.

This is precisely why technologies like ASP became so popular. It made it somewhat easier to build a thin presentation layer connected loosely to a thicker logic layer near the database. This can be done using non-web technologies as well, but Access itself doesn't make it very easy for you to create a front-end of this type: it loves to talk directly to a database through a connector like DAO or ADO.

Building a non-web client/server system may be easier in VB than in Access. That gives you a more controlled and possibly "richer" client than you can get with a browser. The problem remains though that you need to partition your code into a presentation (client) piece, and a back-end piece. Then "glue these together" using TCP sockets, MSMQ queues, or some other flexible connector technology. The key is to send as little data as you can over the network.

A split Access application ships a whole lot of data back and forth because all of the code runs at the client.

Sorry, this doesn't help you much.
 

Willir,

I think your earlier comments are useful - but could do with a bit of 'fleshing out' for some people less able to 'read between the lines'

- Reduce the RecordSource for the form to a minimum such as the primary key, and perhaps a few key fields. Then use the OnCurrent event to retrieve the rest of the information for the specific record.

This approach is great if you need the user to be able to navigate sequentially from one record to 'the next' using the form's standard navigation buttons.
Whilst the whole concept of "next" can stir up a heated debate in its own right amongst RDB theorists, there are a lot of users who think they know what it means and demand the facility.
So:
In its simplest form the 'RecordSource' on the form is then
"SELECT PrimeKey From tblDemo "
Then on the form you have a control ( eg. called txtPrimeKey ) that is Bound to the PrimeKey field in the recordsource. All of the other controls are UNBOUND.

Then in the On_Current event you put code that defines an ADODB.Recordset on the table for just that one record:

Dim rstDemo As New ADODB.Recordset
rstDemo.Open "SELECT * FROM tblDemo WHERE PrimeKey = " & txtPrimeKey, CurrentProject.Connection
txtControl1 = rstDemo!Field1
txtControl2 = rstDemo!Field2
etc .. ..

What is missing from the above Willir ?
I think there must be something, because my interpretation of the above is that it relies on tblDemo being present as a linked table in the current Front End. Thus, as soon as you do the RecordSet.Open, the entire table has to be shipped by Access from BE to FE regardless of the WHERE clause you append. This is the embodiment of the whole File-Server structure of Access and the cause of all these problems.

Or do you use
rstDemo.Open "SELECT * FROM tblDemo IN BEDatabaseName WHERE PrimeKey = " & txtPrimeKey

???

The other thing to mention about this approach, of course, is that changes to data on the form are not saved automatically because the controls are unbound.
The developer has to detect changes and then reopen the RecordSet and explicitly save the changed data.
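In code, the save side looks something like this (same illustrative names as above):

```vba
' Reopen the one record updatably and push the unbound controls back
Dim rstDemo As New ADODB.Recordset
rstDemo.Open "SELECT * FROM tblDemo WHERE PrimeKey = " & txtPrimeKey, _
             CurrentProject.Connection, adOpenKeyset, adLockOptimistic
rstDemo!Field1 = txtControl1
rstDemo!Field2 = txtControl2
rstDemo.Update
rstDemo.Close
```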




- Add combo or list boxes to search for the required information on the form. Instead of retrieving the entire table, the end user has to select a specific record or a subset of records. A trick here is to try to find and fail to find a record when the form is opened or loaded. For example, the primary key = 0. This loads the form with no records.

This approach I use a lot - but with a MySQL Back End rather than an Access Back End.

With a MySQL Back End it is easy because I inject the WHERE clause into a PassThrough query and I simply get one record back at a time (using the Client-Server nature of MySQL & ODBC). However, I can't yet see how you overcome the File-Server nature of Access when you are working with an Access BE.



- Tables with static information such as States and Provinces can be stored within the FE.

And on a similar vein .. ..
Beware of simple looking combo boxes on your form that just happen to have their RowSource set to some large table.
I came across a case recently of :-
Combo.RowSource = "SELECT AgentId, KnownAs & ' ' & Surname As AName FROM tblAgent"
However, as each record contained the full Agent address and a whole raft of other data, with many hundreds of records in the table it took ages to load.
File-Server again !
I changed
Combo to text box
Added JOIN in the PassThrough query to gather KnownAs & Surname from tblAgent
Populated the text box along with the other unbound controls on the form
Added an On_DblClick event to the new text box to open a dialog box that allowed the user to select a new Agent or add a new one.



I'll be interested, as ever, in your comments to the above.



G LS
 
Hi Graeme

One of my work buddies was in a car accident -- she is okay, but I had to pick up the extra work -- worked 20 hrs, and my mind was a little numb [dazed] when I read your post, but I finally got some sleep.

Your post was rich in details, very thought provoking, and constructively kind. Had a chuckle over the "next" record being a can of worms ;-)

One of my databases is for CallTracking / Support. Here, the "next" record is either a sequential record by date or by tech or customer.

...and yes, I redefine the Recordsource on the fly as required.

One trick I use is to actually display the main record in a subform. The main form will include the bare minimum: the primary key, perhaps a date field, and the foreign keys required for the typical subforms embedded on the form. The subform used to depict the main record looks as if it is part of the main form, and is populated per the current record on the main form. This way, I let the form do the work, and I don't have to read and test unbound text boxes.


...Moving on
I have never tested the network traffic after changing various parameters. I have seen noticeable improvements in performance using one approach over others, but I still "play" with solutions.

We purchased a cool network analyser today. The demo was impressive, so perhaps with a bit of thought and some free time (yea, right), I can do some more definitive tests. It will be a good project.

Your mention of MySQL introduced one point that may have been overlooked in this discussion -- using a fuller-featured client/server database (with stored procedures, etc.) is very effective in improving performance. (Too bad about the MySQL virus released the other day -- it speaks to the success of MySQL, but can't the virus creators leave open source and such alone! [mad] )

Your experience with the combo box, SELECT AgentId, KnownAs... is a great example in how one control can mess up the entire form and performance if not done right.

Thank you Graeme for your post. You and PHV have challenged me on this topic.

Take care
Richard
 
Seeds for more research Richard !

( BTW I'm English, not Scottish )



Graham.






 
UPDATE!

For the benefit of people reading this wondering about what worked / did not work - an amazing reduction in the initial delay being reported was achieved purely by switching 'Track Name AutoCorrect info' off (Tools => Options). This actually removed any delays when accessing the application over the WAN! (i.e. 7mins down to 4secs!).

After some research I found this 'utility' ensures the front-end form objects reference the same table column names, so if you change a column name in a table, the front-end form references will correct themselves automatically.

This is great while you are developing an application but definitely not needed once you have finished. I figured this was causing Access to send loads of traffic over the network between the client forms and the server tables.

Highly recommended as one of many solutions to reduce delays after a BE/FE split!

MrD
 