
Improved stability Access FE/SQL backend


RosieMP (MIS), Jul 20, 2007
I have yet another Access fundamentals question, this one about another DB....

We have a DB that is being rolled out from a group of about twelve users to a potential pool of maybe fifty or more. They won't all use it concurrently, but some will.

I understand that Access isn't really intended for many users over a wide network (though I also understand lots of people do it and get away with it) but I was wondering if there is added stability by moving the tables to a SQL server and keeping the front end in Access. Does that alleviate the burden on the connections in Access or is Access in that case doing the exact same amount of work?

Hoping that made sense.

Thanks
 
Rosie

Virtually all the Access databases I have developed here at work use SQL as the backend database.

I have never come across a problem yet with having multiple users accessing the system at the same time.

On my databases, each user has a copy of the Access FE on their own local PC and I have some code which automatically updates their copy if I make any modifications to the master FE.

In addition, one benefit is the security and permissions side of things. I use ODBC to connect to the SQL with a trusted connection and then all permissions etc. are set in SQL which is much easier than doing it in Access in my opinion.

I tend to create the roles I need per database and then assign each user to one of those roles.
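As a sketch of that role-per-database setup (the role, table, and login names here are hypothetical), the T-SQL looks something like this:

```sql
-- Create a database role and grant it the permissions the app needs.
-- (CREATE ROLE is SQL Server 2005 syntax; on SQL Server 2000, use sp_addrole.)
CREATE ROLE AppReadWrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO AppReadWrite;

-- Add each user's trusted (Windows) login to the role.
EXEC sp_addrolemember 'AppReadWrite', 'DOMAIN\jsmith';
```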


Also, I believe that if you connect to an Access BE and you want to run a query on a particular table, for example, your FE will pull back all the records from that table and then run the query on your FE. Whereas connecting to SQL, the query is run on the server and only the records required are pulled to your FE.

Someone may wish to correct me on this!!
 

In that case, Rosie, you have a database with linked tables to SQL Server. So give everyone their own copy of the mdb file and let the server take the load! One user per mdb: that's the least it can do, after all, isn't it?
 
For that number of people, moving to SQL is a better option. The problem is that with locking/unlocking records and other database issues, Access can end up corrupting the data if things aren't done just right. SQL Server handles all that much better and you won't have a data corruption problem. So absolutely, you will have better stability.

As for the issue of connections - it really depends on how you program it.

I use ADP files, not linked tables, so I don't know if the advice I'm about to give you fits every scenario, but it's what I do.

In a situation like this, you really want to make sure that you only hold connections open for as long as you need them and then close them promptly, and that you only bring over the data you need.

For instance, when I started building DBs, I did what most Access developers do - make a table, and then have a form that sits in front of it. You open the table and all the records are there for you to search through. But if the users go from screen to screen frequently, the load on the server and across the network will be heavier, especially if you have large datasets. So instead, I created a search screen and the users first find the record they want and THEN open the form, loaded with exactly what they are looking for. This runs a lot faster because only small datasets are being pulled across the network.
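To illustrate that search-first approach (the table, column, and procedure names here are invented), the search screen can call a parameterized stored procedure on the server, so only the matching rows ever cross the network:

```sql
CREATE PROCEDURE dbo.usp_FindCustomer
    @LastName varchar(50)
AS
    -- Runs on the server; only the matching rows are returned to the FE.
    SELECT CustomerID, LastName, FirstName, City
    FROM dbo.Customers
    WHERE LastName LIKE @LastName + '%';
```

The form is then opened bound to just that small result set instead of the whole table.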

Hope that helps.



 
Very helpful--thanks everyone.

Can anyone provide some context for what the SQL Server owners here have to go through when I ask them for this? They have SAP tables on it. My tables would be totally separate, though I'd like to get read access to the SAP stuff just for other projects.

How difficult is it for them to give me a couple of tables while protecting the rest of their data? Is this something that will fill their hearts with fear? (I suspect it will. We are in separate buildings, separate departments, separate states.)

Thanks again
 
Have them give you your own database in which you have db ownership. Then have them grant whatever level of permissions is appropriate on those few additional tables. They should be comfortable doing this, because SQL Server's security settings are pretty straightforward. They can even give you read-only permissions, or permission to update only specific fields! Much better than Access security.
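For example, the kind of grants the DBAs would issue for the read-only and field-level permissions mentioned above might look like this (role and table names are hypothetical):

```sql
-- Read-only access to a single table
GRANT SELECT ON dbo.SalesOrders TO RosieReports;

-- Or update rights restricted to specific columns only
GRANT UPDATE (Status, Notes) ON dbo.SalesOrders TO RosieReports;
```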
 
Hi Rosie,

Have you decided whether you will use ODBC-linked tables versus making an ADP (Access Data Project)? Here's what my experience has been:

ADP
***
1. Better performance, the connection is more direct, and the wizard will upgrade your queries to stored procedures, functions, and views, which are almost always faster
2. Usually requires a little more after-upgrade tweaking to get everything to work
3. You can't have local tables - all tables have to be on the server
4. The navigation controls look a little different, maybe just enough to intimidate your users

Linked Tables
*************
1. Little or no after-upgrade tweaking required, the users may not even realize you have switched
2. Your performance won't increase as noticeably as an ADP. In fact, if you have very complex queries they may end up running much, much slower. This is no doubt due to the fact that Access has to fetch all records before it can begin filtering. In a recent project, I replaced the loading of certain listboxes (that used very complicated queries) with a new stored procedure that I called and loaded into a recordset, then bound the listbox to the recordset. The listbox loaded in a few seconds, as opposed to about half a minute.

As for your DB Admins, it should not bother them to give you read-only access to any tables. If it does bother them, that's an indication that they don't know what they are doing.

 
<new stored procedure that I called and loaded into a recordset, then bound the listbox to the recordset

I don't think that Access has to fetch all the records to the local context and then filter out the ones that don't fit the query. What I do believe is that the records that fit are fetched on a one round trip per page basis, because that's what happens in similar situations in the VB6 world. IMO, what your stored proc call does is dramatically reduce the round trips required to fetch the data, and this is where you get the performance upgrade.

Bob
 
JoeAtWork said:
This is no doubt due to the fact that Access has to fetch all records before it can begin filtering.
OK, I knew I was setting myself up when I wrote that. It's not true for a simple SELECT statement, which would be sent off to the server to get processed.

But for complex queries (the type where one query is based on another query which is based on another query, etc.), somewhere down the line there will likely be some Access-specific syntax that throws a monkey wrench into the gears. For example, if one of the queries uses an Nz function against a field, Access will have to retrieve all the records for that table to process the Nz, since SQL Server does not have this function built into its SQL syntax.

 
I may be wrong about this - but I think I remember the NZ function being a "if the field is null, provide value1, otherwise, provide value2."

If my memory is correct, SQL DOES accomplish the same purpose - it just needs to be rewritten as the following:

CASE WHEN myfield IS NULL THEN value1
     ELSE value2
END AS newFieldName

Actually, this is more powerful because it allows multiple "when" statements.

The only limitation I have found in SQL Server as compared to Access is crosstab queries. It is very difficult to make a crosstab query in SQL Server 2000. However, 2005 provides keywords to allow a similar function, so I believe even that hurdle has been jumped.
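For reference, a minimal SQL Server 2005 crosstab using the new PIVOT keyword might look like this (table and column names are invented):

```sql
-- Monthly totals per product, one column per month.
SELECT ProductID, [1] AS Jan, [2] AS Feb, [3] AS Mar
FROM (SELECT ProductID, MONTH(OrderDate) AS Mo, Amount
      FROM dbo.Sales) AS src
PIVOT (SUM(Amount) FOR Mo IN ([1], [2], [3])) AS pvt;
```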
 
belovedcej - yes indeed you could use a CASE statement or an IsNull function (which is very similar to Nz).

My point was that in a query that uses Nz (or any other Access-specific function) you would need to rewrite it, otherwise you are likely to face a much slowed-down response. I wanted to let the OP know that after upgrading to SQL Server she may need to test her queries to find out which have become slower, and replace them with something more specific to SQL Server.
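As a sketch of that kind of rewrite (the object names here are hypothetical), an Access query like `SELECT Nz(Discount, 0)` can be replaced with a server-side view, so the whole expression is evaluated on SQL Server:

```sql
CREATE VIEW dbo.vwOrderDiscounts AS
SELECT OrderID,
       ISNULL(Discount, 0) AS Disc   -- ISNULL is T-SQL's closest Nz equivalent
FROM dbo.Orders;
```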


 
<it should not bother them
Joe means it shouldn't bother them unless they have things in them like payroll data, or background check data, or social security numbers, credit card numbers, and so on. :)

<I wanted to let the OP know that after upgrading to SQL Server [etc]

Yes, absolutely. But I also wanted to make plain that there's an issue of how the consumer and provider handle data fetching. My feeling about your performance improvement was that it stems from changing from fetching one record per server round trip to pulling them all in a single round trip. I'm trying to find an article I read about all this some time ago, and can't seem to find it. I did find this, which is an example of the problem I'm describing: This isn't an Access example, but there are several situations where similar things can come up in Access. I'll keep looking and post if I find what I'm looking for.

Bob
 
From
Fetching a row from the cursor may result in a network round trip each time. This uses much more network bandwidth than would ordinarily be needed for the execution of a single SQL statement like DELETE. Repeated network round trips can severely impact the speed of the operation using the cursor. Some DBMSs try to reduce this impact by using block fetch. Block fetch implies that multiple rows are sent together from the server to the client. The client stores a whole block of rows in a local buffer and retrieves the rows from there until that buffer is exhausted.
What I'm saying is that I suspect that this is the underlying reason that you got the performance boost, that you replaced a row-by-row fetch with a block fetch.

Bob
 
Rosie,
I work with SAP, though it's an Oracle db. However, in general you will *not* be able to connect directly to the SAP tables via ODBC or OLEDB. This is an SAP restriction, SAP accesses the data via a middleware application server, and a proprietary password is used internally.

If you have indeed gotten around this via your BASIS people, then you are on your own as far as any issues that might arise. SAP table architecture is quite complex for many of the key areas, so for example you couldn't just go to, say, the VBAK table and assume that if you summarize it by date then that's your daily sales-order totals.

A simple example is that there could be credit memos issued against that order. That is, SAP doesn't delete/void and then redo an order; that order lives forever, a credit memo is issued, possibly on another date, and then a new corrected order is created. There's an entire document flow involved where you've got to be aware of subsequent or previous documents that may affect the values in the current document.

There are BAPIs that can fetch data for you, and in the new NetWeaver SAP there are other ways of getting to data, but in general you won't get a live ODBC link. What I do is write ABAP code that dumps selected daily data into a .csv file and import that. It seems kludgy, but with SAP locked down so tightly this is a safe way, and the ABAP code can consider the doc flow and fetch the correct data that's needed.
--Jim
 
The queries I replaced were very complex (about six layers deep in query upon query) with unions and the results of one query filtering another query, etc. I have no idea what Access actually did under the hood, but I strongly suspect that for many of the tables it had no choice but to return all records to process them client side. If it had to do row-by-row fetch, well that would just have made things even worse (indeed, some screens just hung - I don't know how long they might have taken if I hadn't shut them down manually).

I have had several projects recently where the project manager thought that just by switching to linked SQL Server tables we would see huge speed improvements because we were "using the power of SQL Server". For relatively simple queries there was some noticeable (but not remarkable) improvement. But for the most complex screens I had to create new stored procedures and set the binding to recordsets (after which there was a huge speed increase).

The moral here is that to really benefit from the potential power of SQL Server, you need to create server-side objects like stored procedures, views, functions, etc., so that as much of the work as possible is done on the server. That's why ADPs are better than MDBs with linked tables: the ADP conversion wizard will turn most of your queries into server-side objects.
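As a rough example of pushing that work server side (the object names are invented), one layer of a formerly nested Access query might become a parameterized stored procedure:

```sql
CREATE PROCEDURE dbo.usp_OrderTotalsByCustomer
    @FromDate datetime,
    @ToDate   datetime
AS
    -- Join and aggregation both run on the server; only the summary
    -- rows travel back to the Access front end.
    SELECT c.CustomerID, c.CompanyName, SUM(o.Amount) AS Total
    FROM dbo.Customers c
    JOIN dbo.Orders o ON o.CustomerID = c.CustomerID
    WHERE o.OrderDate BETWEEN @FromDate AND @ToDate
    GROUP BY c.CustomerID, c.CompanyName;
```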

 
<you need to create server side objects like stored procedures

The article I linked in my last-but-one post is an interesting example of a row-by-row fetch engendered by a stored procedure. Just to be clear, this isn't to contradict what you're saying, Joe, but to amplify it. Even with using stored procedures, you still have to be careful.
 