
How could you test performance on a big table?


netsmurph

Technical User
Mar 19, 2003
I have been asked to investigate whether a document updating system written in MS Access could be used on a much larger project, and I think it may push Access to its limits.

I was hoping to get some impartial advice based on any expert's experience with similar projects:

The main document management system is an incarnation of Documentum, but all the Access system does is take an ODBC snapshot of the main table first thing in the morning. During the day, engineers record redlining updates (document checking, signing off, etc.), each working from a local copy of Access with linked tables.
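For illustration, the morning snapshot amounts to something like the following DAO routine (all table names here are placeholders, not the real ones):

Sub ImportMorningSnapshot()
    ' Morning snapshot: copy the Documentum main table into a local Jet table.
    ' dbo_Documents is the ODBC linked table; names are placeholders.
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Drop yesterday's snapshot, ignoring the error if it isn't there yet.
    On Error Resume Next
    db.Execute "DROP TABLE tblDocuments"
    On Error GoTo 0

    ' Make-table query pulls the whole table across the ODBC link once a day.
    db.Execute "SELECT * INTO tblDocuments FROM dbo_Documents", dbFailOnError

    Set db = Nothing
End Sub

The dbFailOnError flag makes Jet raise a trappable error rather than silently dropping rows.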

Engineers are mainly adding new records that are related to each document, so record locking does not present an issue.

At the end of the day, a single end-of-day procedure updates three tables in Documentum from the main Access database.
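In outline, that procedure is three append queries wrapped in a transaction, along these lines (the table names and the Pushed flag are placeholders, and whether the Documentum ODBC driver honours Jet transactions is an open question):

Sub EndOfDayUpdate()
    ' Push the day's new records back to the three Documentum tables
    ' over the ODBC links, all-or-nothing.
    Dim ws As DAO.Workspace
    Dim db As DAO.Database

    Set ws = DBEngine.Workspaces(0)
    Set db = DBEngine(0)(0)   ' database opened in the default workspace, so the transaction applies

    ws.BeginTrans
    On Error GoTo Failed

    db.Execute "INSERT INTO dbo_Redlines SELECT * FROM tblRedlines WHERE Pushed = False", dbFailOnError
    db.Execute "INSERT INTO dbo_SignOffs SELECT * FROM tblSignOffs WHERE Pushed = False", dbFailOnError
    db.Execute "INSERT INTO dbo_DocStatus SELECT * FROM tblDocStatus WHERE Pushed = False", dbFailOnError

    ws.CommitTrans
    Exit Sub

Failed:
    ws.Rollback
    MsgBox "End-of-day update failed: " & Err.Description
End Sub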

This seems to work fine with the current volume of data (160 MB) and 15 engineers using the system (although not all concurrently).

The idea is to roll this system out to other sites separately (these will work as independent systems, so no data will be aggregated across sites).

The biggest project in mind involves maybe 4-5 times the current volume of data, with up to 40 engineers needing access to a replica of the current system.

Although only a couple of very senior managers would need to run off large reports, all other users would only need to see the documents flagged for their attention (this keeps bandwidth utilisation to a minimum).
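That per-user view is just a restricted query over the snapshot, something like the SQL below (the field names and the CurrentEngineer() function are placeholders); as long as AssignedTo is indexed, Jet only pulls the matching rows across the wire rather than the whole table:

SELECT d.DocID, d.Title, d.Status
FROM tblDocuments AS d
WHERE d.AssignedTo = CurrentEngineer()
AND d.Flagged = True;

CurrentEngineer() would just be a one-line VBA function wrapping Environ("USERNAME") or similar.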

Although I must stress that this would be an interim solution, my question is:

Would Access be likely to handle it, based on this very short brief, or could anyone suggest a better way that still uses an Access front end (i.e. having the main table as a linked table running in SQL Server)?

Could anyone suggest how to go about measuring performance, to see how a setup like this stresses Access in a simulated environment?

Sorry for the long post, but I needed to cover all the key points.

Many thanks if anyone can point me in any kind of direction.

Andrew

 
netsmurph,

I'm not sure I fully understand your situation, but based on the data volumes you quote I can't see any particular reason Access couldn't handle this kind of load. Even at 4-5 times 160 MB you would still be well under Jet's 2 GB file size limit.

Access's main constraint when working with large-ish datasets (assuming the database design is sound) is network bandwidth rather than data volume. This is due in large part to the way Jet runs queries: the data is pulled across the network and the query is executed locally, rather than at the server. I have built and maintain a number of systems with considerably more data than this, and they all function satisfactorily.
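One caveat worth adding: if the main table did end up in SQL Server as you suggest, a pass-through query sidesteps the local-execution issue, because the SQL is sent to the server verbatim and only the result rows come back. A minimal DAO sketch (the DSN name and the SQL are assumptions):

Sub MakePassThroughQuery()
    ' Create a saved pass-through query; it executes on the server,
    ' so only the result rows cross the network.
    Dim qdf As DAO.QueryDef
    Set qdf = CurrentDb.CreateQueryDef("qryDocCountsByStatus")
    qdf.Connect = "ODBC;DSN=DocServer;Trusted_Connection=Yes;"   ' hypothetical DSN
    qdf.SQL = "SELECT Status, COUNT(*) AS DocCount FROM Documents GROUP BY Status"
    qdf.ReturnsRecords = True
    qdf.Close
End Sub

The trade-off is that pass-through results are read-only, so they suit the big management reports rather than the redlining screens.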

Unfortunately, system performance testing is difficult to perform accurately outside a real-world environment, so this means at least prototyping the system and trialling it on the live infrastructure.
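One cheap thing you can do before a live trial, though, is pad a copy of the main table out to the target volume (re-append the snapshot to itself a few times with new keys) and time your key queries against it. A rough-and-ready timer, where the saved query name is whatever you want to measure:

Sub TimeQuery(ByVal qryName As String)
    ' Open the query as a snapshot and force a full fetch, then report
    ' rows returned and elapsed seconds. Crude, but fine for comparisons.
    Dim t As Single
    Dim rs As DAO.Recordset

    t = Timer
    Set rs = CurrentDb.OpenRecordset(qryName, dbOpenSnapshot)
    If Not rs.EOF Then rs.MoveLast   ' force every row across the network
    Debug.Print qryName & ": " & rs.RecordCount & " rows in " & _
        Format(Timer - t, "0.00") & " seconds"
    rs.Close
End Sub

Run it from a normal workstation over the real network, not on the machine hosting the back end, or the numbers won't tell you much.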

What will work and what won't on your network is something you will pick up over time, so I'd be reluctant to give any kind of definitive answer here.

Having said that, I think this is probably doable without resorting to a database engine other than Jet. As with any database design project, concentrate on getting the user and code interfaces right first, carry out performance testing to identify any problem areas, and then fix those specific problems.

Good luck!

Ed Metcalfe.


Please do not feed the trolls.....
 