I have been asked to investigate whether a document updating system written in MS Access could be used on a much larger project, and I think it may push Access to its limits.
I was hoping to get some impartial advice based on any expert's experience with similar projects:
The main document management system is an incarnation of Documentum, but all the Access system does is take an ODBC snapshot of the main table first thing in the morning. During the day, engineers make updates for redlining (document checking, signing off, etc.). Each engineer does this using a local copy of Access with linked tables.
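Roughly, the morning refresh boils down to a single import step; something like the sketch below (the DSN, user, and table names are placeholders for illustration, not the real ones):

Public Sub RefreshMorningSnapshot()
    ' Drop yesterday's snapshot, if present, then re-import the main
    ' Documentum table over ODBC. All names here are illustrative only.
    On Error Resume Next
    DoCmd.DeleteObject acTable, "tblDocumentSnapshot"
    On Error GoTo 0

    DoCmd.TransferDatabase acImport, "ODBC Database", _
        "ODBC;DSN=DocumentumDSN;UID=report_user;PWD=xxxx;", _
        acTable, "dm_document", "tblDocumentSnapshot"
End Sub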
Engineers are mainly adding new records that are related to each document, so record locking does not present an issue.
At the end of the day, a single end-of-day procedure updates three tables in Documentum from the main Access DB.
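That end-of-day push is essentially a handful of statements run against the linked ODBC tables; a simplified sketch is below (the table names are invented for illustration, the real Documentum schema differs):

Public Sub EndOfDayUpdate()
    ' Push the day's changes back to the three Documentum tables via their
    ' ODBC links. Table and field names are invented for illustration.
    Dim db As DAO.Database
    Set db = CurrentDb

    db.Execute "INSERT INTO dbo_dm_redline SELECT * FROM tblNewRedlines", dbFailOnError
    db.Execute "INSERT INTO dbo_dm_signoff SELECT * FROM tblNewSignoffs", dbFailOnError
    db.Execute "UPDATE dbo_dm_document INNER JOIN tblDocumentSnapshot " & _
               "ON dbo_dm_document.doc_id = tblDocumentSnapshot.doc_id " & _
               "SET dbo_dm_document.doc_status = tblDocumentSnapshot.doc_status", dbFailOnError
End Sub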
This seems to work fine with the current volume of data (160 MB) and 15 engineers using the system (although not all concurrently).
The idea is to roll this system out to other sites separately (these would run as independent systems, so no data is aggregated across sites).
The biggest project in mind has maybe 4-5 times the volume of data, with up to 40 engineers who would need access to a replica of the current system.
Although only a couple of very senior managers would need to run large reports, all other users would only need to see the documents flagged for their attention (this keeps bandwidth utilisation to a minimum).
Although I must stress that this would be an interim solution, my question is:
Would Access be likely to handle it, based on this very short brief, or could anyone suggest a better way that still uses an Access front end (i.e. having the main table as a linked table running in SQL Server)?
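To be concrete about the linked-table option, what I have in mind is re-pointing the main table at SQL Server with a DSN-less link and leaving the rest of the front end untouched; a rough sketch (server, database, and table names are placeholders):

Public Sub LinkMainTableToSqlServer()
    ' Create a linked table pointing at SQL Server, so the front end keeps
    ' using "tblDocumentSnapshot" as before. All names are placeholders.
    Dim db As DAO.Database
    Dim td As DAO.TableDef

    Set db = CurrentDb
    Set td = db.CreateTableDef("tblDocumentSnapshot")
    td.Connect = "ODBC;DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=DocTracking;Trusted_Connection=Yes;"
    td.SourceTableName = "dbo.DocumentSnapshot"
    db.TableDefs.Append td
End Sub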
Could anyone suggest how to go about measuring performance, i.e. how to test Access under this sort of load in a simulated environment?
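The crude sort of thing I could put together myself is a timing harness that bulk-inserts dummy records at the expected larger volume and times a typical query, along the lines of the sketch below (all table and field names are made up), but I would welcome better approaches:

Public Sub SimulateLoad()
    ' Insert a batch of dummy redline records and time a typical lookup, to get
    ' a feel for response times at 4-5x the current volume. Names are made up.
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim i As Long
    Dim tStart As Single

    Set db = CurrentDb

    tStart = Timer
    For i = 1 To 50000
        db.Execute "INSERT INTO tblNewRedlines (doc_id, engineer, note) " & _
                   "VALUES (" & (i Mod 10000) & ", 'TestUser', 'dummy redline')", dbFailOnError
    Next i
    Debug.Print "Inserts took "; Timer - tStart; " seconds"

    tStart = Timer
    Set rs = db.OpenRecordset("SELECT * FROM tblNewRedlines WHERE engineer = 'TestUser'", dbOpenSnapshot)
    If Not rs.EOF Then rs.MoveLast   ' force the full result set to be fetched
    Debug.Print "Query took "; Timer - tStart; " seconds; rows: "; rs.RecordCount
    rs.Close
End Sub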
Sorry for the long post, but I needed to cover all the key points.
Many thanks if anyone can point me in any kind of direction.
Andrew