I am involved in an archiving project that will require backing up historical data from mainframe system tables, some of which are huge (1 million-plus records and up to 100 fields). I'm just looking for some feedback I can use in evaluating Access as a suitable back-end. I know there is a "theoretical" limit of 1 GB for a table, but I'm curious to hear some comments on a table's "practical" limit, i.e. at what point things become unstable and/or performance really suffers.