
Determine Table Size?


Bchemist (Programmer) - Apr 3, 2002
I have seen people ask this question a few times, but it always seems to digress into a simple question of "WHY WOULD YOU WANT THIS INFORMATION?" I have a valid reason.

I am trying to find out how large each table in my database is so that I can decide how to split the back-end tables across multiple back-end files.

Summary of my issue:

I have a front-end and back-end set in Access 97. The back end is 938 MB (all data that can be removed has already been archived), so you can see that I am coming right up on the 1 GB limit that Access 97 imposes. I am trying to assess which tables (close to 600 tables at this point) are the largest so that I can account for that when I go to split my back end into multiple back-end files. (Yes, I know this needs to be SQL Server, but I can't get the money OK'd to spring for SQL Server, so I am stuck with what I have licenses for, hence 97 and nothing newer.)

Any ideas? Short of opening each table and taking the record count.

Thanks for anybody's help, and if I left anything out, feel free to ask.
 
Short of opening each table and taking the record count
I wouldn't like to open them all by hand, but it ought to be possible to loop through all the tables and multiply the record count by the total size of the fields. This won't give you an accurate answer because of all the non-data overheads (things like indexes and captions), but it sounds as though you'd be happy with a size to the nearest hundred kilobytes.
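Something like this loop might do it, assuming DAO (the default library in Access 97) and that it's run inside the back-end mdb itself; Memo/OLE data and index overhead aren't counted, so treat the figures as rough estimates only:

Code:
Sub EstimateTableSizes()
    Dim dbs As DAO.Database
    Dim tdf As DAO.TableDef
    Dim fld As DAO.Field
    Dim lngRowBytes As Long

    Set dbs = CurrentDb
    For Each tdf In dbs.TableDefs
        If Left$(tdf.Name, 4) <> "MSys" Then    'skip the system tables
            lngRowBytes = 0
            For Each fld In tdf.Fields
                lngRowBytes = lngRowBytes + fld.Size    'defined field size in bytes
            Next fld
            'rows x bytes per row, reported in KB (RecordCount is only valid for local tables)
            Debug.Print tdf.Name, tdf.RecordCount, _
                Format$(CDbl(lngRowBytes) * tdf.RecordCount / 1024, "#,##0") & " KB approx"
        End If
    Next tdf
End Sub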

Geoff Franklin
 
I think the reason that no one answers this is that tables in a database don't actually have a size - they just occupy the space available according to the properties of those tables and the data within.

Geoff's advice is sound. If it allows you to identify the really big tables, what you could do is export them as .db or .dbf files and see how big these are. The problem with this is that the filename is restricted to 8 characters, so you need to be careful about overwriting previously created files.

An empirical, though cumbersome, way would be to use VBA code to read each table and write it out as a flat file. Put them all in the same folder, sort by size and there you go! All you'd need would be time!
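A rough sketch of that export loop, assuming DAO and a scratch folder such as C:\TableDump that already exists (the path is only a placeholder, and any table names containing characters that are illegal in file names would need cleaning up first):

Code:
Sub DumpTablesToTextFiles()
    Dim dbs As DAO.Database
    Dim tdf As DAO.TableDef

    Set dbs = CurrentDb
    For Each tdf In dbs.TableDefs
        If Left$(tdf.Name, 4) <> "MSys" Then    'skip the system tables
            'write each table out as a delimited text file named after the table
            DoCmd.TransferText acExportDelim, , tdf.Name, _
                "C:\TableDump\" & tdf.Name & ".txt"
        End If
    Next tdf
End Sub

Sort that folder by size in Explorer and the biggest tables jump straight out at you.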
Simon Rouse
 
Thanks for the information. I think with the combined ideas I might be able to figure something out. You are right, I should be able to loop through each table, export it to a new file, and then look at each file size and use that for my decisions. I don't mind then going back and splitting the back end into 3 or 4 larger files.

Thanks for the info
 
The Database Documenter will easily tell you the "record size", and a simple loop with a query will return the record count for all tables. Together these give a first approximation of the size of each "Table".

Other threads in these fora can be used to find all of the object names in a database (do a search on MSysObjects), and one recent one even showed a technique to get the list of all .MDB files in a domain, so the entire process should be amenable to a bit of easy code.
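For instance, something along these lines would combine the two ideas, assuming the usual MSysObjects Type code of 1 for local tables; DCount supplies the record count for each one:

Code:
Sub ListTableRecordCounts()
    Dim dbs As DAO.Database
    Dim rst As DAO.Recordset

    Set dbs = CurrentDb
    'MSysObjects holds one row per object; Type 1 = local table
    Set rst = dbs.OpenRecordset( _
        "SELECT Name FROM MSysObjects " & _
        "WHERE Type = 1 AND Left(Name, 4) <> 'MSys' ORDER BY Name;")
    Do While Not rst.EOF
        Debug.Print rst!Name, DCount("*", "[" & rst!Name & "]")    'name and record count
        rst.MoveNext
    Loop
    rst.Close
End Sub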




MichaelRed


 
Although, again, I'm not quite sure that this is of any real / practical use. I would be more inclined to look into splitting the back end into separate entities based on some logical coherence, such as sets of tables used by specific groups of users (possibly based on ye olde - and much maligned - Access security model). This MIGHT produce a relatively small core of tables needed by most / all, with supplemental stores used by various departments / functions.

Although this is, perhaps, more challenging, a solution based on usage could be much longer lasting than one based on size alone.





MichaelRed


 
As a purist view, MichaelRed, I'm sure you're right. Ideally a database of this size should have gone into SQL Server, as Bchemist admits. But I'm sure that money and pragmatism are the issues. It is entirely possible that, after doing the analysis you suggest, 2 or 3 enormous tables will logically live in the same database and so splitting won't improve matters at all.
Some questions that we've missed so far:
Bchemist - a) has anyone analysed the database to check that it is properly normalised and that there isn't replication or redundancy in the data? b) Does all the data need to reside in the database? Can any be purged or archived so that you only have current data routinely available, with historical data in separate databases? With that much data and so many tables I suspect you have response issues that would be helped by such a process.
Simon
 
DrSimon, you have hit the nail on the head. The reason I am trying to figure out which tables are which size is to know whether I will have, say, 2 tables that are 400 MB apiece, which would defeat the purpose of splitting if they ended up in the same back end even though they logically belong together. Just trying to get all my ducks in a row, basically.

As far as your questions about normalization, yes, you are correct, there are normalization issues that I have been working to correct (this was initially a commercially purchased database, bought for around 50K before I came on). It was horrific, non-functional, and didn't even get much past first normal form. Basically this database runs all parts of our company (an information-based company that provides scientific analysis to its customers), from receipt of samples and orders, to product (a report in this case), to invoicing and financials. You can imagine that changing a database of this nature is quite the task. The normalization has been much improved, and should be completed with the splitting of the tables (I hope). Since I can't ever really take the whole database offline (I have built functions to lock users out of parts of it so I can work on the tables while they are live, and make code updates to a local copy of the data), I have to have all the information before I start a project.
 
... and any chance of archiving? Surely you could move some of the financial data belonging to previous accounting years elsewhere. And guessing from your alias that you're a Chemist or Biochemist, you could equally archive results on the basis of dates of analysis.
And while I understand that SQL Server is pricey, what about upgrading from Access 97 to something newer? Not wishing to be pushy or wish you bad luck - but if the database runs all parts of your company it seems somewhat foolhardy to run on software that is no longer supported by Microsoft.
 
Why not try MySQL or M$ MSDE? They are both free and can be downloaded from the net.


Herman
Say no to macros
 
Bchemist,

I know this thread is heading in a number of directions, but did you ever get to the point of iterating through the TableDefs collection?

I created a table tblTableInfo with fields:
tblName {text}
tblFieldCount {number}
tblRecordCount {number}
tblUpdated {date}


The following code on a command button clears out any data in that table and then appends all of the table names, column counts, record counts and last update dates to the table.

Code:
Private Sub c2_Click()
    Dim dbs As DAO.Database, tdf As DAO.TableDef
    Dim intI As Integer
    Set dbs = CurrentDb
    DoCmd.SetWarnings False
    DoCmd.RunSQL "DELETE * FROM tblTableInfo;"    'clear out the old results
    For intI = 0 To dbs.TableDefs.Count - 1
        Set tdf = dbs.TableDefs(intI)
        'append name, field count, record count and last-updated date;
        'numbers go in unquoted and the date uses # delimiters
        DoCmd.RunSQL "INSERT INTO tblTableInfo (tblName,tblFieldCount,tblRecordCount,tblUpdated) SELECT '" _
            & tdf.Name & "', " & tdf.Fields.Count & ", " & tdf.RecordCount & ", #" & tdf.LastUpdated & "#;"
    Next intI
    DoCmd.SetWarnings True
End Sub

I realize you may be well beyond this point, but if you're still trying to get your ducks in a row...

HTH


John

Use what you have,
Learn what you can,
Create what you need.
 
DrSimon, you are correct, I am a chemist and biochemist both, by education, but I make my living programming and doing IT (better pay, and I have 20+ years of experience at it, so the degree doesn't matter that much). As far as your question about archiving, yes, data can be archived, and is on a regular basis; it is one of several methods I have been using to keep the db size down. But I can only archive so much; the problem is that we have to meet certain government requirements, including keeping all analyses available for 7 years. That makes for a tremendous amount of data being required to be on tap when you consider that most of the data comes from scientific instrumentation and can have hundreds of thousands of iterations per sample. I am constantly in negotiations with the board to go for the SQL Server option, and am close at this point, but it will still be a major project to update the code.

hermanlaksko, I am not very familiar with MySQL, but MSDE I have used, and considered for this situation, but I have a problem of too many simultaneous connections - anywhere from 20-50 at any given time (yes, I know, one more reason for the SQL Server upgrade).

boxhead, thanks for the code, that actually gave me some very good ideas. I have used a modification of that code to dump each table to its own file. Now (excluding indexes and relationships) I have the exact size of each table. Thanks
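In case it helps anyone else, the file-size check afterwards can be as simple as something like this (assuming all the dump files landed in one folder; the path shown is just a placeholder):

Code:
Sub ListDumpFileSizes()
    Dim strFile As String
    strFile = Dir$("C:\TableDump\*.*")    'first file in the dump folder
    Do While Len(strFile) > 0
        Debug.Print strFile, _
            Format$(FileLen("C:\TableDump\" & strFile) / 1024, "#,##0") & " KB"
        strFile = Dir$()                  'next file
    Loop
End Sub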
 
I suspected we had a similar pedigree! And I understand the problem. Admittedly I don't have all the facts, but I'd say that you and your company are at great risk unless something is done about this - but I suspect I'm teaching a grandmother to suck eggs. If what you say is correct, your board doesn't seem to understand the actual value of the data stored nor appreciate what will happen when (not if) things go pear-shaped. I hope for your personal protection you have clear records of any warnings given; if not, start now!
From what you say above, splitting the database as you plan is only going to buy you a bit of time before one of those back-end files hits the 1 GB limit. If the board really refuse to play ball, another approach could be to put the raw data in another database (?MySQL) but leave the code and indexes in Access. Yes, this will mean a significant amount of development and probably a risk to your organisation, but if the board don't understand the risks you've already told them about, does it matter?
 
DrSimon, you are really spelling it out here. But there is nothing like using the correct words to make people understand.

Though I do not think I would put it quite so strongly, I agree totally.
No doubt this database is just waiting for disaster to strike!

BChemist, make sure that your backup is up and running! ;-)

Herman
Say no to macros
 
BChemist,

DrSimon said, "your board doesn't seem to understand the actual value of the data stored", and that seems pretty accurate based on your other comments.

It could be that they see this request for further IT expenditure as just another "new technology upgrade".

If you haven't already, maybe you could help the board to understand the situation better by getting other people in your organization in your corner. Since you said 'this database runs all parts of our company', I'd make sure that Finance knows the implications of corrupted data and Sales knows about possible invoice issues.

Sarbanes-Oxley requirements, ISO standards and the government requirements you cited earlier are good reasons for securing your data. These may be concepts that your board members understand better.

Best case scenario: your system crashes and a fix is forced upon you.

Worst case: your data becomes corrupted, you back up the corrupted data, and a year from now you're trying to figure it all out.

John

Use what you have,
Learn what you can,
Create what you need.
 