Amount of records?


Snappy2873

Programmer
Mar 29, 2002
54
US
I currently administer a Paradox 8 database that is approaching 100,000 records. Does anyone know the practical maximum for a RAD tool like Paradox? We are trying to decide whether moving to MS SQL is the right thing to do, considering the time it takes to run the extensive scripts that generate our result sets. Our bog-down time is increasing along with our record count, so it seems that sooner or later it's gonna explode and by then it'll be too late. Any recommendations would be greatly appreciated.
Thanks,
snappy
 
The number of records doesn't really mean much by itself; 2GB is the maximum table size. I'll re-post something I got off the newsgroups a while back for you:

-----------------------------------------------------
A Paradox table can contain up to 64K (65,536) blocks. The default block size is 2KB, so the maximum file size at the default block size is 128MB. The number of records in a block depends on the record size.

You can increase the block size in the BDE Administrator to 4KB, 16KB, or 32KB. The larger block sizes require that you change the table level from 4 to 7.
-----------------------------------------------------
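
To put rough numbers on that (back-of-the-envelope only, ignoring any per-block overhead):

  65,536 blocks x 2KB/block  = 128MB maximum table size at the default block size
  65,536 blocks x 32KB/block = 2GB maximum table size at the largest block size

Records per block is roughly the block size divided by the record size, so with (say) a 500-byte record and 2KB blocks you'd fit about 4 records per block, or around 260,000 records before the table fills up. That's why a 100,000-record table can already be within sight of the ceiling.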

Mac :)

"Do not delve too deeply in the arts of your enemy and so become ensnared by them"

langley_mckelvy@cd4.co.harris.tx.us
 
Snappy2873,

Actually, the documented limit for Paradox's maximum number of records is 2 billion, and its maximum file size (for the .DB file) is 2 billion bytes, or roughly 2GB. However, as Mac points out, you're more likely to run out of record blocks before you run into either of those limits.

In my experience, poor Paradox performance usually stems from inopportune design choices rather than physical limitations. A poorly designed database will always perform poorly compared to a properly designed one.

To illustrate: Once upon a time, I was working with a company that was being considered as a possible vendor to rewrite an older Paradox system that contained 250,000 records in the main table.

The primary user needed to locate individual records in this table and tended to use QBE queries to locate them. Each query took seven minutes to complete. Since he did most of his work on the phone, this was clearly unacceptable.

While watching how he used his data, we decided to try something. We noticed that he always searched on the same field, so we added a secondary index on that field (which took about 14 minutes to build) and then created a quick-and-dirty form that used TCursors, qLocate(), the new index, and a UIObject resync() call to find the information he needed.
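
In case it's useful, the pattern looked roughly like this. This is a from-memory ObjectPAL sketch rather than the actual code, and the table, index, and field names are made up:

method pushButton(var eventInfo Event)
var
   tc   TCursor
endVar

   ; open a TCursor on the big table
   if not tc.open("MainTable.db") then
      errorShow()
      return
   endIf

   ; switch to the secondary index we added on the search field
   tc.switchIndex("bySearchField")

   ; qLocate() searches on the active index, so the lookup is fast
   if tc.qLocate(fldSearch.value) then
      ; point the form's table frame at the record we found
      MainTableFrame.resync(tc)
   else
      msgInfo("Search", "No matching record found.")
   endIf

   tc.close()
endMethod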

We timed the results over several tries, and each request was processed in less than a second. The timings were unscientific, but the performance benefit was obvious.

In addition, 100,000 records seems like a lot when you think of your address book, but for products like MS SQL Server, Oracle, InterBase, and the like, it's hardly a drop in the bucket compared to their designed capacities.

Remote database servers also come with some pretty high price tags and require significant retraining before you'll feel competent with them.

In short, you may find a better return on investment by reviewing your current application for design techniques more suited to larger databases, such as the index-based approach described earlier.

Finally, make sure you're keeping regular backups. That's an awful lot of data and you'd hate to be the one to have to retype it, wouldn't you?

Hope this helps...

-- Lance
 
