
DAO vs ADO


SeanR01 (MIS)
Jan 22, 2002
This probably comes up as a variant a lot, and you might be tempted to point me to a FAQ, but:

1) What is the current state of play between DAO and ADO? A few years ago, around Office 2000, MS were trying to steer people from DAO to ADO. Apparently they are still happy to support both, and are in fact now steering people back to DAO for better performance against Jet databases. I also read somewhere that DAO has some better data access features -- perhaps somebody can tell me what these are.

2) Given 1), if a programmer intends to work with Jet in the first instance, but with the possibility of upgrading to SQL Server if data demands etc. get too high, is it not better to code in ADO at the outset, because ADO works with foreign database connections whereas DAO does not? Otherwise a massive rewrite will be required, and the whole thing will have to be debugged again with new syntax, etc.
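To make the scale of the rewrite concrete, here's a rough sketch of the same recordset open in each model (the table name is invented, and you'd need references to both libraries):

[code]
' DAO version (Jet)
Dim dbs As DAO.Database
Dim rstD As DAO.Recordset
Set dbs = CurrentDb
Set rstD = dbs.OpenRecordset("SELECT * FROM tblOrders", dbOpenDynaset)

' ADO version -- different objects, different arguments
Dim cnn As ADODB.Connection
Dim rstA As ADODB.Recordset
Set cnn = CurrentProject.Connection
Set rstA = New ADODB.Recordset
rstA.Open "SELECT * FROM tblOrders", cnn, adOpenKeyset, adLockOptimistic
[/code]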

Thanks,
Sean
 
Yeah, you need ADO to play with other databases.

I remember the panic back around 2000, and converting TONS of databases from DAO to ADO. I think I even wrote a FAQ for it here.

One of the benefits of working with DAO is, IMO, the ability to get at the DB objects (table definitions and query definitions, for example) a little more easily than you can with ADO.
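For example, walking the table definitions in DAO is about this simple (quick sketch):

[code]
' List every non-system table and its field count (DAO)
Dim dbs As DAO.Database
Dim tdf As DAO.TableDef

Set dbs = CurrentDb
For Each tdf In dbs.TableDefs
    If Left(tdf.Name, 4) <> "MSys" Then  ' skip Jet system tables
        Debug.Print tdf.Name, tdf.Fields.Count
    End If
Next tdf
[/code]

Doing the same thing in ADO means referencing the ADOX library and opening a Catalog object first -- a bit more ceremony for the same result.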

Tyrone Lumley
SoCalAccessPro
 

Hi all of ya,

- if I understand you right, there is a way to do everything with ADO; it's just more difficult

So in other words: stick to ADO?

 
The general rule is: use DAO for MDB databases, ADO for everything else. But if you think you will be upgrading to SQL Server, then use ADO even if you are currently connecting to MDB databases.

I tend to use ADO for everything, because I am more comfortable with it and whatever efficiencies I might get with DAO are not really noticeable most of the time.
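Part of the appeal is that with ADO, much of a migration really is just the connection string. Both strings below are only examples -- adjust the provider, path and server names to suit:

[code]
Dim cnn As ADODB.Connection
Set cnn = New ADODB.Connection

' Today: Jet/MDB back-end
cnn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
         "Data Source=C:\Data\MyApp.mdb"

' Later: SQL Server back-end -- the recordset code elsewhere stays the same
'cnn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
'         "Initial Catalog=MyApp;Integrated Security=SSPI"
[/code]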

Joe Schwarz
Custom Software Developer
 
Thanks for the pointers. I did some more searching; there's already a thread here on this as well.

The probability of migrating to SQL Server might be only 20%, so it's still a little tricky knowing which way to go. Some things can apparently run 5-10 times slower in ADO than in DAO, and apparently you can still connect to SQL Server with DAO via ODBC calls, although that may be less efficient in its own way. So I don't know whether it's essential to recode DAO to ADO on migrating to SQL Server -- perhaps you can just relink the tables and you're done.
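If relinking really is enough, I gather it would look something like this in DAO (sketch only -- the DSN and names are invented, and the table has to be a linked one already):

[code]
' Repoint an already-linked table at SQL Server via ODBC (DAO)
Dim dbs As DAO.Database
Dim tdf As DAO.TableDef

Set dbs = CurrentDb
Set tdf = dbs.TableDefs("tblOrders")
tdf.Connect = "ODBC;DSN=MyAppDSN;Database=MyApp;Trusted_Connection=Yes"
tdf.SourceTableName = "dbo.tblOrders"
tdf.RefreshLink
[/code]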

I'm not trying to do advanced stuff with cursors, stored procedures, etc.
 
You may find this interesting:

Note that Microsoft says:

[tt]DAO makes much more efficient use of the Microsoft Jet database engine than ADO and many operations are faster under DAO, sometimes up to 5 or 10 times faster, such as use of batch updates. Another issue is that the calls made to retrieve schema information are inefficient when applied against Jet. This results in queries and updates against tables with a large number of columns being 30 percent to 80 percent slower than the equivalent query using DAO.[/tt]

30% - 80% slower is quite a chunk.
 
I think I'd rather look good performance-wise now by using DAO/Jet/mdb and worry about the ins and outs of migrating or upscaling later -- hopefully I won't have too many chunks of code to recode, as it will be a small number of routines used many times.

It turns out of course that Jet isn't being deprecated after all, despite earlier efforts by MS.

Hopefully I won't exceed the 2 GB MDB limit. I'll probably have to split the db up into a number of linked MDB files anyway, 1) so that a corruption doesn't bring down several modules at once, and 2) to stay within 2 GB for the foreseeable future -- while losing the inbuilt referential integrity features in doing so. So I will have to do my own referential integrity checks in code in some instances. Certainly I'm a little concerned about MDB corruption: some of my current systems corrupt once in a while, usually when creating a temp table from code or similar, and others never do -- yet all are multi-user.
 
"...losing the inbuilt referential integrity features in doing so. So I will have to do my own referential integrity checks in code in some instances"

Imagine that someone, someday, somewhere will eventually link to your back-ends somehow, without your code checks and unaware of the lack of referential integrity.

There are no triggers in Access to overcome this, so you'll be in deep trouble.

I'd go for only one back-end, even though I'd have to face the size limitation. Maybe think about archiving chunks of records and compacting the back-end at regular intervals.
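Compacting on a schedule is easy enough from code, by the way (sketch -- paths invented, and everyone must be out of the back-end first):

[code]
' Compact the back-end into a new file, then swap it in (Jet/DAO)
Dim strOld As String, strNew As String
strOld = "C:\Data\Backend.mdb"
strNew = "C:\Data\Backend_compact.mdb"

DBEngine.CompactDatabase strOld, strNew  ' fails if the file is in use
Kill strOld
Name strNew As strOld
[/code]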



HTH

Daniel Vlas
Systems Consultant

 
Thanks Dan. I'm ambivalent about the need for referential integrity checks at all in this particular application, and possibly about the concept of using the system's inbuilt referential integrity in general as well.

In this particular app, I know it won't matter much if old records are dumped or archived and no longer match up across tables -- it just won't matter, because it will be ancient history and the system is not critical in an accounting sense. However, I will also try to avoid this happening as much as possible, by careful archiving and 'manual' RI checks at archive time.

I believe a lot of programmers and ERPs do their own coded RI checks and demand that all operations -- deletes, archiving, etc. -- go through the supplied interfaces. That's certainly the case in the commercial ERP I'm replacing: I can't archive a supplier with open transactions, or delete a work order with a payment history, because the interface does the checks and prevents it, not the RDBMS. I seem to remember from my comp sci lectures that it's 'better' to do checks at the programming level than at the system or DBMS level wherever possible, as you have better control over outcomes and get programmer messages rather than cryptic system ones.

For instance, the current ERP would never allow you to change a supplier code and then cascade it through the system -- you can only stop one supplier and start another -- and you cannot archive the first supplier until all accounting transactions are closed against them. This involves checks in code.
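As a sketch of the kind of check I mean (table and field names invented for the example):

[code]
' Refuse to archive a supplier that still has open transactions
Function CanArchiveSupplier(lngSupplierID As Long) As Boolean
    Dim lngOpen As Long
    lngOpen = DCount("*", "tblTransactions", _
                     "SupplierID = " & lngSupplierID & " AND Closed = False")
    CanArchiveSupplier = (lngOpen = 0)
End Function
[/code]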

(I've only used RI once, in a simple, small-scale Access app where it was convenient to use 'cascaded updates' of a key field or similar -- doing it programmatically across multiple tables would have been much more cumbersome and wasted effort.)

I'll just do careful data validation via combo boxes etc. to restrict user choice to current suppliers at the time of a transaction, for instance.
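Something like this on the form, for instance (names invented):

[code]
Private Sub Form_Load()
    ' Only offer suppliers that are still active
    Me.cboSupplier.RowSource = _
        "SELECT SupplierID, SupplierName FROM tblSuppliers " & _
        "WHERE Active = True ORDER BY SupplierName"
End Sub
[/code]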

Archiving is another option, I guess -- I ported out 5 years' worth of work orders from the ERP and it produced a 250 MB MDB file before anything else had been done in the db. Coupled with a similar or greater amount of historical financial data, it might start to add up pretty quickly.

I'm also worried about too many users in one MDB file at a time and the possibility of corruption, hence I was tempted to segregate the files.
 
You are right, there probably are programmers around who, instead of using the database engine for what it is designed to do, and does very well, spend a lot of time reinventing the wheel and programming their own RI, integrity and validation checks -- but my impression is that they are getting fewer and fewer.

I call them "Kamikaze-programmers". What they create, can best be described as disasters waiting to happen - and not seldom, the consequences of such disasters, also influences the terms of their employment.

But the fact that there are still courses that not only encourage this, but also say it's best practice to do validation and RI at the application level, is sad. I think you should ask for a refund.

The good developer will always use the database engine to ensure integrity, perform data validation and enforce RI. This is simply what you have to do if you value your information. Good developers will usually also perform some testing and validation at the application/interface layer, for the convenience of the user and/or to reduce network traffic, but keep validation and RI at the engine level as the main means of ensuring no bad information enters the database. As danvlas comments, what happens when there's a need to access this information from other clients (VBScript, web pages, other Office products, classic VB/VB.NET apps...)?

The good developer will also know that, regardless of competence, there is no way they can outperform the database engine at what it is designed to do. Say, what happens when you forget the proper validation somewhere? Or worse, what happens if you lose power in the middle of some "custom validation/RI" operation? For engine-level operations, the engine will either perform the whole operation or roll it back, while you have no such guarantee for a custom operation...
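And if you do end up hand-rolling a multi-table operation, at least wrap it in an explicit transaction so it's all-or-nothing (DAO sketch, names invented):

[code]
Sub DeleteOrder(lngOrderID As Long)
    ' Delete parent and child rows atomically, or not at all
    Dim wrk As DAO.Workspace
    Dim dbs As DAO.Database
    Set wrk = DBEngine.Workspaces(0)
    Set dbs = CurrentDb

    On Error GoTo RollItBack
    wrk.BeginTrans
    dbs.Execute "DELETE FROM tblOrderLines WHERE OrderID = " & lngOrderID, dbFailOnError
    dbs.Execute "DELETE FROM tblOrders WHERE OrderID = " & lngOrderID, dbFailOnError
    wrk.CommitTrans
    Exit Sub

RollItBack:
    wrk.Rollback
End Sub
[/code]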

Ask yourself what will happen if data is lost, and the loss could have been prevented by very simple, basic means that are taught in any entry-level database course.

To double the Jet capacity, you could try SQL Server Express, which allows 4 GB per database vs Jet's 2 GB.

Roy-Vidar
 