No, I work for Purdue in the Division of Financial Aid as a programmer/analyst. My actual experience as a programmer is limited to the two years I have worked for Purdue; my undergraduate education is in Physics. I would be interested in seeing the MS article; if you could post the site, I would appreciate it. I am always willing to learn, that is how I got my job here!

Without seeing the article, my tendency would be to disagree with the denormalization of any database, but especially large ones. I think it is important to have one place where the information is considered 'best', or at least most recent. I do understand that many reports and letters, among other things, are faster to produce from denormalized data, and therefore the 'push' to denormalize. I haven't run into a situation (like yours) where the amount of data is so large that denormalization makes a huge difference, since Purdue still stores its info in flat files!! (Which, technically, is a denormalized nightmare!) Our efforts at using Oracle are still young (5 years), and the flat files are considered authoritative since they are provided by the government, and the government insists that its data be used as authoritative.

Anyway, it would seem that you could produce denormalized data from your production database to run the reports you need, and that way keep the one-place-for-each-piece-of-data concept.

I spent many years in industry, and I know there is nothing more embarrassing than having two or three different addresses for one customer and not knowing which is correct. I even saw one situation where the address was corrected in the accounts receivable dept. and no one else knew about it. We kept shipping their stuff to the old address until one time it finally came back to us two months later (having been set aside by UPS!), and we had already billed them for it and sent out a final notice! Needless to say, the customer was not happy!
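Just to make the 'one place for each piece of data' idea concrete, here is a minimal sketch in generic SQL. The table and column names are hypothetical, not from any real schema: the point is that the address lives in exactly one table, so a correction made in any department is immediately visible everywhere.

    -- Hypothetical normalized schema: each customer has exactly one
    -- authoritative address row.
    CREATE TABLE Customer (
        CustomerID   INTEGER PRIMARY KEY,
        CustomerName VARCHAR(100) NOT NULL
    );

    CREATE TABLE CustomerAddress (
        CustomerID   INTEGER PRIMARY KEY REFERENCES Customer (CustomerID),
        Street       VARCHAR(100) NOT NULL,
        City         VARCHAR(50)  NOT NULL,
        State        CHAR(2)      NOT NULL,
        PostalCode   VARCHAR(10)  NOT NULL,
        LastUpdated  DATE         NOT NULL
    );

    -- Accounts receivable, shipping, and everyone else joins to the
    -- same row, so there is never a question of which address is 'best'.
    SELECT c.CustomerName, a.Street, a.City, a.State, a.PostalCode
    FROM   Customer c
    JOIN   CustomerAddress a ON a.CustomerID = c.CustomerID;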
I don't know if it was you in this thread, or possibly someone else in another thread, who mentioned the idea of keeping the live data fully normalized and generating separate back ends for producing major reports, with the data in those back ends denormalized for speed. It seemed like a pretty good idea for organizations that do that kind of mass production regularly.
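For anyone curious what that refresh might look like, here is one hedged sketch in generic SQL, again with made-up names: the live tables stay normalized, and a flattened reporting table is rebuilt on a schedule (nightly, or just before a big letter run), so the live data remains the single source of truth.

    -- Hypothetical denormalized reporting table, rebuilt from the
    -- normalized live tables before each major report or mailing.
    CREATE TABLE ReportCustomer (
        CustomerID   INTEGER,
        CustomerName VARCHAR(100),
        Street       VARCHAR(100),
        City         VARCHAR(50),
        State        CHAR(2),
        PostalCode   VARCHAR(10)
    );

    -- Empty and reload; reports then read this flat table without joins.
    DELETE FROM ReportCustomer;

    INSERT INTO ReportCustomer
           (CustomerID, CustomerName, Street, City, State, PostalCode)
    SELECT c.CustomerID, c.CustomerName,
           a.Street, a.City, a.State, a.PostalCode
    FROM   Customer c
    JOIN   CustomerAddress a ON a.CustomerID = c.CustomerID;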
Thanks much for all you contribute here. Great stuff.
Jeremy

=============
Jeremy Wallace
Designing, Developing, and Deploying Access Databases Since 1995