Response time depends on the application and on what is considered acceptable. For example, in my application there are some reports that take a couple of seconds to run. These are infrequently used reports that usually generate a 1,000-page document. Other operations within my app take less than 10 milliseconds.
When writing queries, or examining my application for response time, I usually devote most of my attention to the queries that run most often. I would not tolerate an often-run query that took 3 seconds; likewise, I wouldn't spend much time trying to optimize a 1,000-page report that takes only 10 seconds to create.
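One way to put that prioritization into practice is to rank queries by total time consumed (calls × average duration) rather than by per-call duration alone, so a 3-second query run thousands of times a day outranks a 10-second report run five times. A minimal sketch, with purely hypothetical query names and numbers:

```python
# Rank queries by total time consumed per day, not per-call duration.
# The query names, call counts, and timings below are made up for illustration.
query_stats = [
    ("frequent_lookup", 50_000, 0.003),  # (name, calls/day, avg seconds/call)
    ("order_search",     2_000, 0.400),
    ("big_report",           5, 10.0),
]

# Sort by calls * avg duration, biggest total time first.
by_total_time = sorted(query_stats, key=lambda q: q[1] * q[2], reverse=True)

for name, calls, avg in by_total_time:
    print(f"{name}: {calls * avg:.0f} s/day total")
```

By this measure the mid-duration but frequently run query is the first tuning target, which matches the "don't chase the 10-second report" advice above.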
I remember reading an article discussing how, decades ago, mainframe programmers would spend weeks optimizing 3.8 sec. responses down to 3.3 sec. because it was considered that important.
Then the dial-up Web came along and a 10 sec. response time on a page update was considered OK.
As George said, "optimal" will be in the eye of the beholder.
Jeff
[small][purple]It's never too early to begin preparing for [/purple]International Talk Like a Pirate Day
"The software I buy sucks, The software I write sucks. It's time to give up and have a beer..." - Me[/small]
If returning a single value or a few rows, I always shoot for subsecond. Anything returning a large resultset will never be completely "optimized" as seen by the end user, because it takes time to move the data over the network, as MasterRacker alluded to.
Also, my personal rule of thumb is that any query returning a relatively small resultset which takes more than a few seconds tends to be variable. What I mean is that, in my experience, subsecond queries are consistently subsecond, while a 10-second query might take 10 seconds one day, 30 seconds another, and two minutes the next.
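That variability is easy to surface by timing the same query over several runs and comparing the spread. A small sketch using hypothetical timing samples (in a real check you would record the elapsed time of the actual query on each run):

```python
import statistics

# Hypothetical timings (seconds) for two queries over several runs,
# illustrating the "subsecond stays subsecond" rule of thumb.
samples = {
    "stable":   [0.4, 0.5, 0.4, 0.6, 0.5],    # subsecond every time
    "variable": [10.0, 28.0, 115.0, 12.0],    # "10-second" query on a good day
}

spreads = {}
for name, times in samples.items():
    spreads[name] = max(times) / min(times)   # max/min ratio as a spread metric
    print(f"{name}: median {statistics.median(times):.1f}s, "
          f"spread {spreads[name]:.1f}x")
```

A large max/min ratio flags the queries whose response time you cannot rely on, which are exactly the ones worth investigating even if their typical run seems acceptable.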