
procs vs dynamic sql 2


jmeckley (Programmer)

This has absolutely nothing to do with asp.net or associated environments, but the topic has been scratched in a few different posts. It's not really a helpful tip either, but I figured the light bulb is a better choice than the question mark.

I'm of the opinion that procs should be avoided as much as possible. Others believe that procs are the gateway to the database. There are many reasons why a developer, team, or company would favor one over the other.

I've been giving this some thought. Why have I chosen dynamic sql (via ORM tools like LLBL and NHibernate) over procs?

Testability first comes to mind. But having testable code doesn't make it good code; it just means the code passes in a specific context. So testing, as a stand-alone argument, fails.

Transparency. The ORM can generate sql statements for any database without my changing code or database objects; all I need to do is update my ORM configs and I'm set. Once deployed this isn't much of a concern, since you wouldn't arbitrarily change your database. However, during development, testing, and demos I may be using a different database. When writing lightweight integration tests I favor SQLite, my testing database uses SqlExpress, and my production box is running Sql Server 2000. I can seamlessly switch between environments without synchronization issues.
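As a concrete illustration, switching databases really is just configuration; here is a minimal sketch (the property keys are NHibernate's, but the dialects and connection strings shown are only examples):

Code:
// pointing NH at a different database is a config change, not a code change
var cfg = new NHibernate.Cfg.Configuration();

// lightweight integration tests
cfg.SetProperty("dialect", "NHibernate.Dialect.SQLiteDialect");
cfg.SetProperty("connection.connection_string", "Data Source=tests.db");

// production would instead use something like:
// cfg.SetProperty("dialect", "NHibernate.Dialect.MsSql2000Dialect");
// cfg.SetProperty("connection.connection_string", "Server=prod;Database=app;Trusted_Connection=True;");

// mappings added here (not shown), then:
var factory = cfg.BuildSessionFactory();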

I also get the power of using OOP to shape my queries in complex scenarios. I can't get that from a stored proc (to my knowledge).

I don't think you can argue performance; in some ways it's like comparing apples and oranges. An ORM needs to hydrate the entity/projection, while a stored proc returns the raw data. Mature ORMs also have many optimizations built in: db-specific tweaks, caching (of both queries and results), entity maps, etc. These can reduce the number of hits to the database.

Security is also a concern, but depending on the sensitivity of the information this may be overkill. For example, a blog doesn't need tight db access, whereas a financial or medical institution would need the highest level of security. But even that can still be handled with dynamic sql. With such systems you would also have security baked into the application itself, so is locking down the db a requirement? I can only say it depends.

I'm interested in your thoughts on this subject. Do you favor procs or dynamic sql? What are your reasons behind that decision?

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
I prefer procs for three main reasons:

1. SQL in a stored procedure is optimized for performance after the SQL Server runs it the first time. This doesn't happen with dynamic SQL.

2. Keeping the business logic in a proc allows me to reuse the procs across multiple applications. This way I don't have to write the same SQL twice, or change it in multiple places.

3. I don't like mixing code. Keep the SQL on the SQL Server and the ASP.NET code on the web server.

Eric
 
Points taken; a few years ago I would have agreed with all of them.

1. You are able to optimize dynamic sql by preparing commands (if the db allows), which lets execution plans be reused. With NH this is as simple as prepare_sql = true in the configs (see the first sketch below).

2. This opens a discussion about where logic belongs. My issue with putting logic in a proc is testability and maintainability; I cannot easily get those with a proc. If I need reusable components I can create an assembly project and reference it across projects. I can also create query objects to reuse components (see the second sketch below). I believe this gets into the topic of Command Query Separation (CQS), but I might be wrong.

3. I totally agree. I don't like sql strings in my code either. However I will settle for HQL as an embedded resource, or criteria builders within the code.
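To make point 1 concrete, here is a minimal sketch of turning the option on in code; prepare_sql is the NHibernate property name, the rest is illustrative:

Code:
// minimal sketch: ask NH to prepare every command it generates
var cfg = new NHibernate.Cfg.Configuration().Configure(); // reads hibernate.cfg.xml
cfg.SetProperty("prepare_sql", "true");
var factory = cfg.BuildSessionFactory();

And for point 2, a rough idea of the kind of reusable query object I mean; the names here (IQuery, ActiveCustomersQuery, Customer.IsActive) are made up for illustration:

Code:
using System.Collections.Generic;
using NHibernate;
using NHibernate.Criterion;

// hypothetical "query object" that can live in a shared assembly and be
// referenced (and unit tested) across projects
public interface IQuery<TResult>
{
    TResult Execute(ISession session);
}

public class ActiveCustomersQuery : IQuery<IList<Customer>>
{
    public IList<Customer> Execute(ISession session)
    {
        return session.CreateCriteria<Customer>()
            .Add(Restrictions.Eq("IsActive", true))
            .List<Customer>();
    }
}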



Jason Meckley
Programmer
Specialty Bakers, Inc.
 
I prefer stored procedures for a few reasons:

1. Performance

It's not so much the cached execution plan, as you have rightly pointed out that dynamic sql can cache these too (though not all the time, so there will be times when an sp outperforms the exact same sql executed dynamically). However, relying on an ORM to produce the sql means that you can't guarantee that the best execution plan is produced. This can lead to significant bottlenecks which you may not be able to control, so you can't easily fix them. It also means that you can't tweak the query with some well-known tips for making sure index seeks are used. If you are only working in a small environment with non-critical systems that only have a few users then this may not be much of an issue, but I think you should always develop applications with scalability in mind, and this is what stored procedures give you that dynamic sql doesn't.

2. Maintainability

If you have all of your database code written in your application and something data-logic related changes (for example, there is now a need to exclude all records with an X status), you have to make that change in your application and go through the process of rebuilding, deploying and incrementing the version of your application*. With a stored procedure approach you simply make the change and it's instantly available within the application, so this is a much easier and quicker approach.

3. Security

If you use dynamic sql you have to grant the sql user SELECT, INSERT, UPDATE and/or DELETE rights to each table. If a security hole is found in the application (can anyone say their application is 100% secure?!) and the hacker uses this to exploit the database, you have just made their job a whole lot easier by giving them rights to all of the data in your database. You don't have this concern with stored procedures, as you just grant execute rights on each procedure and there is no table-level access.

* For me this isn't a trivial task due to the number of instances of the application and the number of users, so it can be very time consuming to get a new version rolled out and re-deployed.

Mark,

Website Design Darlington
Experts, Information, Ideas & Knowledge
ASP.NET Tips & Tricks
 
Mark, thanks for your insight.

1. Very true; you are at the mercy of the vendor/community to produce quality sql statements. Not being a tsql expert, I have actually learned a few sql tricks by reviewing the ORM logs :)

Do you have an example of how an SP can take advantage of index seeks? I'm curious how something like this works.

2. Can't argue that ;) An automated build can assist with this, but I would agree with you on this point. One question though: if you are changing the proc to add/remove columns from the result, wouldn't you need to update your code as well to reflect the new column(s)? Or are you only adjusting the proc for performance gains?

3. I could see merit from a security POV. This all depends on the sensitivity of the data.

Thanks again, I'm enjoying the discussion.

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
Do you have an example of how an SP can take advantage of index seeks? I'm curious how something like this works.

Yes, one of the easiest tricks is sometimes just to add a simple where clause that gives the optimiser a bit of a nudge towards using a seek rather than a scan. Have a look at how this trick alters the execution plan here:


One question though: if you are changing the proc to add/remove columns from the result, wouldn't you need to update your code as well to reflect the new column(s)? Or are you only adjusting the proc for performance gains?
Yes, you're right, it won't help if you are altering the structure of the data being returned, but I was thinking more along the lines of where you may want to add a WHERE clause to filter out some data that you no longer want returned.

This all depends on the sensitivity of the data.
Yes, completely. Although bear in mind that it could also mean loss of data, so it's not just sensitivity but importance as well (although I would caveat this with the obvious need for a backup to be able to restore the data). Saying that, I do recall hearing stories where data was hacked over a period of time and was being altered rather than deleted, so in that case you could say you had lost data integrity and a backup may not save you.

Mark,

Website Design Darlington
Experts, Information, Ideas & Knowledge
ASP.NET Tips & Tricks
 
learn something new every day!

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
Thinking about this more, and after watching a Tech Ed discussion about the role of procs, I'm still not seeing the benefits.

Mark, in the example of scan vs. seek, that's just tsql. I can use the id > 0 clause with an ORM, which would still give me the benefit of a seek (or scan, whichever is better).

Continuing the thought on security: let's assume a standard public website. The web server is in the cloud and the db is behind the firewall. One argument is: if the server is compromised, the hacker has "free rein" over the database (assuming no table-level security). Procs would add a layer of security, but the attacker can still access the system. I guess this depends on how you view this type of breach. I would think compromising the web server is far more of a threat than accessing the db once the server is compromised.

There is definitely value in layers of security, but is this the correct place for it? At this level I think it's splitting hairs.

BTW, if anyone wants to see the discussion on procs, here's the link:

Jeff says he should have another video posted about Procs vs. ORM. I'm looking forward to that panel.

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
Thinking about this more, and after watching a Tech Ed discussion about the role of procs, I'm still not seeing the benefits.

Mark, in the example of scan vs. seek, that's just tsql. I can use the id > 0 clause with an ORM, which would still give me the benefit of a seek (or scan, whichever is better).
Yes, you could add a where clause, but that's another change to the production application. Plus, that's only one example where a modification to a stored procedure could have a very positive effect on the speed and performance of a query; there are other methods that can have a similar effect, and if you don't have control of the sql being executed then you can't apply them. Think of what happens if and when an application is scaled up. We could get literally thousands of hits per second on a database, so when you multiply that figure by the amount of time a query takes to execute you start to see why fine-tuning a query is necessary. Making a query drop from 500ms down to 250ms can give a massive performance boost to the application and its servers.

Let me try to put it another way: how often do you end up checking execution plans to continually improve the performance of your queries? If it's a lot, then what improvements do you make in the ORM and how do you make them? If it's not very often, or not at all, then either the ORM is doing the job very well or you simply don't have the throughput that requires you to. If this is the case then my arguments probably aren't as appropriate to you.

Continuing the thought on security: let's assume a standard public website. The web server is in the cloud and the db is behind the firewall. One argument is: if the server is compromised, the hacker has "free rein" over the database (assuming no table-level security). Procs would add a layer of security, but the attacker can still access the system. I guess this depends on how you view this type of breach. I would think compromising the web server is far more of a threat than accessing the db once the server is compromised.
So because there's the potential of a security breach we shouldn't try to protect our data by any means we can? I don't buy this argument at all; it's a bit like security through obscurity: you don't worry about securing anything as you hide all of the security in the application itself. There have been many systems that have been compromised, and that compromise usually leads to someone accessing data that they shouldn't have permission to view. At least by using a method that by default adds a trusted layer of security, you neatly sidestep this without having to even think about it.

I'm not trying to slate ORMs here and I really do see their benefit in terms of development. They provide an OOP approach that people are used to, they probably speed up the initial development (although I could counter-argue that whichever method you get used to using will eventually be faster than the other), and they allow developers with less SQL knowledge to interact with a database on a level that they perhaps couldn't have otherwise. My biggest concern is that they simply don't give you the final, complete control over data retrieval that is needed for large-scale development.

On a related note, what I would be interested in is seeing how you would do something in an ORM that I would perhaps do in a stored procedure. I'll see if I can think of some examples over the weekend and maybe you could show me what you would do as it may be that I'm under-estimating the control you could get with an ORM.

Mark,

Website Design Darlington
Experts, Information, Ideas & Knowledge
ASP.NET Tips & Tricks
 
Let me try to put it another way: how often do you end up checking execution plans to continually improve the performance of your queries? If it's a lot, then what improvements do you make in the ORM and how do you make them? If it's not very often, or not at all, then either the ORM is doing the job very well or you simply don't have the throughput that requires you to. If this is the case then my arguments probably aren't as appropriate to you.
This is an area where I lack experience, so when given the choice of ORM or proc I now lean towards the ORM, as my experience with tsql administration is at a novice level. I can write and read procs, but I don't have the knowledge of what is happening underneath them.

So because there's the potential of a security breach we shouldn't try to protect our data by any means we can? I don't buy this argument at all, it's a bit like security through obscurity;
Agreed; obscurity (or end-user stupidity) doesn't mean less security is required. Another factor is your audience. Is your application publicly available, or for internal use only? How sensitive is the data (blog vs. financial/medical)? How much time/money/quality can you afford?

what I would be interested in is seeing how you would do something in an ORM that I would perhaps do in a stored procedure.
That would be cool. I look forward to it :)

.... switching gears slightly ....
With an ORM, simple CRUD operations are automatic:
Code:
//select
var customer = session.Get<Customer>(id);

//insert
session.Save(new Customer { Name = "Acme Inc." });

//update
var customer = session.Get<Customer>(id);
customer.Name = "new name";
session.Flush();

//delete
session.Delete(customer);
If I were to wire this to procs I would need to write and set permissions for 4 procs. This is just one entity without any relational data! I can't fathom the idea that you would manually write each one, would you?

Also, whether using an ORM or SPs, how do you abstract your data mapping to the application? Are you using a hand-rolled framework or a 3rd-party data accessor? I wouldn't envision your production code looking like this:
Code:
using(var connection = new SqlConnection(connectionString))
using(var command = connection.CreateCommand())
{
   connection.Open();
   command.CommandType = CommandType.StoredProcedure;
   command.CommandText = "some_proc";
   command.Parameters.AddWithValue("@foo", bar);
   command.ExecuteNonQuery();
}
Thanks again, Mark.

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
If I were to wire this to procs I would need to write and set permissions for 4 procs. This is just one entity without any relational data! I can't fathom the idea that you would manually write each one, would you?
Yes, I would have 4 stored procedures in this case. That's obviously a bigger overhead in terms of the initial development, but it does allow you to tweak each one individually if needed. My initial development of each procedure is also sped up by the fact that I have tools to create CRUD procedures for any table(s) I need, so I often start with this approach and then modify them as I go.

Also, whether using an ORM or SPs, how do you abstract your data mapping to the application? Are you using a hand-rolled framework or a 3rd-party data accessor?
Personally, I use the Microsoft Enterprise Library. This allows me to abstract the actual type of database I'm hitting so in theory I could switch between SQL Server and Oracle with one change in the web.config file (though I've never had to do this yet and don't envisage it happening). A simple example of production code could be something like:
Code:
    Public Function InsertCustomer(ByVal Details As Customer) As Integer
        Dim db As Database = DatabaseFactory.CreateDatabase()
        Dim dbCommand As DbCommand = db.GetStoredProcCommand("InsertCustomer")
        db.AddInParameter(dbCommand, "@Forename", DbType.String, Details.Forename)
        db.AddInParameter(dbCommand, "@Surname", DbType.String, Details.Surname)
        Return db.ExecuteNonQuery(dbCommand)
    End Function

I've just thought of one example that I'd like to see how you handled with an ORM. Say you have the following table and data:
Code:
declare @mytable table (id int identity(1,1), name varchar(10))
insert @mytable values('Mark')
insert @mytable values('Jason')
How would you write something that returned all data from the table unless I passed the id in which case it only returned the matching id? Would you just have overloaded methods that added a where clause if necessary?

Mark,

Website Design Darlington
Experts, Information, Ideas & Knowledge
ASP.NET Tips & Tricks
 
First I need to map a User to the data, using Fluent NH:
Code:
class User
{
   public int id {get;set;}
   public string name {get;set;}
}

class UserMapping : ClassMap<User>
{
   public UserMapping()
   {
      Id(user=> user.id).GeneratedBy.Identity();
      Map(user=> user.name).Length(10);
      Cachable(true); //only valid if using 2nd level cache.
      WithTable("mytable");
   }
}
Then we build the session factory and session manager. This is configured at Application_Start and stored in an IoC container (code not shown).

Since we are both in web development, I need to set up my context boundaries. For this I have an HttpModule:
Code:
class UnitOfWorkModule : IHttpModule
{
   private HttpApplication application;

   public void Init(HttpApplication context)
   {
       application = context;
       application.BeginRequest += beginUOW;
       application.EndRequest += completeUOW;
   }

   private void beginUOW(object sender, EventArgs e)
   {
      application
          .downcast<IContainerAccessor>()
          .Container
          .Resolve<ISessionManager>()
          .OpenSession();
   }

   private void completeUOW(object sender, EventArgs e)
   {
      application
          .downcast<IContainerAccessor>()
          .Container
          .Resolve<ISessionManager>()
          .OpenSession()
          .Dispose();
   }
}
SessionManager.OpenSession() will either open a new session or return the current one.
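Roughly, the manager behaves like this hand-rolled sketch (this is not the actual component I use; it only shows the open-or-reuse behaviour, keeping the current session in HttpContext.Items for the life of the request):

Code:
using System.Web;
using NHibernate;

// hand-rolled sketch of the open-or-reuse behaviour described above
public class WebSessionManager
{
    private const string Key = "nh.current.session"; // illustrative key
    private readonly ISessionFactory factory;

    public WebSessionManager(ISessionFactory factory)
    {
        this.factory = factory;
    }

    public ISession OpenSession()
    {
        var current = HttpContext.Current.Items[Key] as ISession;
        if (current == null || !current.IsOpen)
        {
            current = factory.OpenSession();
            HttpContext.Current.Items[Key] = current;
        }
        return current;
    }
}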

Now I have my webpage wired to a controller object. The Transaction attribute is utilized by my IoC container to manage database transactions (and 2nd-level caching with NH):
Code:
[Transactional]
public class UserController : SmartDispatchController
{
    private readonly ISessionManager manager;
    public UserController(ISessionManager manager)
    {
        this.manager = manager;
    }

    [Transaction]
    public void Index()
    {
        //pass all users to the view;
        PropertyBag["users"] = manager
            .OpenSession()
            .CreateCriteria<User>()
            .List<User>();

        //monorail will select the User/Index.[ext] view by default. I could override if necessary.
    }

    [Transaction]
    public void DisplayUser(int id)
    {
        //pass a specific user to the view;
        PropertyBag["user"] = manager
           .OpenSession()
           .Get<User>(id);

        //monorail will select the User/DisplayUser.[ext] view by default. I could override if necessary.
    }


    [Transaction]
    public void DisplayUserWithEveryoneelse(int id)
    {
        PropertyBag["user"] = manager
           .OpenSession()
           .Get<User>(id);

        PropertyBag["users"] = manager
            .OpenSession()
            .CreateCriteria<User>()
            .List<User>();
    }
}
By wrapping these select statements in transactions I can take advantage of the 2nd-level cache if necessary. By using the AutomaticTransactionManagement component of Castle I can wrap multiple methods within the same transaction.

The first two members are straightforward, each resulting in 3 commands sent to the db:
Code:
begin transaction;
select id, name from user where id = @id;
commit transaction;
or
Code:
begin transaction;
select id, name from user;
commit transaction;
The third query member would produce 4 unique statements:

Code:
begin transaction;
select id, name from user where id = @id;
select id, name from user;
commit transaction;

Now I have 2 options to optimize this. The first is caching: the first time a user is loaded, the tuples (values) are stored in a cache of my choosing, so multiple requests pull data from the cache instead of the database. This brings up concurrency issues, but NH does some work to expire the cache, and you need to determine the volatility of your data.
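For example, an individual criteria query can opt into the 2nd-level query cache; this continues the User example above, and the region name is just illustrative:

Code:
// repeat executions within the cache's lifetime are served from cache
var users = manager
    .OpenSession()
    .CreateCriteria<User>()
    .SetCacheable(true)
    .SetCacheRegion("users")
    .List<User>();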

The other option is to use Future queries. Let's update DisplayUserWithEveryoneelse(int id) to use this feature:
Code:
    public void DisplayUserWithEveryoneelse(int id)
    {
        IFutureValue<User> user = manager
           .OpenSession()
           .FutureGet<User>(id);

        IEnumerable<User> users = manager
            .OpenSession()
            .CreateCriteria<User>()
            .Future<User>();

        PropertyBag["user"] = user.Value;
        PropertyBag["users"] = users;
    }
This will combine the 2 queries into one round trip (if the db allows it, which MS SQL does), so this will produce 3 unique statements. And it will not fire until I either access Future<>.Value or enumerate the collection of users:
Code:
begin transaction;
select id, name from user where id = @id/select id, name from user;
commit transaction;
These are all very simple cases, so repeating this code is acceptable. But say the criteria for selecting a user become complex and require parameters for, say, registered_on > a_date and not banned (from the website).

I could create a query object for reusability:
Code:
class NewUsersSinceASpecificDateWhoAreNotBanned : DetachedCriteria
{
   public NewUsersSinceASpecificDateWhoAreNotBanned(DateTime target_date) : base(typeof(User))
   {
       Guard.Against<ArgumentException>(() => target_date == default(DateTime)); //ensure a date was supplied

       AddCriteria(Expressions.GrEq("registered_on", target_date))
          .AddCriteria(Expressions.Not(Expressions.Eq("status", Status.Banned))); //exclude banned users
   }
}
I can then use this query in code, add additional parameters or modify caching if necessary, and then get the results. The Guard object is a simple check to ensure the argument has actually been supplied.
Code:
var newly_registered_users = new NewUsersSinceASpecificDateWhoAreNotBanned(first_of_month)
   .AddOrder(Order.Asc("name"))
   .SetCacheable(true)
   .SetCacheRegion("key to region")
   .GetExecutableCriteria(manager.OpenSession())
   .List<User>();
I could also project aggregations into Value Objects using ResultTransformers, but that's getting beyond the scope of the question.

There are 2 other alternatives for writing queries.
The first is HQL (Hibernate Query Language), which is very similar to TSQL. It has shortcuts to lower the verbosity of writing queries. HQL queries can be stored as embedded resources, so technically they are not part of the code and can be compiled outside of the actual application in a separate assembly. This is very similar to stored procs in that you access them by name and pass parameters; there is no OOP like the Criteria API I have shown above.

The second is Linq2NH, which is still in its infancy. I'm not comfortable with it yet, but given a few more months of development it should be top notch.

One drawback to NH is the heavy use of strings in criteria. There is a small project which provides a series of extension methods using Expression<> that allow strongly typed queries; very R# (ReSharper) refactoring friendly.

instead of
Code:
AddCriteria(Expressions.GrEq("register_on", target_date))
   .AddOrder(Order.Asc("name"));
you get
Code:
AddCriteria(ExpressionEx.GrEq(user => user.registered_on, target_date))
   .AddOrder(OrderEx.Asc(user => user.name));
If in the future I rename name to FormalName, I hit <F2>, type "FormalName" in one place, and all my code files are updated!
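Under the covers those helpers just pull the property name out of the lambda and hand it to the normal string-based API; here is a simplified sketch of my own (not the actual project's code):

Code:
using System;
using System.Linq.Expressions;
using NHibernate.Criterion;

// simplified sketch of an Expression<>-based helper
public static class ExpressionEx
{
    public static ICriterion GrEq<T, TProp>(Expression<Func<T, TProp>> property, TProp value)
    {
        var member = (MemberExpression)property.Body; // e.g. user => user.registered_on
        return Restrictions.Ge(member.Member.Name, value);
    }
}

// usage: ExpressionEx.GrEq<User, DateTime>(user => user.registered_on, target_date)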

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
Thank you for the detailed explanation.

Let me see if I've understood what is happening behind all of this though as I'm not sure if I've understood correctly or not.

Then we build the session factory and session manager. This is configured at Application_Start and stored in an IoC container (code not shown).
Does that mean that a database connection is opened once and re-used throughout the whole of a user's interaction with the database? Or is the connection an open one that any user can use? What happens when you run "sp_who2" on the database server and look at the entries for your database?

the third query member would produce 4 unique statements
So yes you essentially used overloaded methods for each scenario. The only one I potentially have a problem with would be this one that issued an extra command that maybe isn't necessary; I would have probably ended up using:
Code:
select id, name
from @mytable
where (@id is null or id = @id)
which would return either all the records if no parameter was passed or just the relevant id if one was passed in.

another option is to use Future queries.
Does this essentially just prepare the cached execution plan so it can take advantage of it if the query is sent?

Also, if you start to profile the database to see what is being executed and spot a potential problem, what methods do you have to be able to enhance the performance of your method calls?

Also, while I remember, you may also be interested in this which can help your profiling.



Mark,

Website Design Darlington
Experts, Information, Ideas & Knowledge
ASP.NET Tips & Tricks
 
Does that mean that a database connection is opened once and re-used throughout the whole of a user's interaction with the database? Or is the connection an open one that any user can use? What happens when you run "sp_who2" on the database server and look at the entries for your database?
The session factory is a singleton; it holds all the metadata which translates entities to tables, fields to columns, caching configs, connection strings, etc. The session factory creates sessions. A session != a db connection: if you open a session but no calls are made to the db, a connection will not be opened. Sessions are very cheap to create.
In a web app I use a "session per request" model, where I have 1 session per request. This is handled by the HttpModule, so all queries issued within the request use the same session, and ultimately the same connection. The session becomes my logical unit of work: anything done within that session is either all committed or all rolled back.

The session manager is an abstraction over the session factory. Within the application I'm not going to alter the session factory; all I need from it is CreateSession() and CurrentSession(), and the manager abstracts this.

Another feature of the session manager is that it can contain multiple factories, each one responsible for a unique database. Using the overload manager.OpenSession("key") I can choose which one I open. This allows me to connect to MS SQL, SQLite, and Oracle without caring about the implementation details.

So yes you essentially used overloaded methods for each scenario. The only one I potentially have a problem with would be this one that issued an extra command that maybe isn't necessary; I would have probably ended up using:
Code:
select id, name
from @mytable
where (@id is null or id = @id)
which would return either all the records if no parameter was passed or just the relevant id if one was passed in.
Not quite sure I completely follow. Do you mean you would have 1 query that could produce 2 different outcomes?

I think this begins to broach a philosophical difference in data access.

Get<User>(id) returns a unique instance of a single entity.
List<User>() returns a collection of users. The 2 approaches would never cross.

I could do this
Code:
User user = CreateCriteria<User>()
   .Add(Expression.IdEq(id))
   .UniqueResult<User>();
which would produce
Code:
select id, name from user where id = @id
The advantage of Get is 1st level caching.
If I do this
Code:
var user = session.Get<User>(1);
var same_user = session.Get<User>(1);
I will only call the database once; after the first call the user is stored in the 1st-level cache, which is available for the life of the session.
Code:
var user = CreateCriteria<User>()
   .Add(Expression.IdEq(id))
   .UniqueResult<User>();

var same_user = CreateCriteria<User>()
   .Add(Expression.IdEq(id))
   .UniqueResult<User>();
will issue 2 unique queries and I will hit the db twice. There is no need for that, since I'm in the same unit of work (session = unit of work).



Does this essentially just prepare the cached execution plan so it can take advantage of it if the query is sent?
No, Future batches all the queries into a single "command" (I'm not quite sure of the actual mechanics behind it). Without using Future I issue a command per query:
Code:
1. begin tran
2. select id, name from user where id = @id
3. select id, name from user
4. commit tran
Using Future<> would create this:
Code:
1. begin tran
2. select id, name from user where id = @id
   select id, name from user
3. commit tran
Both selects are issued as one command instead of two unique commands. I prepare statements at the SessionFactory level with the prepare_sql = true option; now all queries will be prepared.

if you start to profile the database to see what is being executed and spot a potential problem, what methods do you have to be able to enhance the performance of your method calls?
This is my area of weakness. Right now I choose the "best" query by looking at overall execution time. I believe you pointed me to an intro-to-Profiler article in the past, but it still hasn't clicked, as I haven't focused on that area in development.

What I can do is copy the query generated by NH into QA, run the query 3 or 4 times, and watch the execution time. From there I can tweak my select statement in QA, then translate that back to criteria/HQL.

NH also gives me the option to write sql directly. I haven't used this option yet.
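If I ever do need it, it looks roughly like this; CreateSQLQuery is the NH entry point, and the query itself (continuing the User/mytable example) is only an illustration:

Code:
// dropping down to hand-written sql through NHibernate
var users = manager
    .OpenSession()
    .CreateSQLQuery("select id, name from mytable where id > :id")
    .AddEntity(typeof(User)) // hydrate the rows into the mapped User entity
    .SetInt32("id", 0)
    .List<User>();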

Also, while I remember, you may also be interested in this which can help your profiling.
I'm an Oren (Ayende) fanboy and love NHProf :) His Rhino Tools set is also very useful.



Jason Meckley
Programmer
Specialty Bakers, Inc.
 
The session factory is a singleton; it holds all the metadata which translates entities to tables, fields to columns, caching configs, connection strings, etc. The session factory creates sessions. A session != a db connection: if you open a session but no calls are made to the db, a connection will not be opened. Sessions are very cheap to create.
Is this metadata built up at run time or when you build the project? I'm just wondering if any database changes would therefore require a rebuild, or is this done "on the fly" to negate the need for one?

Another feature of the session manager is that it can contain multiple factories, each one responsible for a unique database. Using the overload manager.OpenSession("key") I can choose which one I open. This allows me to connect to MS SQL, SQLite, and Oracle without caring about the implementation details.
That's pretty much how the Enterprise Library operates. I can start up an instance of the factory with DatabaseFactory.CreateDatabase(), which will connect to what I have specified as the default database, or I can use DatabaseFactory.CreateDatabase("MyDB") to connect to a specified instance. All of the database details, like the provider, are just held in the web.config file.

Not quite sure I completely follow. Do you mean you would have 1 query that could produce 2 different outcomes?
Yes, that's correct, but I probably didn't use a great example. Say the example I gave was actually for a search page; I might have a stored procedure such as:
Code:
select id,name,department
from users
where (@name is null or name = @name)
and (@department is null or department = @department)
In this case if I didn't specify any of the values for @name or @department then all the records from the users table would be returned, but if the user entered a name or department then that list would be filtered down. This is quite a good shortcut when it comes to things like search screens, but I'm just wondering how an ORM would cope with this or whether you would just resort to two different queries (or even one query that is built dynamically)?

both selects are issued as one command instead of two unique commands
I see, it's just returning multiple recordsets. I rarely use this approach but I guess it just depends on the scenario as to whether it produces a quicker response.

I can't remember what the profiler link was that I pointed you to before, but Denny gives a very brief intro to how to view execution plans that may help. It may take a while to get to grips with what the execution plan is telling you, but once you get a bit of experience with it then it should start to highlight any potential performance problems with a query. I'm not sure how you'll then tell the ORM about this, but I'll leave that up to you as you have a lot more experience in that field than me!

I'm enjoying learning some of the points about the ORM but I must admit I'm not that convinced that it would provide me anything that I'm really missing at the moment. But still, it's good to learn.

Mark,

Website Design Darlington
Experts, Information, Ideas & Knowledge
ASP.NET Tips & Tricks
 
Is this metadata built up at run time or when you build the project? I'm just wondering if any database changes would therefore require a rebuild, or is this done "on the fly" to negate the need for one?
This is a one-time build at runtime. With web apps it's Application_Start; with winforms and console apps it's Program.Main().

If there is a structural change to the db that affects the application then the code would be recompiled and deployed, but I don't see this being any different than with stored procs. Another feature of NH is the ability to build your db scripts from the NH configuration; this gives you a starting point that you can then tweak (say, in QA) as needed.
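For reference, that script generation is done with NH's SchemaExport tool; a minimal sketch (the output file name is just an example):

Code:
// generate the DDL script from the NH configuration without touching the db
var cfg = new NHibernate.Cfg.Configuration().Configure();
new NHibernate.Tool.hbm2ddl.SchemaExport(cfg)
    .SetOutputFile("schema.sql")
    .Create(true, false); // script it; don't execute against the database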

For the sql query, I would optionally add criteria as necessary:
Code:
var query = manager
	.OpenSession()
	.CreateCriteria<User>();

if(username != null)
   query.AddCriteria(Expressions.Eq("name", username));
   
if(department != null)
   query.AddCriteria(Expressions.Eq("department", department));	
   
var users = query.List<User>();
Now this would produce different sql depending on the parameters. I could also write an HQL query (I would recommend checking out the documentation on HQL; it's very slick, and reduces the verbosity of TSQL while adding some OOP to the mix):
Code:
var query = @"
from User u
where (u.name = :name or :name is null)
 and  (u.department = :department or :department is null)";
 
var users = manager
	.OpenSession()
	.CreateQuery(query)
	.SetString("name", username)
	.SetString("department", department)
	.List<User>();
I'll double check with the NH group to ensure the above HQL query would work. I also want to know if that exact query is possible with criteria.

This raises the question: why (name = @name or @name is null)? Is this a way to make a stored proc parameter optional, or are there performance considerations?

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
If there is a structural change to the db that affects the application then the code would be recompiled and deployed, but I don't see this being any different than with stored procs.
The difference with a stored procedure is that you would avoid the need for a recompile and deployment task. If I introduced a new table that now became part of a query (for example, if the users query now relied on everyone being part of a department table, I could just inner join to that table and update the stored procedure). With the ORM approach your code would change hence the need for that new deployment.

For the sql query, I would optionally add criteria as necessary
That's fine, I just wanted to clarify whether the dynamic sql was built based on how you added the criteria.

This raises the question: why (name = @name or @name is null)? Is this a way to make a stored proc parameter optional, or are there performance considerations?
No, it's not for performance in this case. It's just a simple method for creating what I refer to as an "optional parameter", i.e. if you specify a value for it then that clause filters the results, and if you don't then the clause is effectively ignored. If you pass in null, or don't pass that parameter, then the query effectively becomes:
Code:
where (null is null or name = @name)  -- "null is null" is always true, so every row passes this clause
whereas if you do add a value you get:
Code:
where ('Jason' is null or name = 'Jason')  -- only the name comparison can match

Mark,

Website Design Darlington
Experts, Information, Ideas & Knowledge
ASP.NET Tips & Tricks
 
The difference with a stored procedure is that you would avoid the need for a recompile and deployment task. If I introduced a new table that now became part of a query (for example, if the users query now relied on everyone being part of a department table, I could just inner join to that table and update the stored procedure). With the ORM approach your code would change hence the need for that new deployment.
Whether it's a proc or an ORM query change, I would want to incorporate it into a suite of unit & regression tests to ensure the change 1. returns the expected results and 2. doesn't break any other workflows in my application.

Since I have this built into my project, the overhead for testing is low. Deployment is also simplified by 2 factors:
1. my projects are smaller in scope
2. automated deployment (I haven't worked with this too much, but it's definitely the path for some of my upcoming projects)

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
Whether it's a proc or an ORM query change, I would want to incorporate it into a suite of unit & regression tests to ensure the change 1. returns the expected results and 2. doesn't break any other workflows in my application.
We would test this in our development environment and the changes would go through change control into test, staging, and then production. We also have a user testing team, as a lot of automated testing can miss things that a real person may spot (for example, data may be returned that is in the correct structure but not necessarily the correct data). All of this can be done without any rebuilding or redeployment, and changes are recorded in both source control and change control.

Also, in the case of the example I used, extra database privileges wouldn't have to be granted for the new access to the departments table.

Mark,

Darlington Web Design
Experts, Information, Ideas & Knowledge
ASP.NET Tips & Tricks
 
Mark, I was able to use MrDenny's post to tweak some of my queries. Thank you.

Jason Meckley
Programmer
Specialty Bakers, Inc.
 
