
Anybody out there interested in talking about MS multitier dev issues?

Status
Not open for further replies.

johnk (MIS) | Jun 3, 1999 | 217 | US
I have a small group (six of us) of business software developers. Since VB4 we have done nothing but OOP and DCOM. There do not seem to be many folks doing this, yet it may well be the most important technology (along with CORBA) to come along in a while. Now that Microsoft has published its certification specs for Windows 2000 Distributed Applications, it is clear that COM+ is the MS way of the future.

We have learned much about DCOM, OOP, and multitier development - most of it the hard way. I'd love to find some more of you to exchange info and experience with.
John K
 
Steven, of course much depends on the particular needs of your environment and your app requirements. Our requirements included absolutely minimizing database locking, achieving the best response times with distributed deployment, and avoiding dependence on any one database. This last item meant we could not use stored procedures.

With interactive screen programs we always use disconnected recordsets (no locking) which reside in the data access tier which normally sits on the database machine. When the UI tier passes down instructions to delete or update we reread each row that has to be deleted or updated. At that point we compare each column we just read with what is in the original recordset to detect any situation where another user had updated a database value that this operation was also going to change (a warning is passed back to UI).

Network traffic is certainly a real issue. We find using arrays to hold recordsets and then passing them as a variant parameter to be acceptable performance, and the data is delivered in a really convenient form (like binding to grids). Sometimes we pass really large arrays between tiers, but frequently pass only the few columns needed for a particular purpose.
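That "recordset into an array" step can be sketched roughly like this (Python used here as a neutral stand-in for the VB6/ADO Variant array - not ADO's exact GetRows layout, which is column-ordered; all data and names are illustrative):

```python
# Sketch of flattening a result set into a plain array before passing it
# between tiers, as described above. Plain Python lists stand in for the
# Variant array a VB6 implementation would pass. All names are illustrative.
def rows_to_array(rows, columns=None):
    """Flatten a list of row dicts into (columns, 2-D array)."""
    if not rows:
        return columns or [], []
    cols = columns or list(rows[0])   # optionally pass only the columns needed
    return cols, [[row[c] for c in cols] for row in rows]

rows = [
    {"id": 1, "name": "Acme", "phone": "555-1234"},
    {"id": 2, "name": "Globex", "phone": "555-9876"},
]

# Full payload, convenient for binding to a grid:
cols, data = rows_to_array(rows)

# Or just the few columns needed for a particular purpose:
slim_cols, slim = rows_to_array(rows, columns=["id", "name"])
```

The point of the single array is that it crosses the tier boundary in one call instead of one call per field.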

For batch processing we open a separate DB connection and put all processing inside one big transaction (begin/commit). This would not be acceptable for really large batches, but our apps work well.

This is somewhat of a non-standard architecture, but it works well for us. I post it here just in case some piece of it might be useful. Changing to disconnected recordsets provided a particularly good performance boost.
 
Hi Steven,

Just thinking again, and I have to disagree with my original suggestion. I think that you should have an InvoiceLine object. This object would obviously have an invoiceid property, which would be used in a search that returns a collection of InvoiceLine objects.

This object should have a Save method that persists the information to the backend: the Save method in your application layer should create a database-layer object, which in turn persists the information to the backend. This should have the same net effect as Tarek's multiple updates.

I think that this may be a more elegant solution than my first suggestion...
 
John, you said, "When the UI tier passes down instructions to delete or update we reread each row that has to be deleted or updated. At that point we compare each column we just read with what is in the original recordset to detect any situation where another user had updated a database value that this operation was also going to change (a warning is passed back to UI)."

This approach, which has the powerful advantage of handling concurrency issues, pretty much precludes the idea of reconnecting a disconnected recordset and issuing an rst.UpdateBatch call, yes?

In fact, the approach you describe is very similar to what I was planning... the difference being that instead of comparing each column I had intended to only compare the records' timestamps. (In fact, I'd intended to make that comparison part of the stored procedure so that there needs to be only one call to the database for each record.)
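For what it's worth, that single-call timestamp check might look something like this (a Python/SQLite sketch standing in for the stored procedure; the table and column names are invented):

```python
import sqlite3

# Sketch of the "compare timestamps in one database call" idea: the UPDATE
# only succeeds when the row's version column still matches the value read
# earlier, so a stale update touches zero rows and can be reported back.
# Table and column names are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, phone TEXT, version INTEGER)")
con.execute("INSERT INTO customer VALUES (1, '555-1234', 7)")

def update_phone(con, cust_id, new_phone, version_read):
    cur = con.execute(
        "UPDATE customer SET phone = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_phone, cust_id, version_read))
    return cur.rowcount == 1   # False means someone else updated the row first

assert update_phone(con, 1, "555-0000", 7) is True    # our read was current
assert update_phone(con, 1, "555-1111", 7) is False   # stale: version is now 8
```

The check and the write happen in one statement, so there is only one call to the database per record.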

Steven
 
Calahan - everyone's entitled to change their mind!

What you describe is pretty much where I think I'm headed. It has the added advantage of allowing me to occasionally work with one of these line item doohickeys on a stand-alone basis. (Okay, in my app it's not really a line-item, it's an Address object.)

I think I feel a little better about having the data-access layer making many calls to the database... seems it's a fairly common approach after all.

Steven
 
Steven, the reason we compare each column is that we are only interested in checking the columns we have changed. Remember, we use this technique only in interactive programs; batch postings use a more conventional approach.

Say User 1 accesses a customer record to change a phone number. While that is under way, User 2 updates that same row with an address change. It is quite all right for the phone number update to be made. However, if User 1 had also changed the address, then a notification message would be passed back to the user, his screen would be refreshed with the new contents of the database, and he would be invited to remake his changes.

In practice this condition almost never occurs. Of course this would vary with the number of concurrent users and the specifics of the app. However, we think our apps are very scalable. The gain is that we almost never have record locks.

Your observation about us not reconnecting disconnected recordsets is correct. In our approach we never do that. We use them only for speed.
 
John, I think I see where you're going. Yours is an even more refined approach, and I can see the advantages for scaling to many users.

I'm guessing that you're saving the original value of each column in order to do the comparison at the data-access layer. If so, mind sharing how you're saving the values?

Steven
 
Steven:
(Array 1) Since we use arrays to pass data between tiers, we always create an array immediately after reading a recordset. That array remains unchanged.
(Array 2) contains the data from the UI tier that is passed back down the line.
(Array 3) contains the data from the reread immediately before the update.

a. We compare 1 vs. 2 to see if we have changed anything. If not, there is no need to update.
b. We compare 1 vs. 3 to see if anyone sneaked in a change to any column during our processing. If not, then we are free to make our update. If so, then:
c. We check 1 vs. 2 to identify exactly which columns we changed, then check whether any of those columns are among the ones found changed in b. If not, we update array 3 with our changes and update the row. If so, we notify the user and invite him to remake his changes.
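Those three comparisons can be sketched as follows (Python as a neutral stand-in for the VB arrays; all data is illustrative):

```python
# Sketch of the three-array comparison described above.
# a1 = row as originally read, a2 = row as edited in the UI,
# a3 = row as reread immediately before the update.
def decide_update(a1, a2, a3):
    """Return ('skip' | 'conflict' | 'update', merged-row-or-None)."""
    ours = {c for c in a1 if a1[c] != a2[c]}       # (a) columns we changed
    if not ours:
        return "skip", None                        # nothing to update
    theirs = {c for c in a1 if a1[c] != a3[c]}     # (b) columns changed under us
    if ours & theirs:
        return "conflict", None                    # (c) same column touched twice
    merged = dict(a3)                              # keep the other user's changes...
    merged.update({c: a2[c] for c in ours})        # ...and apply ours on top
    return "update", merged

a1 = {"phone": "555-1234", "address": "1 Main St"}
a2 = {"phone": "555-0000", "address": "1 Main St"}   # we changed the phone
a3 = {"phone": "555-1234", "address": "2 Oak Ave"}   # they changed the address

assert decide_update(a1, a2, a3) == ("update",
    {"phone": "555-0000", "address": "2 Oak Ave"})   # both changes survive
assert decide_update(a1, a1, a3) == ("skip", None)   # we changed nothing
assert decide_update(a1, a3, a3) == ("conflict", None)  # same column collided
```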
 
Dear John, I've been developing VB apps for a while now, and have found it fairly hard to get into the "OOP" way of doing things. Since the apps have few users (two dozen at most), I usually just use two tiers.

What I do is use classes compiled into the final EXE, as opposed to ActiveX components (DLL and EXE). Would you consider this to be the normal procedure? Where this kind of technology is concerned, I feel pretty lonely (tech-wise), even when talking to MCSDs.
 
DannyB,

I think compiling into a single EXE is just fine when there is no need to deploy in a distributed environment. And deployment is simpler.

Since we always provide for the possibility of a middle tier being installed on a separate machine, we must compile separately. Although we don't have comparative performance data, I have the feeling that performance is not much affected when our two executables wind up installed on the same machine (as compared to compiling them together).

One tip on performance: there are techniques that run fast when compiled together (or even with two executables installed on the same machine) but can be really slow if you later deploy to a distributed environment. Doing individual Property Let/Get calls for property transfer is one; we always wrap multiple properties into an array and transfer it as a single Variant parameter. Another is not declaring your parameters ByVal, which causes additional round trips between tiers.

Some in this thread seem to think I am pretty deep into this stuff. In truth I am a manager of some good tech people who help keep me straight. I seldom actually write the code. So be sure and ask for other opinions.
 
John, thanks for the answer; it's roughly what I thought it would be. I've done much experimenting with multitier and distributed objects. Most of the professional situations I have to solve don't need much distributing, and most of the time they are based on only one server. So what I've been doing (in projects for clients) is separating the user interface (forms) from the business and processing main parts (built out of classes) and the SQL Server (MS) backend.

If I need to pull a tier out of the current design, it shouldn't be too difficult, after considering network round trips.

Thanks again for the answer, and let me know if I can help.
 
Hi John, when you speak about transactions with Access, do you mean updates? I can't see any problems with using two distinct connections to databases. I usually stick to SQL statements, since I still believe that's how you get the most power. Sometimes I use ADO's AddNew method to insert new records; otherwise it's SQL all the way.


Daniel.Barreto@Netc.pt
 
Hi everyone. I've been reading a lot about n-tier development. From what I've read in books, articles, and this thread, it seems to me that most people agree on the division of the tiers (1 - UI, 2 - Business Services, 3 - Data Services, 4 - Database System).

However, there seem to be two approaches to the design and deployment of the second tier (Business Services):

1. Deploy the business services tier on the application server. To minimize network traffic between this tier and the UI, design this tier to be as stateless as possible. This seems to be the approach followed by John K of this thread and Mary Kirtland who wrote Designing Component-Based Applications, published by Microsoft Press.

2. Design the business services tier using a "pure" OO approach, where objects have properties and everything is an object, e.g. an invoice object contains a collection of child objects, each representing a line item. Deploy the business services tier on the client workstation, since this approach would cause too much network traffic if it were deployed on an application server. This seems to be the approach followed by Steven Taylor and VB400, based on Rockford Lhotka's Visual Basic 6 Business Objects (Wrox).

The first approach is better when it comes to maintenance. To update the business services tier, just update the application server. The second approach would require updating all the client workstations.

However, the first approach seems to compromise on design. Stateless objects that are all procedures and have no properties don't sound much like an OO design to me. (That's how I designed my C modules in college. <grin>)

Also, the second approach allows for a richer and better-designed UI. Let's say you have a business rule that restricts a product code to 10 characters, and you want the error caught the moment the user enters the 11th character in a text box. The first approach would require that you implement this rule in both your business services tier AND your UI.
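One way to soften that duplication (my own suggestion, not something described in this thread) is to keep the rule in a single routine that both the UI and the business tier call. A minimal Python sketch, using the hypothetical 10-character product-code rule from above:

```python
# Sketch of keeping a business rule in exactly one place so the UI and the
# business services tier share it instead of each re-implementing it.
# The 10-character product-code limit is the hypothetical rule from the post.
MAX_PRODUCT_CODE_LEN = 10

def valid_product_code(code):
    """Single source of truth for the product-code rule."""
    return 0 < len(code) <= MAX_PRODUCT_CODE_LEN

# The UI calls it on each keystroke to reject the 11th character immediately:
assert valid_product_code("ABC-123") is True
assert valid_product_code("ABCDEFGHIJK") is False   # 11 characters

# ...and the business services tier calls the same function before persisting.
```

In a VB6 deployment the equivalent would be a small shared component installed with both tiers, which of course reintroduces the client-update problem for that one component.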

I'm curious which approach most of you use. Also, what are the other advantages and disadvantages? Are there other ways to go about it so you get the best of both worlds? Please correct me if anything I have stated here is inaccurate. Thanks.

Jason
 

Jason,

The Business Services tier (aka UI-Centric Business Object) resides on the client machine along with the UI. This tier is NOT stateless. The Data Services tier (aka Data-Centric Business Objects) resides on the application server and this IS stateless.

When a client requests a record (or records), the Business Services tier sends a request to the Data Services tier on the application server -- minimal traffic. The Data Services tier retrieves the data, serializes it, and sends it back to the caller -- also very minimal traffic. The Business Services tier de-serializes the data, and the object is stateful at that point.
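That round trip might be sketched like this (Python, with JSON standing in for whatever serialization a VB6 implementation would actually use, such as a Variant array or PropertyBag; all names and data are invented):

```python
import json

# Sketch of the round trip described above: the stateless data-centric object
# fetches a row, serializes it, and can be destroyed; the UI-centric object on
# the client de-serializes the payload and becomes stateful from then on.
# JSON stands in for the serialized form; all names and data are invented.

def fetch_invoice(invoice_id):         # data-centric tier, on the server
    row = {"id": invoice_id, "customer": "Acme", "total": 129.50}  # fake query
    return json.dumps(row)             # one serialized reply, one round trip

class Invoice:                         # UI-centric tier, on the client
    def load(self, payload):
        for name, value in json.loads(payload).items():
            setattr(self, name, value) # object is stateful from here on
        return self

inv = Invoice().load(fetch_invoice(42))
assert inv.customer == "Acme" and inv.total == 129.50
```

After `load`, every property access is local to the client, so no further network traffic occurs until the object is saved.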

Deploying the Business Services tier on the application server would cause every method or property call to go between the client and the application server. This would be a traffic disaster.

The bottleneck is always the application server -- its resources are shared among all the clients, and thus it must be used wisely. This is why we create an object on the server, ask it to retrieve the data, and immediately destroy the object.

I hope this helps! Tarek
 
Hi Tarek,

I like the ideas presented in Rockford Lhotka's Visual Basic 6 Business Objects. I think stateful UI-Centric business objects with properties are a more elegant design than stateless UI-Centric business objects.

My biggest problem with this approach is that it forces us to put the UI-Centric business objects on the client workstation. Updating a business rule would require updating all client workstations.

Other books I've read emphasize that an n-tier design allows us to update business rules by just updating the business objects on the application server. (Of course the authors of these books do not use stateful UI-Centric business objects.)

Do you have any suggestions on how to make updating business rules less of a chore? Thanks for your response.

Regards,
Jason
 
Concerning the issue of where various tiers reside:

I'm not particularly current with the literature but am sharing our experience.

I don't think the question of state vs. stateless needs to control residence. While it is obviously easier to deal with state-maintaining objects if they reside on the client, there are ways of preserving state info for remote objects.

For our non-web-enabled applications we instantiate an object for each calling object, so each instantiated object maintains state for its caller. This permits us to deploy our tiers across any network configuration necessary to handle the volume. All four can reside on one machine for a small stand-alone application, we can distribute each of our four tiers to separate machines for maximum performance, or we can use any other combination.

Another way to maintain state when an instantiated object is called by multiple calling objects is through the use of disk storage. Each calling object must provide identifying data so the appropriate state data may be retrieved.
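That disk-based technique amounts to a keyed state store. A minimal sketch (Python; the file layout and all names are invented, and a real system might use a database table instead):

```python
import json, os, tempfile

# Sketch of preserving state for a shared server object: each caller supplies
# identifying data, and the object's state is parked on disk between calls so
# the appropriate state can be retrieved. A JSON file stands in for whatever
# storage a real implementation would use; all names here are invented.
class StateStore:
    def __init__(self, path):
        self.path = path

    def save(self, caller_id, state):
        data = self._load_all()
        data[caller_id] = state
        with open(self.path, "w") as f:
            json.dump(data, f)

    def restore(self, caller_id):
        return self._load_all().get(caller_id, {})

    def _load_all(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "state.json")
store = StateStore(path)
store.save("client-17", {"cursor": 42})        # a caller parks its state
assert store.restore("client-17") == {"cursor": 42}
assert store.restore("client-99") == {}        # unknown caller: empty state
```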

Distributed architecture is so flexible that different needs can be approached in different ways. That's a good thing, but it sure doesn't make things so easy to understand and use.



John Kisner


 
Hi John,

So are you saying we can use the design described by Tarek above (based on Lhotka's book) and still put all business objects on the application server? Most of the books and articles I've read recommend making objects on the application server stateless to make performance acceptable.

In your experience, if my business objects contain state, properties, and maybe collections of child objects, and I put these on the application server, how well will the system scale?

Thanks.

Jason
 
Jason, I can see that it would make a very big difference how many concurrent users of specific objects you are designing for.

It seems to me that the major resource being saved by the use of stateless objects is memory. Multiple instantiations could also reduce response performance in our arrangement. As to memory, current prices make it a non-issue for us.

As I said before, my knowledge is practical and not academic. What we are doing works very well for our market, which does not include systems with more than 50 users or so. We may well run into scaling problems if we get into very large systems. Also, we recognize that providing web-enabled apps for large numbers of concurrent clients is another ball game altogether.

The use of DCOM and a true multi-tier architecture does provide an important part of scalability. If a middle tier gets bogged down by volume, just slide another box (they don't cost much these days) onto the network and direct some of the clients to the new box. Almost as easy as pop beads. Same for the Data Access tier.

Hope this helps.

John Kisner


 
Sorry for jumping in abruptly like this, but just found this thread...

> As to memory, current prices make it a non-issue for us.

Sure. There's 'scalability', then there's 'SCALABILITY' :)

So up to 500 or 1000 users, you can just throw memory at it. But over that, you need to have the correct architecture, and that's where stateless/stateful comes in.

Regarding your suggestion of "just add another box" -- this works well if you're able to tell some portion of your users to point at machine "B" instead of "A". If you're using a load-balancing device (like an F5 Big-IP) or the MS Windows Load Balancing software (WLBS), they have a tough time keeping track of state, so the application needs to be, again, stateless.

Chip H.
 
Chip,
> Sorry for jumping in abruptly like this, but just found this thread

Unlike you, I've been reading this thread for a while now; this is just my first post in it. I have been bothered by some of the previous comments posted here and have wanted to share my thoughts before now, but somehow I just could not find the words. When I read your post, it was like listening to my own thoughts.

> so the application needs to be, again, stateless.

I was way happy reading your post until that statement. Of course I believe you just misstated what you meant to say.

You probably mean that the components of the application need to be stateless for scalability. Of course the application itself can maintain state using other techniques otherwise it would be fairly useless.

-pete
 