
Anybody out there interested in talking about MS multitier dev issues?


johnk (MIS) -- Jun 3, 1999
I have a small group (six of us) of business software developers. Since VB4 we have done nothing but OOP and DCOM. There does not seem to be a large number of folks doing this, yet it may well be the most important technology (along with CORBA) to come along in a while. Now that Microsoft has published its certification specs for Win2000 Distributed Applications, it is clear that COM+ is the MS way of the future.

We have learned much about DCOM, OOP and multitier development -- most of it the hard way. I'd love to find some more of you to exchange info and experience with.

John K
 
Hi VB400,

Excellent post. You wrote:

"The Proxy tier has the same interface as the Data-Centric tier (same classes and public methods). The reason for this tier is to allow us to cache fairly static data on each user's machine, thus speeding up the application."

I'd like to hear more about your caching of data on the user machine. We simply load some arrays at Form Load. One issue is that when the source database is updated, the array is not. We are considering adding a "refresh" function the user can invoke. Does your arrangement take care of that?

JohnK
 
Hi John,

Yes, we have a refresh function. Here's how we do it:

On the Data-Centric tier, when a change is made to any of the files that we consider "static", we update a simple utility table indicating that the users' local files need to be refreshed. When a user first logs in to the application, the refresh function is called and checks this utility table. If the table has changed, we recreate the data and send it back to the caller. The utility table contains three columns (a rough sketch of the login-time check follows the list):

1. User's workstation ID
2. File/table name
3. Boolean field indicating "Refresh Yes/No"
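
Something along these lines, in rough VB/ADO terms (simplified, and the table and column names are made up for the example, not necessarily our actual schema):

' Simplified login-time check of the utility table. Table and column
' names here are placeholders.
Public Function TablesNeedingRefresh(cn As ADODB.Connection, _
                                     ByVal sWorkstationID As String) As Collection
    Dim rs As ADODB.Recordset
    Dim colTables As New Collection

    Set rs = New ADODB.Recordset
    rs.Open "SELECT TableName FROM RefreshFlags " & _
            "WHERE WorkstationID = '" & sWorkstationID & "' " & _
            "AND NeedsRefresh = 1", cn, adOpenForwardOnly, adLockReadOnly

    ' Collect the name of every cached table flagged for refresh
    Do While Not rs.EOF
        colTables.Add rs.Fields("TableName").Value
        rs.MoveNext
    Loop
    rs.Close

    Set TablesNeedingRefresh = colTables
End Function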

When the user wants to get data from the data-centric component, they actually create an instance of the Proxy DLL, which turns around and creates an instance on the Data-Centric tier. All parameters are sent using string variables (serialized data).

For those objects that we feel are fairly static, we save the string variable containing the serialized data in a local text file (it is amazingly fast). Now when the user requests that data, the Proxy object retrieves the string from the text file and sends it back to the Business Object. Of course, we check for the existence of the file first; if it is not found, we call the data-centric tier to request the data and create the local text file.
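
Putting those two paragraphs together, the Proxy method might look roughly like this (a sketch only; DataCentric.Departments is a stand-in name for the real data-centric class):

' Sketch of the Proxy tier's cache-or-fetch logic described above
Public Function Load() As String
    Dim sData As String
    Dim sCacheFile As String
    Dim nFile As Integer

    sCacheFile = App.Path & "\Departments.txt"

    If Len(Dir$(sCacheFile)) > 0 Then
        ' Cache hit: read the serialized string straight from disk
        nFile = FreeFile
        Open sCacheFile For Input As #nFile
        Input #nFile, sData
        Close #nFile
    Else
        ' Cache miss: go across the network, then cache the result
        Dim objData As DataCentric.Departments
        Set objData = New DataCentric.Departments
        sData = objData.Load()
        Set objData = Nothing

        nFile = FreeFile
        Open sCacheFile For Output As #nFile
        Write #nFile, sData
        Close #nFile
    End If

    Load = sData
End Function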

The only downside to this approach is that it is not real-time: if a change is made to the "static" data (usually by an app administrator), all other users must exit the app and log back in to see the changes. But that is exactly why we do this for only some of the files.

The great thing about this approach is that the Proxy tier can grow to include additional logic for distributed data. For example, you may have a third-party application that you call with a street address and that returns a valid nine-digit ZIP code. That call can be made from the Proxy tier without impacting your Business Objects or Data-Centric tier. And so on...

I hope this helps!

Looking forward to further communication with you and others on this topic. I'm preparing a few questions of my own!

P.S. I am so glad that you started this thread :)

 
Hi VB400,

I have two questions. The first is like John's: basically, why do you have a proxy tier? From my reading, this tier sits on the client and its purpose is to cache data. Why do you need this extra tier? Could the methods not be encapsulated in the UI and make use of the data tier directly (why rewrite the functions of the data tier)?

Also, you say that you have the following architecture on the client:

"On the user's workstation, I have the following:

User Interface tier (.exe)
Business Objects tier (.dll)
Proxy tier (.dll)"

...and that the proxy tier is the same as the data tier (am I right?). This system does not seem particularly like a DCOM architecture.

Secondly, why don't you use MTS for the other DLLs? I also thought that MTS could only deal with DLLs, not EXEs.

PLEASE don't take this as criticism -- I'm only confused, as I'm still learning and have not heard much talk of a proxy tier.

C
 
Hi Calahans,

You're absolutely right about the ability to cache data in the UI tier; however, by doing so, we would put the burden of saving and retrieving the data on the UI developers, which could lead to problems. You would have to make sure that all UI developers know they need to save the data locally if they want fast access. Additionally, every time you develop a new UI, you would have to remember to repeat that functionality.

In order to discuss the purpose of the proxy tier, we must remember that we're designing our applications using the concept of Business Objects. In our actual implementation of Business Objects (BOs), we split the BOs into two tiers: a UI-Centric tier and a Data-Centric tier. Thus, each tier is one half of the complete Business Object. Shielding the UI tier, or even the UI-Centric tier, from data-access functions is an important design concept.

For example, say I'm a UI developer and I need a list of Departments in various forms. I should be able to do the following:

Dim objDepts As Departments
Set objDepts = New Departments
objDepts.Load

As a UI developer, I don't want to worry about where the data may live. I want the Business Object to deal with that stuff. The Business Object is now comprised of the UI-Centric, Proxy and Data-Centric tiers.

I actually started out our application without a Proxy tier and was saving "static" lists in global variables. As time went on, I created more of these global variables. The problem with this approach is that every time the application started, the global variables were rebuilt, which caused a significant "pause". Additionally, this put too much emphasis on how much memory was needed on each user's machine. I could have saved the data in text files and retrieved them as needed; however, refreshing those text files became a problem.

Additionally, we're planning on having remote sites use our application. In order to do that, we must take "down time" into consideration, meaning that the connection between the remote sites and the application server in the central office may go down. Although I haven't completely thought through that process, it seems that having a Proxy tier will allow us to either use a temporary local database (e.g. Access) or make use of MSMQ -- in either case, the UI-Centric tier is unaware of any changes.

The bottom line is that there are many ways to accomplish the same thing. For me, the Proxy tier seems to fit in perfectly with Business Objects design, especially when the proxies are very lightweight.

The Proxy tier has the same interface as the Data-Centric tier, but it does not have any of the data-access logic. I think of it as a "logical router" between the two main parts of the Business Object.
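
To make that concrete, a skeleton of such a proxy class might look like this (all names here are just for illustration):

' Hypothetical Proxy class acting as a "logical router": same public
' interface as the Data-Centric class, but no data-access logic.
Private mobjData As DataCentric.Departments

Private Sub Class_Initialize()
    Set mobjData = New DataCentric.Departments
End Sub

Public Function Fetch() As String
    ' Reads may be answered from the local text-file cache described
    ' earlier; otherwise the call is forwarded to the server.
    Fetch = mobjData.Fetch()
End Function

Public Function Save(ByVal sSerialized As String) As String
    ' Writes always route straight through to the Data-Centric tier
    Save = mobjData.Save(sSerialized)
End Function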

You mentioned that using the Proxy tier "does not seem particularly like a DCOM architecture". Can you elaborate on that?

As for MTS, I only use MTS on the application server machine, and you're correct: we have an ActiveX DLL running in MTS (that was a typo in an earlier posting -- sorry about that!). I only use it on the application server because that is the only tier shared by many users, and it gives us a great visual of which objects are in use at any moment. It is certainly not required by the application, especially since we're not doing any kind of commitment control (transactions) -- not yet!

I hope I didn't cause more confusion with this post. I'm just as new to this methodology as the next person, so I hope others will post their opinions so that we all can learn -- that's why I love this thread.

Thanks!

 
Hi VB400,

I'm a manager, no longer much of a techie, so weigh this accordingly.

I'm not very comfortable with your structure as I understand it. Having your application manage the details of remote copies of your home database so that it can operate when connections are broken sounds like taking on a huge programming burden. Is database replication a possibility?

As to performance issues in dealing with the volume of data transferred between the User Interface and Business Services tiers, our concentration has been to limit the volume, not to keep cache buffers. Once we reconciled ourselves to not having bound controls in multi-tier, we used techniques to limit transfer volumes to only what was necessary to populate the screen. Large validation tables that needed to be resident on the UI for speed were stripped down to only the essential columns.

All of our recordsets are "resident" in the data access tier, with portions of them passed when needed through the BS tier to the UI as arrays wrapped in one Variant variable. All of this has given us good performance, even with fat clients over dial-up connections. We have not provided for connectionless operation.
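
For what it's worth, the "arrays wrapped in one Variant" technique is essentially what ADO's GetRows gives you. A simplified example (the query and column names are made up):

' Pass a resultset back as a 2-D array wrapped in a Variant
Public Function GetCustomerList(cn As ADODB.Connection) As Variant
    Dim rs As ADODB.Recordset
    Set rs = New ADODB.Recordset

    ' Select only the essential columns to keep the transfer small
    rs.Open "SELECT CustomerID, Name FROM Customers", cn, _
            adOpenForwardOnly, adLockReadOnly

    ' GetRows returns a Variant holding a 2-D array: (column, row)
    GetCustomerList = rs.GetRows()
    rs.Close
End Function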

Please keep posting. I'm strongly interested in what works for others.

Regards, JohnK
 
Hi John,

Oddly enough, I'm not sure if I disagree!

I mentioned in my previous posting that I haven't worked through the remote access process, as the project isn't quite there yet. However, for some environments it seems there must be a backup solution for when connections are down. For example, we have a system where operators take over 1,000 calls per day to schedule appointments. If we do this from remote sites and the connections are down, we would have a major problem. I'm not sure what the best answer is, but rejecting that many calls because "the system is down" is not one of them -- I'd get fired by lunch time )-:

I'm interested in your thoughts regarding database replication, so when you get a chance, please expand on that a bit!

As for the performance issue, I agree with you 100%. To expand on that, however, I would like to state that we're very careful to pass minimal information between the client machine (UI, UI-Centric and Proxy tiers) and the application server (Data-Centric tier). This is obviously the most expensive data transfer (performance-wise). For whatever static information is cached and saved on the desktop, its transfer from the Proxy tier to the UI-Centric tier to the UI (say, to load a combo box) is incredibly fast. Again, the data cached by the Proxy tier is saved in a text file "as is", meaning as one string (the one coming back from the Data-Centric tier). For example:

Open App.Path & "\Departments.txt" For Output As #1
Write #1, strDepartments
Close #1

Where strDepartments is the string coming back from the Data-Centric tier, which includes a list of all departments. When the same user requires the list of departments multiple times throughout the day, we don't have to go across the network to get the data, as the data is local. Neither the UI nor the UI-Centric objects know whether the data is cached or coming from the database; however, the difference in speed is incredible.
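
Reading it back is just the mirror image (Write # pairs with Input # in VB):

' Read the cached serialized string back from disk
Dim strDepartments As String
Open App.Path & "\Departments.txt" For Input As #1
Input #1, strDepartments
Close #1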

I would like to see some suggestions on the first problem: what to do when critical remote sites lose communication with the Application (or Data) Server.

Hope to hear something soon!

If you would like to discuss this issue in person, please contact me at regalsys@earthlink.net and we can exchange phone numbers, then post our discussions on this thread for everyone's benefit (hopefully).
 
VB400,

We have not yet used replication but expect to in time, so my suggestion is speculation.

Also, if your database is DB2/400, I have no idea how replication would work with, say, a remote database resident on a Windows 98 system. Maybe IBM makes DB2 for Windows 98? Or maybe you could use Windows NT or Win2000 (for workstations); I think IBM has DB2 for them.

This thread has gotten pretty long. Maybe start a new thread on replication in the database forum to pursue this?

JohnK
 
VB400,

I might be able to shed some light on the disconnected remote issue. We have a project that we are undertaking, but at this point our ideas are also speculative. Typically there is a unique ID in the database that represents, in your case, a "call". If you know the system is going to disconnect from the LAN, you have the application request a bank of IDs to, so to speak, "take on the road". Then you log calls against the IDs you checked out. The key is to find a good range for the number of IDs to check out: too few and the user can't do their job and you're gone by lunch time; too many and it's overkill. When the system is in remote mode, it uses the IDs from the pool it created when it checked them out. When it returns to the LAN for docking, it passes all the new calls to the system for updating and then checks in any unused IDs. There should be one central process in charge of the check-in/check-out numbering scheme -- if you have more than one, you're asking for duplicates in the numbering scheme. The systems that stay connected to the LAN get their numbers from the same scheme. In this manner the "Number/ID Generator" knows which numbers are checked out and which ones it can hand off to applications that need them.

So in your case, when a system goes remote, you might check out 2,000 IDs for calls. If it uses 1,500, then 1,500 records get updated in the database when the remote comes back to the LAN, and the 500 unused IDs go back into the system pool in the ID Generator on the server.

This process tends to fragment the IDs somewhat, but really they are just unique identifiers, so even if we used a random number algorithm to hand them out they would still be scattered around.

Our vision was to have the remote request X numbers. The ID Generator would look to see which numbers are available. For instance, if I needed 100 IDs, I might get back 506 -> 588, 648, 650 -> 651, 702 -> 715 (that's 100 numbers). The remote application then uses them one at a time. The unused ones are returned upon docking and syncing. While the remote is connected to the LAN, IDs are requested one at a time from the ID Generator. The main idea behind this is to eliminate duplicates and enforce referential integrity in the database.

We looked at using sockets to do this over TCP/IP. This is just a concept; we haven't implemented it yet.
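
In skeleton form, the ID Generator's check-out/check-in might look something like this (all names invented; again, this is just a concept):

' Conceptual ID Generator: one server-side pool of free IDs. A remote
' checks out a block before undocking and returns the unused ones.
Private mcolAvailable As Collection   ' pool of free IDs (Longs)

Private Sub Class_Initialize()
    Set mcolAvailable = New Collection
End Sub

Public Function CheckOut(ByVal lHowMany As Long) As Collection
    ' Hand the caller up to lHowMany available IDs
    Dim colOut As New Collection
    Dim i As Long
    For i = 1 To lHowMany
        If mcolAvailable.Count = 0 Then Exit For
        colOut.Add mcolAvailable(1)
        mcolAvailable.Remove 1
    Next
    Set CheckOut = colOut
End Function

Public Sub CheckIn(colUnused As Collection)
    ' Return unused IDs to the pool when the remote docks
    Dim vID As Variant
    For Each vID In colUnused
        mcolAvailable.Add vID
    Next
End Sub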

Hope that helps!

Steve Meier
sdmeier@jcn1.com
 
Hi John,

I am basically developing an Internet product using VB components as the middle tier, ASP as the front end, and SQL 7.0 as the backend with ADO.

We are trying to move to COM+, but the one hitch is that with VB components I cannot avail of object pooling, which is a major setback since we definitely need object pooling.

Right now we are using MTS and DCOM, but the hitch is that I cannot currently specify how many objects MTS should create. Does it create objects until it runs out of memory, or is there a maximum limit? If there is a maximum limit, does it queue the rest of the object requests?

If anyone could throw light on this, it would be a great help.
 
rain,

Sorry, we have no experience with MTS or ASP. Our apps are not (yet) web enabled. And we built our own data access module, which minimizes calls for connection objects (a decidedly non-standard approach).

Sounds like you have chosen a good architecture. Please let us know how you progress.

John Kisner
jlkisner@jlkisner.com
 
Hi!

Does anyone in this exciting forum have any idea why it is not possible to use DCOM across domains? What I mean is that if I try to create an instance of an object on a server that I'm not logged onto, I get an "Access Denied" error. If I log in, there are no problems whatsoever.

All settings in the DCOM Config are set to accept all users, but I think those are NT users on the server. Isn't there a way to grant access to users who are not logged on to the server?

An answer that solves the question saves my day, and a bunch of others' too!

I have found that MTS makes it possible to create the object even though I'm not logged on to the server, but it still denies access when I try to use the interfaces of the object. Why is that? Any clues?

Thanks for your time!

Regards,

Nikolaj
 
Nikolaj, we had a similar problem, if not exactly the same.

Our bottom two tiers resided on an NT Server, and it was the server permissions that blocked us: we had to give users administrative permission before objects could be instantiated. But we sure didn't want all users to be able to blunder into administrative tasks.

We assigned all users to a user group (for convenience) and gave that group administrative privileges. But then we also denied that group access to the appropriate list of drives, directories, etc., which kept them from doing any administrative damage.

John Kisner
jlkisner@jlkisner.com
 
Hi John,

Thanks for the response, but what you suggest as the solution is exactly what I don't want to do! I want to let a machine that is not logged on to the server where the middle tier object is supposed to be instantiated create that middle tier object.

Did your users log in to the server before creating the object, or what?

But thanks anyway!

The thing is that I want to make a web app that grants access to a database that resides on a server. Maybe I'm handling the problem wrong, so if anyone has a suggestion on how to do it, I really want to know. Maybe ASP is the thing? I don't know much about ASP.
 
Nikolaj,

Sorry, our development experience is limited to LAN and WAN environments, all with VB code in the UI tier. During the coming months we will be researching the steps necessary to web enable our apps. Maybe there will be posts here that will help us both.

John Kisner
jlkisner@jlkisner.com
 
Getting and Persisting Collections...

Like many others who've contributed to this thread, I'm trying hard to wrap my head around N-tier architecture. Now I'm hoping to get some outside perspective on handling collections (a.k.a. multiple-record resultsets).

My UI tier needs to display an entity that has multiple child entities. Let's assume it's the classic Invoice / Line Item scenario. The UI invokes a BO for the Invoice with something like oInvoice.Load(InvID).

The oInvoice object has as one of its properties a collection of LineItem business objects, so that the UI can use a For ... Each construction to loop through the LineItems collection and work with the individual members.

A LineItem BO exists because there are business rules to be applied when the user attempts to make edits, and those rules would be the same regardless of what the parent object is (it might be oPurchaseOrder).

I can't seem to noodle out how this scenario is handled in an N-tier case. Does each element in the colLineItems collection invoke its own call to the data tier to get saved (since some might have been updated but others not)? If so, doesn't that mean a potentially very large number of individual calls to the data tier, when a single UpdateBatch would have worked in a C/S environment? What's the approach?

TIA for any feedback. Just reading the entries on this thread (dating from over a year ago!) has been very helpful.

Steven
 
I'd go down the road of creating an invoice object which has an array of invoice lines. The object you are dealing with is an invoice; there need not be a separate line object.

Read the data from the database, parse it into an array and return it to the front end. This requires no extra DB calls.

HTH
 
Steven, I'm not really much of a techie, but when we started down the COM and DCOM road we found that the "academic" models could lead to needlessly complex logic structures. Calahans' approach is also the one we have been using since VB4, and it serves us well.

We don't have an invoice class that "knows all the detail" of an invoice; our invoice header class knows all about the invoice header and footer, and our calling logic instantiates a separate class for the detail. The detail class can return an array containing all detail rows (we almost always avoid referencing properties directly, to avoid slow run times in our distributed architecture). In our particular design, each row to be added or updated is passed to the database one at a time through a method in the detail class (called, of course, from the same module that instantiates both classes).

 

Steven,

We do it differently -- what a surprise, huh!

We don't use arrays to hold the detail line items; we use a collection of a LineItem class. In reality, this is not much different from Cal and John's method. The concept behind either method is that we minimize traffic between the user's desktop and the application server where the data-centric tier resides.

The idea is this: when you retrieve an invoice from the server, retrieve all the detail with it in the same call. Do NOT return the header record by itself and then ask for each individual detail record separately, as this will kill your network. Of course, in your InvoicePersist object on the server, you can add a parameter to your Fetch (or Load) function that specifies whether or not you want detail records, for cases where you have no need for them.

In the front end, you make all your changes to the LineItem objects (including additions and deletions); then, when you want to apply your changes, you package them up with the invoice header and send them to the data-centric tier for saving -- again, one call to the server.

- Frontend requests an invoice:
MyData = objServer.Fetch(InvoiceNo)

- Backend does the following:
Retrieves the header record
Retrieves the detail records
Serializes all the data and returns it to the caller as one string

- Frontend de-serializes all the data.

- When the user is done updating, all the data is serialized and sent to the backend in a call to a Save method.

- The Save method de-serializes the data and makes the database updates.

This is the overall flow, and it works great for us. I think most people have the same structure; where we may differ is in the way we serialize data or in the implementation of the UI-centric BO. The most important point here is to minimize travel across the network.
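
The packaging step might look roughly like this (the delimiter format and the Serialize method are invented for illustration; our actual format differs):

' One call to the server: package the header and all line items into
' a single serialized string, then send it in one network round trip.
Public Sub SaveInvoice(objServer As InvoicePersist, _
                       objHeader As Invoice, colLines As Collection)
    Dim sPackage As String
    Dim objLine As LineItem

    sPackage = objHeader.Serialize()
    For Each objLine In colLines
        sPackage = sPackage & "|" & objLine.Serialize()
    Next

    ' The data-centric Save de-serializes and applies all the
    ' inserts/updates/deletes on the server side.
    objServer.Save sPackage
End Sub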

I hope I'm somewhat right about all of this; after all, the application is already in production <grin>.

Tarek
 
Thanks for the great responses.

Tarek, I agree that everyone is focused on minimizing travel across the network, and that the distinction between using a collection of child objects as opposed to an array (while important for other reasons) is not important here.

In either case we're talking about wanting to make a single call across the network to do fetches and updates, yes? You talk of "serializing" the data. Am I understanding correctly that "serializing" means converting an object's state into a single string in some fashion or another?

The devil's in the details, right? <g> I'm lost at the point where the data-centric layer actually makes calls to the database.

Assume that my BO has serialized itself and made a call to the data-centric layer. This is the component that is responsible for making the connection to the database and invoking the appropriate stored procedures. (I think this matches the generally approved approach...)

Further assume the data-centric layer has a method that receives this serialization. It de-serializes it and comes up with updates that need to be made to the parent object (Invoice Header) and many child objects (Line Items). Some child objects need to be added, some updated, some deleted.

Now, I'm very tempted to use ADO disconnected recordsets here. I could see making a preliminary call to the SQL Server to populate an initial (perhaps hierarchical) recordset, coding a loop that applies any changes received from the BO, and then executing an UpdateBatch.

However, I'm told that it's better to use SQL Server's stored procedures for reasons of security and performance. Given that, the best approach I can come up with is:
[tt]
- create a command object for spUpdateHeader
- create a parameter for each updated property
- execute command object
- For each LineItem
    - create a command object
    - If LineItem is Deleted
        - Set CommandText to spDeleteLineItem
        - Set parameters
    - Else
        - Set CommandText to spUpdateLineItem
        - Set parameters
    - execute command object
- Next LineItem
[/tt]
My problem with this is that I'd be making so many individual calls to the database. That's bad, right? Or maybe not, if the data-centric tier is physically located on the SQL Server?
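
The best mitigation I can think of is to build each Command once, prepare its parameters, and reuse it with fresh values for every line item -- something like this (using the procedure and parameter names from my outline above, which are hypothetical, as are LineItem and colChangedLines):

' Reuse one Command per stored procedure instead of rebuilding it
' for every line item.
Dim cmdUpdate As ADODB.Command
Set cmdUpdate = New ADODB.Command

With cmdUpdate
    .ActiveConnection = cn            ' an open ADODB.Connection
    .CommandType = adCmdStoredProc
    .CommandText = "spUpdateLineItem"
    .Parameters.Append .CreateParameter("@InvoiceID", adInteger, adParamInput)
    .Parameters.Append .CreateParameter("@LineNo", adInteger, adParamInput)
    .Parameters.Append .CreateParameter("@Qty", adInteger, adParamInput)
End With

' Still one round trip per changed line, but no per-call setup cost
Dim objLine As LineItem
For Each objLine In colChangedLines
    cmdUpdate.Parameters("@InvoiceID").Value = objLine.InvoiceID
    cmdUpdate.Parameters("@LineNo").Value = objLine.LineNo
    cmdUpdate.Parameters("@Qty").Value = objLine.Qty
    cmdUpdate.Execute , , adExecuteNoRecords
Next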

What am I missing?

Thanks again for the quality posts you folks are doing!

Steven
 

Unfortunately, we're using DB2/400, not SQL Server. Although DB2/400 is capable of stored procedures, we're not using them at all, so I'm not the right person to talk about them.

I'm currently learning about SQL Server myself, and I've also read that you should use stored procedures (SPs) when you can. But I've learned over the years that ONE solution almost never solves ALL problems. It seems to me that using ADO may be a good solution here. Yes, it may be slightly slower than SPs, but in the scenario you described above it might actually be faster.
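
For what it's worth, the disconnected-recordset route you mentioned would look something like this in ADO (table and column names invented):

' Disconnected recordset with a single batched update at the end
Dim rs As ADODB.Recordset
Set rs = New ADODB.Recordset

rs.CursorLocation = adUseClient       ' client-side cursor
rs.Open "SELECT * FROM InvoiceLines WHERE InvoiceID = 123", cn, _
        adOpenStatic, adLockBatchOptimistic

Set rs.ActiveConnection = Nothing     ' disconnect from the database

' ... apply the changes de-serialized from the BO to rs here ...

Set rs.ActiveConnection = cn          ' reconnect
rs.UpdateBatch                        ' one batched round trip
rs.Close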

I believe that John's applications also use SQL Server (as well as other DBs). He might be in a better position to talk about this.

John, any thoughts?

P.S. You're correct in your assumption about serialization.
 