Westwind Web Connection server on LAN: can SPT (MS SQL) work?

Status
Not open for further replies.

Steve Yu

Programmer
Nov 6, 2021
128
US
Hello colleagues,
We have a working Web Connection server on a LAN (internal web) that currently employs .dbf and .idx files only.
Is it possible to add SPT coding to access the MS SQL tables that are on the same LAN concurrently?
This is our effort to expand existing desktop applications to include access from the web.
Help appreciated.

Steve Yu
 
Our version of the WWWC server is very heavily modified
It would be important to know what you have overridden from the normal WC processing to finally answer, for example, whether SET PROCEDURE would only need to be done once.

I don't think so, based on a simple assumption: I don't think Rick Strahl or the other West Wind WC developers would implement something that could only process one request at a time. Parallel processing doesn't combine well with state that's kept across all requests without reinstating it, the way HTTP does it with a session cookie that's sent in with every request and is therefore not server-side state but has to come from the client every time.

The only chance for state that's not just virtual state reinstated with every request is WC running in a multithreaded DLL. Then multiple instances of the wwServer class (always the entry point of all requests for WC itself, which then routes each request to the Process method of your wwProcess-based class) would run in the same process in multiple threads. I wonder whether they would all route requests to the same wwProcess instance, though, or to multiple instances of your class in the respective threads. All running in the same process, they would see the same PRGs/FXPs of an initial SET PROCEDURE, see the same PUBLIC variables, and could reuse the same SQL Server connection handle; they could even see the same properties if multiple wwServer objects provide the parallelism but all route to the same object instance of your class derived from wwProcess.

The other way of parallel processing is by parallel processes; then you don't have any state in common across requests. You can easily find out by experimenting with this. I'd just recommend you try more than two successive requests, as you could always hit the same process or object instance by chance. A good experiment would make requests in fast succession, ideally asynchronously and in parallel from multiple clients, even though that doesn't reflect the final real-world scenario.

Unless you know from digging into the architecture of WC and your "heavily modified" version, you have to do this "nonrealistic" experiment to make sure. If you can always reuse the same connection handle towards SQL Server, that's not proof, but it makes it very plausible that everything runs in one process server side. I'd say that's not making the best use of the web server, but that's not your concern; that's a WC concern, if any.

You don't profit very much from it, because being able to reuse the connection handle still doesn't make the overall SQL processing much faster; to get the fastest processing you'd still need WC on the same server as the MSSQL server. I also don't know what connection times you see with SQLCONNECT or SQLSTRINGCONNECT in your company. If they are so high that reusing a connection handle becomes a topic, there's something to investigate in your network and server architecture; it should not take long to make a connection.
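If connection time is in question, it's quick to measure from the VFP command window. A minimal sketch; the connection string below is a placeholder you'd replace with your own, and it assumes a SQL Server ODBC driver is installed:

Code:
* Measure how long establishing a connection takes.
* The connection string is a placeholder, not your real one.
LOCAL lnStart, lnHandle
lnStart = SECONDS()
lnHandle = SQLSTRINGCONNECT("Driver={SQL Server};Server=YourServer;Database=YourDB;Trusted_Connection=Yes")
? "Connect took", SECONDS() - lnStart, "seconds"
IF lnHandle > 0
   SQLDISCONNECT(lnHandle)
ENDIF

On a healthy LAN this is typically a fraction of a second; if it's much slower, that points at the network or name resolution rather than at a need to cache handles.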
 
Okay, I think I found it. The discussion about multithreading vs. multiprocessing was already a topic in another thread here, in which I linked to a Web Connection blog article by Rick Strahl:


This quote is telling how WC essentially does parallelism:
There other alternatives in how to run EXE servers – like running a pool manager of instances so that instances are loaded and then cached rather than having to be restarted each time. I use this approach in West Wind Web Connection, and by removing the overhead of loading a VFP instance each time and keeping VFP’s running state active, raw request throughput performance actually rivals that of DLL server with this approach. However, this sort of thing only works if you can control actual activation as part of the application you are building (as I do in Web Connection)
This is saying Web Connection uses a pool manager that has multiple WC processes running; then you can't rely on having any state available across requests. A secondary request can be handled in a different process than the first, and thus you can't reuse anything, not even a connection handle.
 
We do have a public-facing WWWC server in the cloud now that's been working for many years for our vendors and customers (same structure as the proposed new project, which will be an internal web only).
As to multi-threading: without much knowledge of how WWWC works as an .exe or .dll, we simply fire up multiple copies of the .fxp version and let it fly. As I mentioned before, I do see some coding that we added that seems to be shared by sessions. I'm now experimenting to see how SPT will fare within this setup.
 
I'm refreshing my own memory on the public-facing server we are running now. I see coding that sets the values of variables (not session variables) that remain available for all pages.
 
I also see that the intention is there, but I wonder how that would be working based on multiprocessing. May also depend on which version of WC you're using and what you modified. For example, limiting the pool to a single process.
 

Below on the left is one of the running WC copies of the .fxp; on the right is the coding behind it, where it seems variables m0smtp, etc. are loaded, passed, and made available to each .dm page that is called subsequently.

wwwc.PNG
 
That's what I called reinstantiating. A public variable would simply be state that's available to the next request with no code at all, simply by being run in the same process that did not end after the previous request.

So try that in your Process method:
Code:
If Vartype(gnCounter) = "N"
   gnCounter = gnCounter + 1
Else
   Public gnCounter
   gnCounter = 1
Endif

Response.Write("request #" + TRANSFORM(gnCounter))
Response.Write("process #" + TRANSFORM(_vfp.ProcessId))

Do you understand what you asked for? I ask myself that. If you want to make a connection to SQL Server only once and use it in all requests, that must all happen in the same process. A connection handle you create in one request, in one process ID, won't be available in another request if that runs under a different process ID.

What you show there is a variable created by the code that calls your process class. It's assigned right now, in the current request. It's not there from the previous request, so it's not state kept from the previous request, is it?

If you see the request # increase to values higher than 1, then you will a) surely also see the same process #, and b) either have only one process for all web requests or, if you have a larger pool, just be lucky that the request was received and routed to the same process. That's why I said to run successive requests, ideally from multiple clients, in very fast succession.
 
ProcessID is a vague concept to me, pardon the ignorance. Is it a unique # assigned to each request, except in some cases of having a large pool (that's another unknown to me)?
Another way to verify: if I add a display of the ProcessID to the session display (left side of the screenshot), I should get a different ProcessID for each .dm call?
But based on your analysis of my coding, I should be OK regardless: just add my SQL connect (and my SET PROCEDURE TO ...) in the same spot where those variables are initialized, and then I should be able to issue SQL commands within any page that's being called?
 
I see how you get the idea, but no, that's not web server or WC or HTTP related; process IDs are given to everything running on the OS.

In contrast, the wwProcess class you are asked to use as the base class for your own Process class uses Process as in TO process a web request: the verb, not the noun describing a process running on the OS.
 
Let me try adding my SQL connect as I mentioned in my last reply and see what happens.
 
If you plan on changing the code that finally does oProcess.Process(), notice that what you do there is also done for every request. If you think that code only runs once at the start of WC, you have a bad understanding of how WC works.
 
Completely understood now. Yes, it's done every time a request is made, not just once when WC starts.
But this is coincidentally what I wanted anyway: SQL connect and other logic coded once in .Process() and applied to every request (so no redundant coding is needed within each request, and there are many).
I fully understand that the SQL connect, etc. will have to be executed for each request, but given our traffic load that should not be an issue. The saving of not having to include all the overhead coding in each request is most beneficial.
Thanks for the help; I always enjoy seeing your many analyses, here and elsewhere. Learned a lot.
By the way, I did try this new coding of the SQL connect and it worked perfectly, as expected. It solved one major bug, where one of my SQL command library routines suddenly crashed (the same routine has run hundreds of times in desktop apps for years with no issue). It turned out UDFPARMS was set to (or defaulted to?) reference in WC instead of value, and that caused the major failure (with a cryptic error message).
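The connect-once-per-request pattern described above could be sketched like this inside the Process method. The names and connection string are placeholders, not WC API, and error handling is reduced to a minimum; whether the handle lives in a local variable, a property, or a public variable depends on how your .dm pages need to see it:

Code:
* Connect at the start of the request ...
LOCAL lnHandle
lnHandle = SQLSTRINGCONNECT("...your connection string here...")
IF lnHandle < 1
   * Connection failed: render an error page instead of proceeding.
   Response.Write("Database connection failed")
   RETURN
ENDIF

* ... run whatever the page needs via SQLEXEC(lnHandle, ...) ...

* ... and always disconnect at the end of the request.
SQLDISCONNECT(lnHandle)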
 
I think you're programming the response creation in the method that WC foresees only for deciding which class should process the request, so that you can use the MVC architecture pattern with multiple controllers. But that's up to you. What you should not forget is to also SQLDISCONNECT, or you'll have connection congestion and, at some point, even if idle connections would be closed automatically, could get no further connections. In the end you're not doing anything that's kept as state between requests, though.
 
Good point; I thought about asking that question before. Will add disconnect logic.
Is there a time-out period for SQL to disconnect if there is no activity? That would be an issue.
 
Execute sp_configure without any options to see what is configured, not only regarding timeouts.

There's actually only one timeout literally named connection timeout; it determines within what time the connection has to be established, and exceeding it makes the connection fail. That timeout is handled by the ODBC driver, as the most likely timeout case is the server being too busy to even react to a connection attempt. There's also a query timeout within which (a) query result(s) should be established; exceeding it gives an execution timeout error, by default after 10 minutes. That doesn't close the connection, but it still needs to be handled.

Once you have a connection, reasons for server-side disconnects still exist, though: not only admins closing connections manually and, obviously, server restarts. A deadlock, I think, cannot only choose a winner getting a result and a loser getting an error; I think it can also cause a disconnection. And a crashing application, when it crashes "gracefully", so that at least the ending process closes all its file handles and connections, too. That's obviously a case where the disconnect isn't the main problem, but it's good to not leave dangling connections, etc.

It's also a reason to have a mechanism for reestablishing a connection even in desktop applications, which usually keep the initial connection open for the whole application session. In any case, you will be informed, at least by an error triggered by the next SQLExec, use of a cursor adapter or remote view, etc., that the connection you try to use is closed; in the case of a query execution timeout, the result is an error instead of a result.

With a connection established at the start of each request you're much less likely to encounter such problems anyway, but the point is that too many opened and never closed connections could lead to further connections being denied. And you should actually set the connection limit to a sensible value, because your server can go into a very unresponsive state without actually crashing, which can be even less pleasant than a simple downtime you solve by restarting. If your server isn't dimensioned for 32,000 concurrent users, then don't allow that many, even though an idle connection isn't a direct load on server performance.

Every open has a close, every declare a release, etc., so of course, and especially since you know you establish a connection for each new request, you also close it. Or we get back to the quest of how not to do that repeatedly. I'd say the per-request connection has more pros than cons for web development. The reason is the same as for the architectural choice of a stateless server: it can handle each request independently of previous requests (except for things like user session handling, which, as I already pointed out, is stateful even though the protocol is stateless), and it can route requests and do load balancing if demand requires it.

The browser, usually the frontend for the user, is also more a viewer of HTML than a desktop application in the classic sense; that's why we talk about web applications and make the distinction. A browser doesn't handle only a single website, either; it's a general-purpose web application viewer, comparable much more to a desktop of web applications than to the main window of a desktop application. So you also have to think of it that way and not act as if you're alone, neither on the server nor in the browser.
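The sp_configure check mentioned above can be run through the same ODBC connection from VFP. A sketch, assuming an existing valid handle h from an earlier SQLCONNECT; note that some options, such as 'user connections', only appear after 'show advanced options' has been enabled on the server:

Code:
* List server configuration options into a cursor and inspect them.
IF SQLEXEC(h, "EXEC sp_configure", "cfg") > 0
   SELECT cfg
   BROWSE   && look for 'user connections', 'remote query timeout', etc.
ENDIF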
 
Well said. I think we will be OK, because the SQL connect is made only when a request is made, and the disconnect will be performed upon completion of each request in a tightly controlled sequence. All other scenarios are either app errors, which will be handled by our error routine, or system failures, which will be rare (in the case of a web browser timeout, the WWWC Process() won't even be called, so SQL won't be affected at all); of course nothing else will work without a server restart.
 
Something weird is happening that contradicts the stateless assumption. During my test with the new SPT logic, on the first request a variable A is undefined (type 'U') and is initialized by the SQL connect logic; all went fine, and the request finished with a SQLDISCONNECT before exit. However, on a second request of the same page, that variable A is now type 'N' (see image).
That variable A is my flag for an active SQL session, so no SQL connect will be performed while it should be (because I disconnected SQL at the end of each request).
So it seems all variables stay 'Public' from request to request.
Does this make sense?
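If the flag really is a PUBLIC (or otherwise long-lived) variable surviving in the same process, one hedged way to rule the symptom out is to release it in the same end-of-request cleanup that does the SQLDISCONNECT, so the next request starts with type 'U' again. gnSQLFlag here stands in for whatever variable A is actually called:

Code:
* End-of-request cleanup sketch: close the connection and drop the flag.
IF VARTYPE(gnSQLFlag) = "N" AND gnSQLFlag > 0
   SQLDISCONNECT(gnSQLFlag)
ENDIF
RELEASE gnSQLFlag   && next request sees VARTYPE(gnSQLFlag) = "U" again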
 

Attachment: wwwc.PNG
What are you doing in which methods of which classes?
And I don't see anything in your screenshot that tells me anything. I see the U and N in the screen that displays the WC window. That's the process that runs WC, server side, right? That's one process, but that shouldn't be a worker that processes the requests, or your server is a single-threaded process.
 
Our version of the WWWC server is very heavily modified, and I do see some logic shared by all sessions; I'm not sure how it all ties together. We've been doing it on the fly, and it has worked well so far.
I already addressed this once:
myself:

It would be important to know what you have overridden from the normal WC processing to finally answer, for example, whether SET PROCEDURE would only need to be done once.
Now it seems it is that way; congrats. On the other hand, please revisit what I also said back then. While WC comes open source, the reason is full transparency about what it does, not enabling "heavy modifications". Usually, if you change the code of WC itself, that makes it a PITA to upgrade to a newer version. Rick Strahl is not programming new WC versions, I think; he left for the .NET world and maintains products there, so you may never get into the situation of a WC upgrade, but that's not a justification for modifying the actual product itself. The architecture foresees your code in your classes, and it seems you never got into the intended way of using WC and programming for and with it, instead of modifying WC itself.

It makes getting support quite hard, as long as you don't even have an idea of where you made which modifications that let your modified WC work differently.
 
Let's do a little experiment to give you a better understanding about what a connection handle is and what it's not.

Open up VFP9 and do a SQLSTRINGCONNECT() or SQLCONNECT() to obtain a connection handle. It'll very likely be 1.
Code:
h=SQLSTRINGCONNECT("...your connection string here...")
?h && very likely displays 1.
SQLEXEC(h,"SELECT @@VERSION as version","SQLresult")
? Strconv(SQLresult.version,6)

Start VFP9 once more and don't connect to MSSQL there. Instead, try to use the connection handle you got in the first VFP9 IDE with the code above. In this second IDE's command window, only do the SQLEXEC:
Code:
SQLEXEC(1,"SELECT @@VERSION as version","SQLresult") && change 1 to whatever h is in the first IDE

You'll get the error "connection handle is invalid". Back in the first IDE you will still be able to use it, though.

The point is, you can't share a handle this way. Shared connections are a thing, but they work totally differently and never by sharing the numeric handle value. The handle is not the connection; it's just an identifier for a connection, and it's bound to the client process that made the connection to SQL Server.

So a connection you make only once would only help you if all processing also takes place in the same OS process. Then you have a single-threaded, single-process WC, and it won't scale up. While you don't need that scaling right now, the idea restricts what you could do in a future where scalability becomes an issue.
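A practical consequence, if a stored handle is ever reused across requests in such a single-process setup: treat "connection handle is invalid" (and dropped connections generally) as a signal to reconnect rather than as fatal. A hedged sketch; gnSQLHandle and the connection string are placeholders:

Code:
* Validate the stored handle with a cheap query; reconnect once if it fails.
LOCAL lnResult
lnResult = SQLEXEC(gnSQLHandle, "SELECT 1 AS ok", "chk")
IF lnResult < 0
   * Handle invalid or the server dropped the connection: reconnect.
   gnSQLHandle = SQLSTRINGCONNECT("...your connection string here...")
   lnResult = SQLEXEC(gnSQLHandle, "SELECT 1 AS ok", "chk")
ENDIF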
 