
Server.Execute - is this an appropriate use?

Status
Not open for further replies.

GabeC

Programmer
Apr 17, 2001
I have one web page, default.aspx, that does all the work for a web app.

All other pages use Server.Transfer with a query string to let default.aspx know what to do.

The reason for this is we want to keep the original file's URL in the Address bar of the web browser.

EX.
This page runs:
Server.Transfer("default.aspx?c1=1224&LinkID=1224")

The problem is default.aspx uses the output cache to greatly improve performance. Server.Transfer bypasses the HTTP pipeline, so no caching is executed.

If I use
Server.Execute("default.aspx?c1=1224&LinkID=1224") instead, the caching is executed as expected, but I have read that using Server.Execute is not recommended.

1) Is it OK to use Server.Execute in this situation?
2) Any suggestion on another way to accomplish this?
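For reference, output caching like this is normally switched on with a page directive; a minimal sketch, assuming the cache should vary on the two query-string parameters shown above (the duration value is made up):

```aspx
<%@ OutputCache Duration="60" VaryByParam="c1;LinkID" %>
```

With VaryByParam set this way, each distinct c1/LinkID combination gets its own cached copy, which is also why the directive is only honored when the request actually travels through the HTTP pipeline.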


Thanks,

Gabe
 
The reason for this is we want to keep the original file's URL in the Address bar of the web browser.
I really think you are overcomplicating the application by using Server.Execute just to get that appearance. Personally, I don't think it's a good idea, as you are going against the conceptual model of the web (where every page corresponds to a single URL).

If you still want to go against this model, you could maybe use a frameset, as that would achieve the same result, but then you are bringing more problems into the equation.


____________________________________________________________

Need help finding an answer?

Try the Search Facility or read FAQ222-2244 on how to get better results.

 
Thanks, ca8msm, for your input, but it is the customer's requirement, not ours, to keep the URL showing the original page.

The reason is so the end user can bookmark any page in the site without a bunch of query string information. This is also to make the URL more user-friendly.

Ex. is more easily remembered by the end user than
As for the point about going against the model of the web, this is done all the time.

Most News sites are a good example of this. Look at for instance.


The file in the above URL doesn't really live at that path. The information in the path is really a query string, but query strings don't get crawled by search engines, so the URL-rewriting technique is used as a solution. The URL is really hitting one page, and a database call is being used to retrieve the correct news story.

You can find an explanation of how to accomplish this in .Net here

Thanks


Thanks,

Gabe
 
You don't have to explain it for my benefit. I know what URL rewriting is, but I didn't think that's what you were talking about, as you said you had one page for the entire application.

URL rewriting isn't normally used in this fashion; yes, it's used to provide the user with an easy-to-remember URL, but most people don't use one web page to run the entire site, as there is normally more than one type of layout/process to handle, and putting it all into one file overcomplicates matters.



 
query strings don't get crawled by search engines
Also, that's not strictly true; it does make it harder for search engines but not impossible. See this comment from google:
google said:
We're able to index dynamically generated pages. However, because our web crawler could overwhelm and crash sites that serve dynamic content, we limit the number of dynamic pages we index. In addition, our crawlers may suspect that a URL with many dynamic parameters might be the same page as another URL with different parameters. For that reason, we recommend using fewer parameters if possible. Typically, URLs with 1-2 parameters are more easily crawlable than those with many parameters. Also, you can help us find your dynamic URLs by submitting them to Google Sitemaps. To learn more, visit our FAQ at


 
I'm with ca8msm. I don't suppose we can recommend you chew the client out for the bad idea. :)

With URL rewriting, you'd have to have multiple pages to get it to work with readable URLs. For example, with URL rewriting, you can send a user who types in:

Code:
www.mysite.com/10/15/2005/Calendar.aspx

to

Code:
www.mysite.com/Calendar.aspx?month=10&day=15&year=2005

and you can send a user who types in:

Code:
www.mysite.com/Fred/ShowUser.aspx

to

Code:
www.mysite.com/ShowUser.aspx?name=Fred

...but you can't have:

Code:
www.mysite.com/Default.aspx

...redirect to both:

Code:
www.mysite.com/ShowUser.aspx?name=Fred
and
www.mysite.com/Calendar.aspx?month=10&day=15&year=2005

After all, how could you tell from one URL which page a user actually wanted to go to? This way, the user loses the ability to navigate directly to any page but the default for default.aspx, which might be darn inconvenient for your users.

Granted, you could use server-side processing to redirect to different pages behind the scenes based on server-side variables, but you still lose the ability to navigate directly to a page just by using the URL in a browser.
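If the friendly URLs keep distinct shapes, the rewrite rules themselves can do the disambiguating. A minimal sketch of that idea in C# (the class name and rules are illustrative, not from any post in this thread):

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical mapper: each regex rule rewrites one "shape" of friendly URL
// to its real page plus query string, so distinct URLs stay navigable.
public static class UrlMapper
{
    // Order matters: the first matching rule wins.
    static readonly (Regex Pattern, string Target)[] Rules =
    {
        (new Regex(@"^/(\d{1,2})/(\d{1,2})/(\d{4})/Calendar\.aspx$"),
         "/Calendar.aspx?month=$1&day=$2&year=$3"),
        (new Regex(@"^/([A-Za-z]+)/ShowUser\.aspx$"),
         "/ShowUser.aspx?name=$1"),
    };

    public static string Map(string path)
    {
        foreach (var (pattern, target) in Rules)
            if (pattern.IsMatch(path))
                return pattern.Replace(path, target);
        return path; // no rule matched: leave the URL alone
    }
}
```

Because each rule only fires for its own URL shape, /10/15/2005/Calendar.aspx and /Fred/ShowUser.aspx land on different real pages without any single ambiguous entry point.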

Nevertheless, for such a situation you don't want to use Server.Execute(), you want to use HttpContext.RewritePath() or User Controls which you can dynamically load, and which can take advantage of output caching on a per-control basis.

That said, if you decide you really do want to go with the single page URL approach,
 
Does anyone else have any other ideas on how to get the output caching to work with Server.Transfer or have a solution that doesn’t needlessly change the customer’s requirements?

I am a bit disappointed to see there are still developers out there who think they should dictate the business solution instead of supplying the customer with what they need.

I will add a little more clarification to my first post.

There is nothing wrong with this requirement set forth by the customer. It is my job as a developer to deliver the functionality the customer needs.

The requirement by the customer is for each page to have its own unique URL. Since each page is identical in format but contains different information extracted from the database, the development team has put all the functionality into the default.aspx page and depending on what (or if) information the query string contains, the page is populated with the selected data. This has been done to make maintenance much simpler and quicker.

If the user goes directly to default.aspx, he will get a valid page. If the user goes to page1.aspx he will get a valid page also, just with different data, because it performs a Server.Transfer("default.aspx?c1=1&linked=1").

The problem I have asked assistance with is how to keep this same model - one page does all the work but the user has different and distinct URLs while at the same time getting the data caching functionality to work. The Page directive’s Output Cache does not get invoked in this instance because Server.Transfer bypasses the HTTP Pipeline.

Is there another way to invoke the HTTP pipeline so that the Output Cache is used, or is there another way we can invoke data caching and stay within the model described here?


Thanks,

Gabe
 
I am a bit disappointed to see there are still developers out there that think they should dictate the business solution instead of supplying the customer with what the need.
It's not the customer's requirements that we are saying are wrong; in fact, they are perfectly legitimate requirements. It's your implementation of URL rewriting that is questionable (as per BoulderBum's examples). The MSDN article that I linked to above shows you exactly how to do what you want, if you read it...



 
I would have spoken up and said there's something goofy with the customer's requirements! [lol] Remember, the customer is always wrong!

Seriously, though, sometimes you get requests for certain implementation details that aren't very practical, or that show a lack of technical understanding. This kind of seems like one of those cases to me, because we essentially have a very basic problem (navigation) being solved by an overly complex process (the default.aspx thing). The result is a less pleasant navigational experience for the user, potential problems with things like client-side caching, and increased development time, because simple, built-in things like output caching and user authentication suddenly become big technical issues (Server.Transfer() bypasses security).

Anyway, as I doubt I've converted you to my "hate the customer" way of thinking, what I'd suggest you try is using HttpContext.RewritePath() within an HttpModule instead of Server.Transfer() on the Page. This will probably take care of the HTTP pipeline problem for you. Detail:


The Microsoft implementation also makes it easy to manage new URLs through regex in web.config.
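A minimal sketch of that suggestion, assuming classic System.Web (the module name and rewrite rule here are made up; the Microsoft sample is more complete):

```csharp
using System;
using System.Web;

// Hypothetical module: rewrites the friendly URL at BeginRequest, before the
// page handler is chosen, so the request still flows through the full HTTP
// pipeline and default.aspx's OutputCache directive is honored.
public class FriendlyUrlModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext ctx = ((HttpApplication)sender).Context;

        // Example rule (made up): page1.aspx is really default.aspx + a query string.
        if (ctx.Request.Path.EndsWith("/page1.aspx", StringComparison.OrdinalIgnoreCase))
            ctx.RewritePath("~/default.aspx", null, "c1=1&LinkID=1");
    }

    public void Dispose() { }
}
```

The module would then be registered under <httpModules> in web.config. Unlike Server.Transfer(), the rewrite happens before the handler runs, so output caching and the rest of the pipeline behave normally.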
 