
HTTP request 1


audiopro

I am calling an HTTP request but I am getting a cached value returned.
Clearing the cache solves the problem, but how do I force the true value to be returned?
I am pretty sure I managed this previously by adding a random number to the URL.

Code:
DRIVELLOC="http://www.mysite.co.uk/cgi-bin/addget.pl?call=CheckClick&ran="+str(rand())

*!* HTTP REQUEST
LOREQUEST=''
LCHTTP=''
LOREQUEST = CREATEOBJECT('MSXML2.XMLHTTP.6.0')
LOREQUEST.OPEN("GET", DRIVELLOC, .T.)	&& .T. = asynchronous request
LOREQUEST.SEND(.NULL.)
*!* Wait for the async request to finish (READYSTATE 4 = done)
DO WHILE LOREQUEST.READYSTATE # 4
	DOEVENTS
ENDDO
LCHTTP = LOREQUEST.RESPONSETEXT
release LOREQUEST
wait window LCHTTP nowait

Keith
 
I remember we had a discussion about perl (.pl suggests you call a perl script) and IIS already. Or was it Apache? Doesn't really matter.

Caching mechanisms of web servers will provide a cached response when they see the same request URL again, so adding a random number can help, yes. But it's not the only way you can end up with a cached result.
Caching can happen at any level.

This is more a question about perl and/or whatever web server and/or whatever other component is involved. Maybe even a database accessed in the perl code returns a cached value, which of course would be independent of the "ran" parameter, as that parameter doesn't influence what the perl script, the database, a file cache, a memory cache or whatever cache is in effect here actually does. It could also be a proxy cache, although that would also be tricked by a varying dummy parameter. You'll need to debug where the caching really takes effect, and you'll need to do that on the web server and maybe even at the perl level; the FoxPro level of your code will not answer that question.
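One level you can rule out cheaply from the VFP side is the client/proxy level, by asking for a fresh copy in the request itself. A rough sketch, reusing the DRIVELLOC variable from your post (whether the caches in your chain actually honour these request headers is another question):

Code:
*!* Ask browser-style and proxy caches for a fresh copy rather than a stored one
LOREQUEST = CREATEOBJECT('MSXML2.XMLHTTP.6.0')
LOREQUEST.OPEN("GET", DRIVELLOC, .T.)
LOREQUEST.SETREQUESTHEADER("Cache-Control", "no-cache")
LOREQUEST.SETREQUESTHEADER("Pragma", "no-cache")
LOREQUEST.SEND(.NULL.)
DO WHILE LOREQUEST.READYSTATE # 4
	DOEVENTS
ENDDO
LCHTTP = LOREQUEST.RESPONSETEXT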

Bye, Olaf.
 
Cheers Olaf
When I call the perl script directly I get the correct result every time.
Clearing the browser cache solves the problem but obviously I can't do that every time the request is called.

The perl script is called from a VFP EPOS system to notify the sales counter when a click and collect order has been placed. The counter is already notified by email, but they have requested that a visual cue be shown on the sales screen. The whole process works but, of course, they really need to see the correct information.

Keith
 
So the browser cache is the cache we need to address? The browser cache indeed might judge a request to be the same by the URL up to the "?", before any parameters.

OK, a response will not be put into the browser cache if it has an HTTP header signalling expiration, e.g. an expiration date/time in the past. So the problem would need to be solved in the perl code, by adding such an HTTP header.

In PHP you could use header("Expires: Mon, 14 Apr 2014 00:00:00 GMT"); for example. Any date in the past.
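To see what your perl script is sending at the moment, you could dump the response headers from the VFP side. Just a quick sketch (headers.txt is only a scratch file name here):

Code:
*!* Check whether the perl script returns Expires / Cache-Control headers
LOREQUEST = CREATEOBJECT('MSXML2.XMLHTTP.6.0')
LOREQUEST.OPEN("GET", DRIVELLOC, .F.)	&& synchronous, just for this check
LOREQUEST.SEND(.NULL.)
*!* Look for Expires, Cache-Control or Pragma lines in the dump
STRTOFILE(LOREQUEST.GETALLRESPONSEHEADERS(), "headers.txt")
MODIFY FILE headers.txt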

Bye, Olaf.
 
Turns out that it is not the caching after all.
Your mention that params after the ? may be ignored got me thinking, so I tried a rewrite rule on the URL and all is working as required.
Not sure if the random number is actually required, but as it is working I see no reason to risk it breaking in the future.

Code:
http://www.mysite.co.uk/addget/CheckClick/rand.htm
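Something along these lines on the VFP side (SYS(2015) here is just one convenient way to get a unique token; the original rand() approach works just as well, and the exact format of course depends on the rewrite rule):

Code:
*!* Unique dummy file name per request, so no cache treats it as a repeat
DRIVELLOC = "http://www.mysite.co.uk/addget/CheckClick/" + SYS(2015) + ".htm"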

Thanks for the help again.

Keith
 
OK, so the "rand" part of the URL is now a random number and you use URL rewriting? OK: that'll stop caching in any case, too, as it'll trick any browser, proxy, web server etc.
If that's solving the problem, it's quite good proof caching was the problem.

Bye, Olaf.
 
>The browser cache indeed might judge the same request by the URL before any parameter, up to the "?".
I think you misunderstood that one.

It doesn't mean that every cache will ignore the parts of the URL after a ?; indeed, most caches do treat different parameters as different pages and don't return a cached result, because normally different parameters do mean different results. But some proxies and browsers don't care about that; maybe when inspecting the similarity of some responses they "learn" something they should not learn.

Anyway, URL rewriting lets a random URL part act as a parameter without being recognised as a parameter by caches located before the URL rewrite process on the server side. So what you did is indeed a solution that forces the request to be interpreted as a unique new request at almost all caching levels, except the ones acting after the URL rewrite. There obviously is no problem at that level, so congrats, this idea fixed it.

Bye, Olaf.
 