
temporarily blocking crawlers/bots


Jimi2Cool

Programmer
Jul 30, 2007
I'm about to transition my site from one version to another. I have loads of content, and SEO is of the utmost importance, so for a few hours I'd like to temporarily block crawlers from indexing any of the new content in case something goes wrong and I need to revert. I'm trying to redirect crawlers with a 503 status code to tell them the site is unavailable at the moment, like this:

HttpContext.Current.Items["RealUrl"] = "/sitedown.html";

app.Response.StatusCode = 503;
app.Response.AddHeader("location", "/sitedown.html");
app.Response.End();

This is the method I use for every other redirect (normally with a 301 status code), but things really seem to mess up with the 503 code. Does anyone have any ideas? Even a different approach is fine. Thanks in advance.
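
For what it's worth, here is a minimal sketch of how the 503 approach is commonly done: rather than sending a Location header (a redirect combined with a 503 status is contradictory and most crawlers won't follow it), the maintenance page is served directly in the 503 response, along with a Retry-After header so crawlers know to come back later. The module name, registration, and file path below are illustrative assumptions, not your existing pipeline:

using System;
using System.Web;

// Sketch: an HttpModule that answers every request with 503 Service Unavailable
// during the maintenance window, serving the maintenance page body directly.
public class MaintenanceModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        var response = app.Response;

        response.Clear();
        response.StatusCode = 503;                 // temporarily unavailable; crawlers retry later
        response.AddHeader("Retry-After", "7200"); // hint: try again in about two hours
        response.ContentType = "text/html";
        response.WriteFile(app.Server.MapPath("/sitedown.html")); // serve the page, no redirect
        response.End();
    }

    public void Dispose() { }
}

The module would be registered in web.config for the duration of the transition and removed afterwards.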
 
Can you not use a robots.txt file with explicitly declared pages?

e.g.

User-agent: *
Disallow: /site/junk.html
Disallow: /site/foo.html
Disallow: /site/bar.html

or to disallow access to the whole site use

User-agent: *
Disallow: /

Then just remove the robots.txt file when you are done.
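
If you want to script that swap, a small sketch (the file paths are illustrative assumptions; point them at your actual site root):

using System.IO;

// Sketch: back up the live robots.txt, drop in a "block everything" version,
// then restore the original once the transition is finished.
class RobotsSwap
{
    const string RobotsPath = @"C:\inetpub\wwwroot\robots.txt"; // illustrative path
    const string BackupPath = @"C:\inetpub\wwwroot\robots.bak";

    public static void Block()
    {
        if (File.Exists(RobotsPath))
            File.Copy(RobotsPath, BackupPath, true);            // keep the original safe
        File.WriteAllText(RobotsPath, "User-agent: *\nDisallow: /\n");
    }

    public static void Restore()
    {
        if (File.Exists(BackupPath))
        {
            File.Copy(BackupPath, RobotsPath, true);            // put the original back
            File.Delete(BackupPath);
        }
        else
        {
            File.Delete(RobotsPath);                            // there was no robots.txt before
        }
    }
}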


Charlie Benger-Stevenson
Hart Hill IT Ltd
 