
Detecting remote downtimes


Sleidia

Technical User
May 4, 2001

Hi guys :)

How could I use PHP (run from CRON) to check the up/down status of hundreds of remote websites every 10 minutes, with minimal CPU use and minimal wasted bandwidth?

Thanks for your suggestions :)
 
Do you want to verify that the website page is available, or just that the httpd service is available?

For checking the actual site, I'd look for the smallest image available and fetch it with lynx or wget.

To just check that port 80 is open, use nmap:

example
Code:
if [ `nmap -p 80 -T insane remote_host | grep -c "80/tcp open"` -eq 1 ] ; then
        echo "up"
else
        echo "down"
fi


______________________________________________________________________
There's no present like the time, they say. - Henry's Cat.
 
Sorry, a note: that's not really PHP. It's probably a lot easier to do from the shell and format the output in PHP, but you'll already be aware of shell_exec() and friends for use in PHP (a bad idea, IMO).

______________________________________________________________________
There's no present like the time, they say. - Henry's Cat.
 
I guess you could also check from PHP using fopen() too ;)
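
A minimal sketch of that approach (the URL is just a placeholder, and it assumes allow_url_fopen is enabled):

Code:
<?php
// Minimal sketch: returns true if the URL could be opened at all.
// Assumes allow_url_fopen = On; a short default_socket_timeout
// keeps a dead host from stalling the rest of the run.
function site_is_up($url)
{
    ini_set('default_socket_timeout', 5); // seconds

    $fp = @fopen($url, 'r');
    if ($fp === false) {
        return false;
    }
    fclose($fp);
    return true;
}

echo site_is_up('http://www.example.com/tiny.gif') ? "up\n" : "down\n";
?>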

______________________________________________________________________
There's no present like the time, they say. - Henry's Cat.
 
I add httpd to snmpd.conf as a process in the proc section. That way, all I have to do is issue an snmpget for the correct prTable entry and I know exactly how many instances of httpd are running on the target box. Mix in a little MRTG and a status screen, and I have a record of all the times httpd disappeared, etc.
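
Roughly what that setup could look like, assuming net-snmp's snmpd on the target box and PHP's snmp extension on the monitoring side (the hostname and community string are placeholders):

Code:
<?php
// On the target box, snmpd.conf carries a "proc" entry, e.g.:
//   proc httpd
// which exposes the process count through UCD-SNMP-MIB::prTable.
//
// prCount for the first monitored process, given as a numeric OID so
// the MIB files don't have to be installed on the monitoring host:
$count = @snmpget('web01.example.com', 'public',
                  '.1.3.6.1.4.1.2021.2.1.5.1');

if ($count === false) {
    echo "snmpget failed\n";
} else {
    // Depending on PHP's snmp settings the value may carry a type
    // prefix such as "INTEGER: ".
    echo "httpd instances: $count\n";
}
?>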

-T

 

Thanks, KarveR :)

I'll try with fopen(), but I don't know how much stress it will put on the CPU, considering that hundreds of websites would be tested all at once every 10 minutes.

Also, I'll have to target a tiny file on each website in order to avoid eating into the bandwidth of all those websites.

Tarwn: I think you missed that the websites I need to test are not mine :)
 
Here's another idea:

Retrieving even the smallest image still transfers data beyond the HTTP headers themselves. You can limit the request to just the headers; that way you can tell the web server is responding without stressing it beyond that minimal exchange.

My recommendation is an HTTP HEAD request, as defined in RFC 2616 Section 9:
RFC2616 said:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining metainformation about the entity implied by the request without transferring the entity-body itself. This method is often used for testing hypertext links for validity, accessibility, and recent modification.
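
As a rough sketch, that kind of HEAD check could look like this in PHP (the hostname and timeout are placeholders; it only reads the status line):

Code:
<?php
// Sketch: send a HEAD request and return the HTTP status line,
// or false if the host could not be reached.
function head_check($host, $timeout = 5)
{
    $fp = @fsockopen($host, 80, $errno, $errstr, $timeout);
    if (!$fp) {
        return false;
    }

    fwrite($fp, "HEAD / HTTP/1.1\r\n"
              . "Host: $host\r\n"
              . "Connection: close\r\n\r\n");

    $status = fgets($fp, 256);   // e.g. "HTTP/1.1 200 OK"
    fclose($fp);

    return ($status === false) ? false : trim($status);
}

$result = head_check('www.example.com');
echo ($result === false) ? "down\n" : "$result\n";
?>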
 
Why all at once? Does it matter if a site is checked at 1:10, 1:20, 1:30... or if it's at 1:10:03, 1:20:03, 1:30:03...?

I'd use something like Nagios. It can randomize the times after it starts up, specifically so it doesn't try to check everything at once.

The basic checks include testing for a live web server (TCP port check) and checking that valid content is returned. Checking the number of httpd processes would require a plugin on the web server, but that's not hard to deploy. Remember that a running httpd doesn't necessarily mean the server is working properly.

My experience with SNMP and Nagios is that SNMP is much slower.
 