I'm trying to run wget against a web page, but it seems to be having difficulties ...
The command only ever returns the output below ...
It doesn't matter which website I point it at, I still get the same problem ...
Help!
Code:
wget -r -l 2 -v -np -O raw.txt http://www.webpage.com
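For reference, here is the same invocation with each flag spelled out; the descriptions are paraphrased from the wget manual, and the URL is just the placeholder used above.
Code:
# Commented restatement of the command above (flag meanings per the wget manual):
#   -r           recursive retrieval: follow links found in downloaded pages
#   -l 2         limit recursion to a depth of 2 levels
#   -v           verbose output
#   -np          no-parent: never ascend above the starting directory
#   -O raw.txt   write every retrieved document into the single file raw.txt
wget -r -l 2 -v -np -O raw.txt http://www.webpage.com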
Code:
root@ubuntu:/home/babo/Desktop/spider_proj # ./spider
--15:09:01-- http://www.website.com/xx/xx
=> `raw.txt'
Resolving www.website.com... 207.171.1.2
Connecting to www.website.com[207.171.1.2]:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.website.com/xxx/xxx/ [following]
--15:09:02-- http://www.website.com/xxx/xxx/
=> `raw.txt'
Connecting to www.website.com[207.171.1.2]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 98,655 59.67K/s
15:09:04 (59.61 KB/s) - `raw.txt' saved [98,655]
Loading robots.txt; please ignore errors.
--15:09:04-- http://www.website.com/robots.txt
=> `raw.txt'
Connecting to www.website.com[207.171.1.2]:80... connected.
HTTP request sent, awaiting response... 404 Not Found
15:09:05 ERROR 404: Not Found.
www.website.com/xxx/xxx/index.html: No such file or directory
FINISHED --15:09:05--
Downloaded: 98,655 bytes in 1 files