I use Google Reader (an RSS aggregator) heavily to capture interesting links, starring items for later processing. What I am looking for is a way to collect all the links from my starred items, build a new HTML page from them (if necessary), spider through every link on that page, and extract the links that point to specific hosts into a text or HTML file.
How can this be done, and is Perl the right tool for the job?
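In case it helps clarify what I'm after, here is a rough, untested sketch of the kind of pipeline I have in mind, written in Perl with LWP::Simple, XML::Feed, HTML::LinkExtor and URI. It skips the intermediate HTML page and spiders the starred links directly; the feed URL and the list of hosts are placeholders I made up for illustration.

    #!/usr/bin/perl
    use strict;
    use warnings;

    use LWP::Simple qw(get);
    use XML::Feed;
    use HTML::LinkExtor;
    use URI;

    # Placeholders: an Atom/RSS feed of my starred items (however it gets
    # exported) and the hosts whose links I want to keep.
    my $feed_url     = 'http://example.com/starred-items.atom';
    my @wanted_hosts = ('example.com', 'example.org');

    my $feed = XML::Feed->parse(URI->new($feed_url))
        or die XML::Feed->errstr;

    my %seen;
    for my $entry ($feed->entries) {
        my $page_url = $entry->link or next;    # URL of the starred article
        my $html     = get($page_url) or next;  # fetch the page itself

        # Collect every link on the page, resolved against the page URL.
        my $extractor = HTML::LinkExtor->new(undef, $page_url);
        $extractor->parse($html);

        for my $link ($extractor->links) {
            my ($tag, %attrs) = @$link;
            next unless $tag eq 'a' && $attrs{href};
            my $uri = URI->new($attrs{href});
            next unless $uri->can('host') && $uri->host;
            my $host = $uri->host;
            next unless grep { $host =~ /\Q$_\E$/ } @wanted_hosts;
            print "$uri\n" unless $seen{$uri}++;  # one line per unique link
        }
    }

Redirecting the output to a file would give me the text file I want, but I don't know if this is the idiomatic way to do it, or whether another tool would be a better fit.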