Oops, I didn't realize I had so many choices; I was thinking about this problem too simply.
The actual goal is to find downloadable links in a web page. How the web page is opened doesn't matter, i.e. IE or Netscape or something written from scratch. Let me give a simple example:
Say a page is opened in IE (for example). I know this page has some links that can be downloaded. The application should locate those downloadable links and open them automatically, which starts the file downloads.
My previous question was about locating those links among all the links first, then trying to download them. Maybe my approach was wrong from the very beginning.
You need to write something that looks for <A HREF="target"></A> (or whatever) in the HTML source code. Then you can get the target of the link and do whatever you want with it.
I'd suggest using msxml3.dll and loading the HTML source into the DOM (Document Object Model); that way you can search for the <A> tags really easily.
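If you don't want to pull in MSXML, even a plain string scan over the page source is enough for a rough first pass at finding the <A HREF="..."> targets. Below is a minimal C++ sketch of that idea (the function name and the sample HTML are made up for illustration); a real page needs more care, since attributes can use single quotes, unquoted values, or different casing.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Very naive link extractor: finds every href="..." in the HTML source.
// Assumes double-quoted, lowercase href attributes; real pages also use
// single quotes, unquoted values, mixed case and relative URLs.
std::vector<std::string> ExtractLinks(const std::string& html)
{
    std::vector<std::string> links;
    std::string::size_type pos = 0;

    while ((pos = html.find("href=\"", pos)) != std::string::npos)
    {
        pos += 6;                                    // skip past href="
        std::string::size_type end = html.find('"', pos);
        if (end == std::string::npos)
            break;                                   // malformed attribute, stop
        links.push_back(html.substr(pos, end - pos));
        pos = end + 1;
    }
    return links;
}

int main()
{
    // Hypothetical page source; in the real program this string would come
    // from the download step described in the next reply.
    std::string html =
        "<html><body>"
        "<a href=\"files/setup.zip\">Setup</a>"
        "<a href=\"docs/readme.txt\">Readme</a>"
        "</body></html>";

    for (const std::string& link : ExtractLinks(html))
        std::cout << link << "\n";
}
```

Once you have the list of targets you can filter it for the file types you care about (.zip, .exe, etc.) before downloading.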
If you want to write a program to download from links, you don't really need a browser at all. What (I think) you need to do is:
1. Download the raw HTML from the site (try the wininet functions; see the sketch after this list).
2. Parse the HTML to find all the (file) links. Try what Pyramus said, or look around for other methods of parsing HTML.
3. Go through the links (or display them to the user) and start downloading (again using wininet).
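For steps 1 and 3, a bare-bones wininet fetch looks roughly like the sketch below. It is only a sketch, not a finished downloader: the URL and user-agent string are placeholders, error handling just bails out, and you need to link against wininet.lib.

```cpp
#include <windows.h>
#include <wininet.h>
#include <iostream>
#include <string>

#pragma comment(lib, "wininet.lib")

// Fetch a URL into a string with wininet. The same routine works for the
// raw HTML in step 1 and for pulling down the linked files in step 3
// (for big files you would write the buffer to disk instead).
bool DownloadToString(const std::string& url, std::string& out)
{
    HINTERNET hInet = InternetOpenA("LinkGrabber",              // arbitrary user-agent name
                                    INTERNET_OPEN_TYPE_PRECONFIG,
                                    NULL, NULL, 0);
    if (!hInet)
        return false;

    HINTERNET hUrl = InternetOpenUrlA(hInet, url.c_str(),
                                      NULL, 0,
                                      INTERNET_FLAG_RELOAD,      // bypass the cache
                                      0);
    if (!hUrl)
    {
        InternetCloseHandle(hInet);
        return false;
    }

    char  buffer[4096];
    DWORD bytesRead = 0;
    while (InternetReadFile(hUrl, buffer, sizeof(buffer), &bytesRead) && bytesRead > 0)
        out.append(buffer, bytesRead);

    InternetCloseHandle(hUrl);
    InternetCloseHandle(hInet);
    return true;
}

int main()
{
    std::string html;
    // Placeholder URL -- substitute the page you actually want to scan.
    if (DownloadToString("http://www.example.com/downloads.html", html))
        std::cout << "Fetched " << html.size() << " bytes of HTML\n";
    else
        std::cout << "Download failed\n";
}
```

Feed the fetched HTML into whatever link parser you end up with, then call the same download routine on each link you decide to grab.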