
Sample code for detecting web links wanted


sedawk

Programmer
Feb 5, 2002
Hi Everyone,

I'm looking for some simple sample code that can detect web links on a web page: when the mouse moves over a link, the mouse pointer turns into a hand shape.

Any ideas are appreciated.

Thanks,
 
Whoa, a great resource! Thank you. I will take the time to read it in detail.

I am a beginner in C++, so all I need is to detect a link on a web page and see the mouse cursor change; I don't need to open the link.
 
Where is the web page being displayed? (Are you writing your own browser? Using a conventional browser? Be a little more specific.)

Or perhaps you mean you want to create an editor for looking at the text of an HTML page and have the editor interact with the mouse?

 
Oops, I didn't realize I had so many choices. I was thinking about this problem too simply.

The actual goal is to find downloadable links in a web page. How the web page is opened doesn't matter, i.e. IE, Netscape, or something from scratch. Let me give a simple example here:

Let's say a web page is open in IE (for example). I know this page has some links that could be downloadable. The application program should locate those downloadable links and open them automatically, thus downloading the files.

My previous question was about locating those links among all the links first, then trying to download them. Maybe I used the wrong approach at the very beginning.

Thank you.
 
You need to write something that looks for <A HREF="target"></A> or whatever in the HTML source code. Then you can get the target of the link and do whatever you want with it.
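
Not from the original post, but for illustration, here is a minimal sketch of that string-scan idea in C++. It only handles double-quoted, lowercase href attributes; a real parser would also have to cope with single quotes, unquoted values, and mixed case.

Code:
// Minimal sketch: naive scan of HTML source for href="..." targets.
// Only handles double-quoted, lowercase href; shown for the basic idea.
#include <iostream>
#include <string>

int main()
{
    // Hard-coded sample HTML; in practice this would be the downloaded page.
    std::string html =
        "<a href=\"file1.zip\">one</a> <a href=\"file2.zip\">two</a>";

    std::string::size_type pos = 0;
    while ((pos = html.find("href=\"", pos)) != std::string::npos)
    {
        pos += 6;                                   // skip past href="
        std::string::size_type end = html.find('"', pos);
        if (end == std::string::npos)
            break;                                  // unterminated attribute
        std::cout << html.substr(pos, end - pos) << "\n";  // the link target
        pos = end + 1;
    }
    return 0;
}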

I'd suggest using msxml3.dll and loading the HTML source into the DOM (Document Object Model); that way you can search for the <A> tags really easily.
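
As a rough sketch of the msxml3.dll route (my illustration, not the poster's code): this assumes the page is well-formed XHTML, since MSXML is an XML parser and will reject typical tag-soup HTML. The file name page.xhtml is a placeholder.

Code:
// Minimal sketch: load well-formed XHTML into the MSXML3 DOM and list
// the targets of the anchor tags. MSXML is an XML parser, so it will
// reject malformed "tag soup" HTML. "page.xhtml" is a placeholder.
#include <windows.h>
#include <iostream>
#import "msxml3.dll"

int main()
{
    CoInitialize(NULL);
    {
        MSXML2::IXMLDOMDocumentPtr doc;
        doc.CreateInstance(__uuidof(MSXML2::DOMDocument30));
        doc->async = VARIANT_FALSE;          // load synchronously

        if (doc->load(_variant_t(L"page.xhtml")) == VARIANT_TRUE)
        {
            // XML is case-sensitive: XHTML uses lowercase "a"/"href".
            MSXML2::IXMLDOMNodeListPtr links =
                doc->getElementsByTagName(L"a");
            for (long i = 0; i < links->length; ++i)
            {
                MSXML2::IXMLDOMElementPtr a = links->item[i];
                _variant_t href = a->getAttribute(L"href");
                if (href.vt != VT_NULL)      // anchor may lack an href
                    std::wcout << (wchar_t*)_bstr_t(href) << L"\n";
            }
        }
    }
    CoUninitialize();
    return 0;
}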
 
If you want to write a program to download from links, you don't really need a browser at all. What (I think) you need to do is:
1. Download the raw HTML from the site (try the WinInet functions; see the sketch after this post).
2. Parse the HTML to find all the (file) links. Try what Pyramus said, or look around for other methods of parsing HTML.
3. Go through the links (or display them to the user) and start downloading (again using WinInet).

For WinInet, look at the documentation or just do a search.

You could do all this with sockets instead of WinInet, but it's harder and you don't really need it.
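
For illustration, a minimal WinInet sketch for step 1 (my code, not the poster's; http://example.com/ is a placeholder URL):

Code:
// Minimal sketch: fetch the raw HTML of a page over HTTP with WinInet.
// Link with wininet.lib. Error handling kept to a minimum for brevity.
#include <windows.h>
#include <wininet.h>
#include <iostream>
#include <string>
#pragma comment(lib, "wininet.lib")

int main()
{
    HINTERNET hNet = InternetOpenA("LinkFetcher/1.0",
                                   INTERNET_OPEN_TYPE_PRECONFIG,
                                   NULL, NULL, 0);
    if (!hNet)
        return 1;

    HINTERNET hUrl = InternetOpenUrlA(hNet, "http://example.com/",
                                      NULL, 0, INTERNET_FLAG_RELOAD, 0);
    if (!hUrl)
    {
        InternetCloseHandle(hNet);
        return 1;
    }

    // Read the response body in chunks; InternetReadFile reports
    // end-of-data by succeeding with zero bytes read.
    std::string html;
    char buf[4096];
    DWORD read = 0;
    while (InternetReadFile(hUrl, buf, sizeof(buf), &read) && read > 0)
        html.append(buf, read);

    std::cout << html << std::endl;   // feed this to the link parser

    InternetCloseHandle(hUrl);
    InternetCloseHandle(hNet);
    return 0;
}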
 
