This is a really basic question, so for those of you with patience, I appreciate any input.<br><br>I have written a kind of primitive search engine for URLs with three-letter domain names.<br>It runs a loop that creates every possible alphabetic combination of three letters, requests the URL with that name, and then parses the meta-keywords out of the HTML (if they exist). It then puts the keywords in a database along with the URL.<br>I can then query that database from an ASP page and get all the entries containing the keywords I enter.<br><br>It works fine and is chugging along at my DOS prompt. However, although I've hammered out all the exceptions so that the program keeps running, some URLs are slow, and it tends to get stuck, sometimes for over a minute, waiting for a response before moving on to the next URL.<br><br>My question is: how do I create a timer so that the attempted read skips to the next URL after, say, 15 seconds? I've used simple threading for applets, but I'm not sure how I would do it for something like this.<br><br>Here's the code where this should happen:<br><br><br>...<br>try<br> {<br> <font color=red>URL u = new URL(str);<br> InputStream input = u.openStream();<br> InputStreamReader reader = new InputStreamReader(input);<br> BufferedReader buffreader = new BufferedReader(reader);<br> while ((str = buffreader.readLine()) != null)<br> {<br> this.HTML += str;<br> if (linecounter > 5000) break;<br> linecounter++;<br> }</font><br> }<br>catch (UnknownHostException e)<br> {System.err.println(str + ": UNKNOWN HOST EXCEPTION: * * * S K I P P I N G   U R L * * *");}<br>catch (MalformedURLException e) {System.err.println(e);}<br>catch (IOException e) {System.err.println(e);}<br>...<br><br><br>I'm not sure if the delay is occurring when it opens the input stream or when it actually reads the lines. Either way, any help is appreciated.<p>--Will Duty<br><a href=mailto:wduty@radicalfringe.com>wduty@radicalfringe.com</a><br>
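For reference, one way to bound the wait without writing your own timer thread is to replace `u.openStream()` with a `URLConnection` whose connect and read timeouts are set explicitly, then catch `SocketTimeoutException` and move on. This is only a sketch, assuming a JVM recent enough to have `setConnectTimeout`/`setReadTimeout` (Java 1.5+); the method and variable names (`fetchOrSkip`, `maxLines`, etc.) are illustrative, not from the original program:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TimedFetch
{
    // Fetches up to maxLines lines of HTML from the given address,
    // giving up if connecting, or any single read, stalls longer than
    // timeoutMillis. Returns null on timeout so the caller can skip
    // the URL and continue the loop.
    static String fetchOrSkip(String address, int timeoutMillis, int maxLines)
        throws IOException
    {
        HttpURLConnection conn =
            (HttpURLConnection) new URL(address).openConnection();
        conn.setConnectTimeout(timeoutMillis); // cap time spent connecting
        conn.setReadTimeout(timeoutMillis);    // cap time spent per read

        StringBuffer html = new StringBuffer();
        try
        {
            BufferedReader buffreader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
            String line;
            int linecounter = 0;
            while ((line = buffreader.readLine()) != null)
            {
                html.append(line);
                if (linecounter > maxLines) break;
                linecounter++;
            }
            buffreader.close();
        }
        catch (SocketTimeoutException e)
        {
            System.err.println(address + ": timed out, skipping");
            return null;
        }
        finally
        {
            conn.disconnect(); // release the connection either way
        }
        return html.toString();
    }
}
```

On JVMs that predate those methods, a common workaround was the `sun.net.client.defaultConnectTimeout` and `sun.net.client.defaultReadTimeout` system properties (Sun-specific, so not guaranteed everywhere), or a watchdog thread that closes the stream after the deadline.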