
Perl/Tk: adding and getting info from a big file


InDenial

Technical User
Aug 1, 2003
191
NL
Hi everyone,

I am writing a script using Perl/Tk whose goal is to register information and, with that information, to write switch configs.

This script has a couple of parts:

The switch part. In this part you can add, find, and change switch information. I have a file which contains the following information:

switchname, ip, subnet, gateway, and some more options

The file now contains about 2400 rows. I suspect that this will grow to about 10,000 max.
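For reference, a record in that format can be pulled apart with a plain split; a minimal sketch (the sample values and variable names are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One record in the format described above (sample values are made up)
my $line = "sw-floor1-01, 10.0.0.5, 255.255.255.0, 10.0.0.1, vlan10";

# Split on commas, tolerating stray spaces around them
my ($name, $ip, $subnet, $gateway, @options) = split /\s*,\s*/, $line;

print "switch: $name, gateway: $gateway\n";
```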

Like I said, there are three things that can be done with this file (add, change, and find). To describe my problem I will only cover adding a switch here.

This means I have a dialog where someone can enter the switch name, IP information, and the rest. To check whether the entered information already exists, or to get the needed information, I have 3 options:

[ol]
[li]Do a grep on the file. If the grep returns true, the entry exists; if not, add the information to the file.[/li]
[li]Import the whole file into an array. Iterate through the array; if the entry exists, do nothing, if not, add it to the file and clear the array.[/li]
[li]Import the whole file into a hash. Check if the entry exists; if not, add the information to the hash.[/li]
[/ol]
My questions are:

Which of the three options is best?
[ol]
[li]I need to do a grep on the file every time, and as the file gets bigger the grep will take longer. How long will a grep over a 10,000-line file take?[/li]
[li]This does not seem doable and, I believe, puts a big strain on the script: every time I want to add or query, I need to open the file, read it into an array, and iterate through it.[/li]
[li]I would put everything in a hash and have the data ready all the time, but when something is added I still have to open and append to the file AND add to the hash. Is it smart to keep such a big file in memory all the time?[/li]
[/ol]
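For what it's worth, option 3 can be sketched roughly like this: load the file into a hash keyed on switch name once at startup, then check the hash and append to both the file and the hash on add. The filename and field layout are assumptions based on the description above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $file = 'switches.txt';   # hypothetical filename
my %switch;                  # in-memory index, keyed on switch name

# Load the file once at startup (skipped if it does not exist yet)
if (-e $file) {
    open my $fh, '<', $file or die "Cannot open $file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($name) = split /\s*,\s*/, $line;
        $switch{$name} = $line;
    }
    close $fh;
}

# Adding a switch: check the hash first, then append to BOTH the
# file and the hash so they stay in sync
sub add_switch {
    my ($record) = @_;
    my ($name) = split /\s*,\s*/, $record;
    return 0 if exists $switch{$name};   # already registered
    open my $out, '>>', $file or die "Cannot append to $file: $!";
    print $out "$record\n";
    close $out;
    $switch{$name} = $record;
    return 1;
}
```

A 10,000-entry hash of 100-character records is only a few megabytes, so keeping it in memory is no problem, and `exists` on a hash is a constant-time check instead of a full file scan per lookup.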
That was a long post. If none of the options above are any good, I will have to rewrite my script to split up the files based on criteria predefined by the user. I do not have the ability to use MySQL, SQL, or DBI.

Thanks in advance.

InDenial

 
10,000 lines doesn't describe the file very well, but it sounds insignificant to me. Maybe if it were 1,000,000 lines I would start to worry, but 10,000 is nothing really (assuming the lines aren't too long).
 
Thanks for the quick reply.

The lines will not be longer than 100 characters. I think (also from reading other posts) that I underestimated Perl. I have seen people wanting to push gigabyte-sized files through a script; my file is peanuts compared to that.

I have been reading through some other posts concerning files and I came across Tie::File. Would this be a good way to keep the array or hash in sync with the file? And would you recommend a hash or an array?
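For context, Tie::File (a core module) ties an array to the lines of a file, so reading and writing array elements reads and writes the file without slurping it all in. A minimal sketch of the add-if-missing idea with it (filename and record are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Tie::File;

# Tie the array directly to the file: $lines[5] is line 6 of the file,
# and pushing onto @lines appends a line to it
tie my @lines, 'Tie::File', 'switches.txt'   # hypothetical filename
    or die "Cannot tie file: $!";

# Look up a switch by scanning the tied array
my $wanted = 'sw-floor1-01';
my ($found) = grep { /^\Q$wanted\E\s*,/ } @lines;

# Append a new record if it was not found
push @lines, "$wanted, 10.0.0.5, 255.255.255.0, 10.0.0.1"
    unless defined $found;

untie @lines;
```

Note that Tie::File keeps the file and the array in sync automatically, but lookups are still linear scans; for pure existence checks a hash is faster, while Tie::File is most useful for in-place edits of individual lines.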

Thanks..

InDenial

 
