Performance Question 1

wjs2 (Programmer), Jan 3, 2001
Not knowing exactly what should be considered good performance, could anyone answer this question? This is a Visual FoxPro 6 app.

I run a SQL query against a table, output a temp file, then browse it. With all files on a local machine, it loads a little over 5,100 records per second for the screen load. On a network, with all files on the server, it gets 3,560 records per second for a screen load (no other traffic on the network). This involves an indexed lookup into a small table that holds a record pointer to the first detail record to be loaded into the temp file. The app then just copies the detail records to the temp file until the key changes.
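Roughly, the lookup works like this (a simplified sketch; the real table, tag, and variable names are different):

[code]
* Simplified sketch of the lookup (table, tag, and variable names changed)
lcKey      = "SOMEKEY"                && example of the key being looked up
lcTempFile = "tmpout.dbf"             && temp file the browse will use
USE keytable IN 0 ORDER TAG custkey   && small table: key + RECNO() of first detail record
USE detail   IN 0                     && large detail table
SELECT keytable
SEEK m.lcKey                          && indexed lookup on the requested key
IF FOUND()
    lnFirst = keytable.firstrec       && record pointer stored in the small table
    SELECT detail
    GO lnFirst
    COPY TO (lcTempFile) WHILE custkey = m.lcKey   && copy details until the key changes
ENDIF
[/code]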

I know there are a lot of variables that are not explained here. Just as general throughput, does this seem reasonable? It seems reasonable to me, but I am curious whether I could be getting greater throughput.

Thanks
WJS
 
A little more info: the record length is 195 characters.
 
WJS,
Most likely the biggest culprit here is the network architecture, more than the type of network you are on. If I were a betting man, I'd say you are on a 10Base-T connection. If you can make that 100Base-T or gigabit Ethernet, your results would likely be much higher. The other factor is the speed of the drives on your server vs. your local drives. If they are slower (say, 7,200 RPM SCSI devices) and you are working locally with 10,000 RPM SCSI drives, you're going to take a big hit on the server.

Best Regards,
Scott

Please let me know if this has helped [hammer]
 
Hi Scott,

Thanks for the response. Much appreciated. I'm not a network person but would like to learn a little, if possible, given my molecular structure.

The first test I ran was on a 7,200 RPM ATA100 local drive. For the network test, I just loaded the program on another machine, pointed it at the data on the first machine, and went for it. Calling it a server was probably a mistake on my part.

The system that was operating as a server runs Windows XP Professional. The client in this instance was a Windows NT 4.0 Workstation (not server software), a Pentium II 450 running SCSI Ultra2 drives. That's probably a little backwards given the drive configuration.

Actually, this is a home network I set up to run unsophisticated tests like this. Most of my customers are small, 4 machines or less running consumer Windows OS's. Throughput has not been an issue. A new prospect already has a network of about 20 machines established. I didn't want to take a hit in performance or cause one on the existing systems.

This is about a 95% inquiry system, totally in Visual FoxPro. There is an extremely small amount of updating in large files (probably about 8 million records in this instance), basically status changes. I guess the real question here is whether there are any gotchas I need to look for in Visual FoxPro that I haven't been concerned about before. It has been running on networks before, but not quite like this.

Comments and suggestions appreciated.

Thanks
WJS
 
WJS,
Well, the only thing I would add, then, is that if you are working with files exceeding 6 million records, keep an eye on two things: the number (and type) of indexes you have, and the total size of any one file. I stress that because Fox has a max file size of 2 GB, and that is true for any one file. (So, if you have a 500 MB CUSTOMER.DBF, a 700 MB CUSTOMER.CDX, and a 1.9 GB CUSTOMER.FPT, beware the FPT! If it hits or exceeds 2.0 GB, you're in for a very long night.) If you have some files that are pushing over the 1.5 GB size, I would build in some periodic routine that warns if the file gets larger than, say, 1.8 or 1.9 GB, depending on how fast it can and does fill up.
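Something as simple as this would do for the warning (just a sketch; the file names and the 1.8 GB threshold are only examples):

[code]
* Sketch of a size check to run at startup or in the nightly batch
* (file names and the 1.8 GB threshold are examples only)
LOCAL laInfo[1], laFiles[3], lnI, lnWarn
lnWarn = 1.8 * 1024 * 1024 * 1024
laFiles[1] = "customer.dbf"
laFiles[2] = "customer.cdx"
laFiles[3] = "customer.fpt"
FOR lnI = 1 TO ALEN(laFiles)
    IF ADIR(laInfo, laFiles[lnI]) > 0
        IF laInfo[1,2] > lnWarn      && column 2 of ADIR() is the file size in bytes
            MESSAGEBOX(laFiles[lnI] + " is " + ALLTRIM(STR(laInfo[1,2], 12)) + ;
                " bytes; getting close to the 2 GB limit!", 48, "File size warning")
        ENDIF
    ENDIF
ENDFOR
[/code]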
Indexes are the other thing to watch out for. They can hurt you as much as they help (or more, if really disregarded) if you don't follow some guidelines. Especially with 6 million+ records, unless the tables are extremely static or you are NOT updating indexed values, I would recommend not exceeding six index tags. Personally, on very large tables, I work like crazy not to exceed two index tags if at all possible, even favoring a smaller index and then using some loop structure for dealing with a specific customer's records. In reality, especially on today's machines, this buys you more performance than just about anything else you can do to your table.
This becomes exceedingly true when adding new records, especially if you have processes that add thousands of records at a time. (Which is usually how you get tables that exceed millions of records to begin with...) The big performance issue here is that every time you add a new record, you are adding an additional index value and rearranging the index to accommodate the new value, multiplied by every index you have on the table. Compound that with extensive compound index expressions, and it's a recipe for S-L-O-W new-record processing. (You will start to see major impacts on speed at only 100,000 records if your indexes are not well thought out and minimized.) Also, design your indexes for Rushmore optimization. That will help a lot as well.
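To make that concrete, here is a rough sketch (table, tag, and variable names are only examples):

[code]
* Sketch: a couple of tags that match the lookups exactly, so Rushmore can use them
* (table, tag, and variable names are only examples)
USE bigdetail EXCLUSIVE
INDEX ON custid    TAG custid        && the one "working" key
INDEX ON DELETED() TAG deleted       && helps Rushmore when SET DELETED is ON

* A WHERE clause that matches the tag expression exactly stays fully optimizable:
*   SELECT * FROM bigdetail WHERE custid = m.lcCust INTO CURSOR csrWork

* The "loop" alternative: touch only the one customer's records
lcCust = "ACME01"                    && example key value
SET ORDER TO custid
SEEK m.lcCust
SCAN REST WHILE custid = m.lcCust
    * ... process one customer's detail records here ...
ENDSCAN
[/code]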
Beyond that, if you can ensure 100Base-T or better Ethernet, you should be golden.

Best Regards,
Scott

Please let me know if this has helped [hammer]
 
Scott,

Again thanks! I think you just saved me at least one very long night and several catastrophic heart failures.

From your description, I can, with some really minor adjustments, take care of the indexing problems within the app. My extreme laziness has once again proven vital to the survival of mankind. Updates that add records to the system are 98-99% after-hours batch processes that reindex everything at the end of the run (massive add/delete processes). I can control this to some degree and give adequate warning of impending doom.
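In outline, the end-of-run step would become something like this (simplified, and with made-up table, tag, and file names):

[code]
* Sketch of the end-of-run step (table, tag, and file names are made up)
lcImportFile = "nightly.dbf"         && example name for the batch input
USE bigdetail EXCLUSIVE
DELETE TAG ALL                       && drop tags so the mass add isn't paying index upkeep per record
APPEND FROM (lcImportFile)           && the big add/delete pass goes here
PACK                                 && physically remove the deleted records
INDEX ON custid    TAG custid        && rebuild the tags once, at the end
INDEX ON DELETED() TAG deleted
* ...and call the file-size warning routine here before going home.
[/code]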

Do you know of any good books on networking? I don't necessarily want to be an expert, but looking like Gandalf the Grey to a customer never hurts.

Thanks,
Bill
 
Bill,
Do you "Know" what it is that you want to know about networking? (That may sound like an odd question, but there are so many books on the topic, that you could spend half a life just figuring out what book has what you want to know...)
Surprisingly, I think there are really only a few key elements to "Know" about networking to really keep things running well. You probably "Know" more than you realize if you've been building multi-user applications as well. Any insight to what you're looking for specficially?

Best Regards,
Scott

Please let me know if this has helped [hammer]
 
Scott,

Well, I can usually follow some pretty basic instructions and have set up a network for my testing. This consists of 10/100 NICs and a small hub. Now, I know what a 10/100 NIC is and I know what an RJ45 jack looks like. If the cable is marked, I can tell the difference between a crossover and a regular cable. Other than that, you could say about anything you wanted about a network and it would be over my head.

I guess what would be helpful would be something basic. Since I run all Windows, basic Windows networking would be a good place to start. How do I ensure that I am running a 10/100 network at 100? Is it better to assign IP addresses or let the system assign them? Knowing enough not to get my toes sucked by a hostile witness in a meeting would be good. I don't have a problem with the Visual FoxPro part as much as the network itself. My knowledge of this is almost nil; just asking a pertinent question would be a real challenge.

I don't know that I have answered your question. Perhaps I have shed some light on the gaping hole I'm sitting in.

Thanks
Bill
 
Bill,
Well, you have hit a couple of points, knowingly or unknowingly. There are really two elements in the networking you are talking about. One is "IP" networking, which has its own bag-o'-tricks; the other is <some other> networking, like the SPX protocol, or peer-to-peer, or NetBEUI, etc. All of them, however, share the wiring framework you describe.
You're probably not as far off as you might think. The easiest way to tell if you're running at 100 is to check the lights on your card and hub. (All 10/100 hubs I've seen have a light to indicate whether they are running at 10 or 100 speed; sometimes it's the same light, just showing a different color.) Cards are the same. Even most PCMCIA cards have an indicator in either the card or the dongle that attaches to them... so there is a good place to start. You might find it interesting to know I didn't read that in any book; I got it from experience. (Definition of experience: spending countless hundreds of hours banging head against desk drawer screaming "Why me!")
In this case, I would recommend going to your local Borders (they usually have better technology sections than Barnes & Noble) or some place like that, and taking a peek at the networking section. Look for a book that has a good, well-rounded, not-too-technical definition of the things you (and I) have mentioned. Most of the books I have read on networking are now out of print, so they won't help you anyway, and I was initiated in the "Netware" world of Novell about 10 years ago.
Get something that looks easy to read, and just go through it. Look for a book that covers at least these topics:

TCP/IP, LAN, and Windows NT networking. (When I say "NT", I basically mean NT, 2000, and XP, with more focus on the latter two, as they are just different enough from old-style "NT" to make you crazy.)

Look for something that explains the cables, and how they are constructed. It should also cover 10base, 100base, and Fiber.

Also, there should be a good "chapter" or two on protocols, though, unless you plan to become a "Network Nerd", these are really not as important to a general network background.

And lastly, it should cover (only lightly) NICs, hubs, and routers. (The key thing to know here is that a hub just passes things through it, so that machines on the same network can "talk" to each other. A router is the same idea, but it lets you connect networks together, not just machines, so you can tie two or more networks together.)

Personally, if you have a High-speed connection at home, I recommend buying a cheap router (like a Linksys), and use it to tie all the PCs on your network to the internet. It's a great project to learn a LOT about networking, without costing much, and without killing your clients. :)

Best Regards,
Scott

Please let me know if this has helped [hammer]
 
Scott,

I'm headed for the border. Now. And not because I killed a client. Your experience is appreciated here.

Thanks
Bill

 
I strongly feel that you have a network problem, not a FoxPro problem. The performance should not vary that much. I have run many applications similar in volume to what you describe (VFP and FPW) and have seen only small throughput differences between network and local files.
 
agordon,

I wanted to test your suggestion about a possible network problem, and what I found maybe should not have surprised me as much as it did.

The three machines used:

Local machine containing the data:
AMD 1.5 GHz w/256 MB of 266 MHz DDR SDRAM, ATA100 @ 7,200 RPM

Remote1 machine:
Intel PII 450 MHz w/256 MB of PC133, Ultra2 SCSI @ 10,000 RPM

Remote2 machine:
Intel P4 1.5 GHz w/512 MB of RDRAM, SCSI Ultra 160s @ 10,000 RPM

The test:
Query a table of 1,960,000 records, extracting 674,000 records to a temp file, then browse the temp file. The EXE is resident on each machine.

Local AMD - all local files: 132 seconds for the screen to fill.

Remote1 PII - remote query to local temp file: 192 seconds for the screen to fill.

Remote2 P4 - remote query to local temp file: 130 seconds for the screen to fill.

Like I said, maybe that last test should not have surprised me as much as it did, but I had to go looking for my socks and some of my teeth. It would appear to me that the real difference here is the speed and architecture of the PII compared to the other machines. I really thought that most of the time would be network and disk activity, and the PII would hold its own because of the SCSI drives. I have tested this PII against 800 MHz PIIIs running IDE drives and trounced them in heavy database usage... This is the first time I checked it against Intel or AMD in this class of machine. Duh... I guess the throughput answer is: get a gonzo machine. Excuse me, get a NEW gonzo machine. Now there is a novel idea!
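For what it's worth, the test harness is nothing fancy; essentially something like this (simplified, with the real table, field, and file names swapped out):

[code]
* Rough sketch of the timing test (table, field, and file names changed)
* The query pulls roughly 674,000 of the 1,960,000 rows into a local temp table.
lcStatus = "OPEN"                    && example selection value
lnStart  = SECONDS()
SELECT * FROM bigdetail ;
    WHERE statuscode = m.lcStatus ;
    INTO TABLE qryout
BROWSE NOWAIT                        && browse the temp table just created
? SECONDS() - lnStart, "seconds for the screen to fill (rough stopwatch)"
[/code]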

 
One more test. I just had to find out how this would work on an older machine. Pentium 133, same application as above: 485 records per second, all local files.
 