
Infinite Loop and CPU Utilisation

Status
Not open for further replies.
Mar 13, 2002
I wrote a program that consists of an infinite loop.
Within the loop is code that looks for new files
being uploaded into a folder. When it detects that a new file has arrived, it opens the file, reads it, and does some processing.

It is critical for the file to be loaded into my database quickly when it arrives, so I didn't use sleep() in the loop.

With that scenario, when I use the top command (HP-UX), I discovered that my Perl program was the process with the highest CPU utilisation, at 95.69%. Is there any way I could lower the CPU utilisation without using sleep(), while still ensuring that my program stays very responsive to the arrival of new files in the folder?

Thanks in advance.
 
It can probably be done without sleep, but I don't know how to do it. Is it really going to hurt your application that much to sleep for one second between loop iterations? I'd try putting a sleep 1 in your loop and see what that does to CPU utilisation. -- Hardy Merrill
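The suggestion above can be sketched roughly as follows. This is a minimal polling loop, not the original poster's actual program: the folder name, the %seen bookkeeping, and process_file are all placeholders I've made up for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One polling pass: return files in $dir not yet recorded in %$seen.
# (The %seen hash is a simple placeholder for "already processed" state;
# a real watcher might instead move or rename files after loading them.)
sub new_files {
    my ($dir, $seen) = @_;
    opendir(my $dh, $dir) or die "Cannot open $dir: $!";
    my @fresh = grep { !/^\./ && !$seen->{$_}++ } readdir $dh;
    closedir $dh;
    return sort @fresh;
}

# The main loop: sleep 1 between passes, so the process spends most of
# its time in the sleep state instead of burning CPU in a busy-wait.
sub watch {
    my ($dir) = @_;
    my %seen;
    while (1) {
        for my $file (new_files($dir, \%seen)) {
            process_file("$dir/$file");   # placeholder for the real work
        }
        sleep 1;   # yield the CPU between polls
    }
}

sub process_file { }   # open, read, and load into the database here
```

The trade-off is latency: with sleep 1, a new file may sit unnoticed for up to a second, which is the cost of not spinning.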
 
Trying to figure out the basics...
I suppose what you are really looking at is an interesting topic in process management on Unix (and NT). In a multitasking environment, processes run under time-sharing: the OS divides CPU time among the processes competing for resources, according to some process scheduling algorithm. A process is put into a waiting state when it blocks on an event such as I/O, or when it deliberately calls sleep().

The problem you described can be demonstrated with this simplified code:

#!/opt/perl5/bin/perl

# a pure busy-wait: the loop never blocks, so it consumes every CPU cycle it is given
while (1)
{
}

Using top, you would see this process as the highest CPU-utilisation process, with its state showing "run" most of the time (compared to other user processes). If you put a sleep(1) in the loop, the CPU utilisation of the process drops very significantly (more than a 90% drop) and the process shows a "sleep" state most of the time (top shows a snapshot of process characteristics taken every few seconds).
Another experiment is to remove the sleep(1) from the code and run the program, say, 10 times. top would show all 10 processes as the highest CPU-utilisation processes, but the CPU percentage would be divided among them, so each would have < 50% in both weighted and raw CPU percentage.

Perhaps you could think of it this way: since your application process runs on a not-so-busy server and is an infinite loop, it ends up being the highest CPU-utilisation process. If the server gets busier with more CPU-intensive processes, the Unix process scheduler acts according to its scheduling algorithm to share the CPU among them. Hence the way we write code determines when our process is given the opportunity to run. The infinite loop discussed earlier is simply a loop with no I/O wait at all; even just reading the code, we can see that it says we want the program to run ALL THE TIME. So what do we get? The CPU is asked to execute the process continuously, but that is not how it should be. In a multi-process environment the OS executes processes based on its scheduling algorithm: the CPU is just a resource the processes compete for, and the decider is the OS process scheduler.
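If a full second of sleep() feels too slow for the original requirement, one middle ground worth considering is Perl's four-argument select(), which can block for a fractional timeout. The process still sleeps in the kernel rather than spinning, but it wakes up several times a second. This is a small sketch under my own assumptions; the 0.25-second figure is an illustrative choice, not something from the original post.

```perl
#!/usr/bin/perl
use strict;
use warnings;

use Time::HiRes qw(time);   # core module; used here only to measure elapsed time

my $start = time();
# Four-argument select() with undef filehandle sets blocks for the
# given fractional timeout (~250 ms here) without consuming CPU.
select(undef, undef, undef, 0.25);
my $elapsed = time() - $start;
printf "waited about %.2f seconds\n", $elapsed;
```

Inside the polling loop, replacing sleep 1 with select(undef, undef, undef, 0.25) would cut the worst-case latency for noticing a new file to roughly a quarter of a second, while still leaving the process in a sleep state most of the time.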

Hope that offers some thoughts on your problem.

Thanks and Best Regards
Shehrus
 
