Hi there,
I have a Perl script processing a lot of files in a directory.
Processing the files in sequence would take a long time, so one idea is to create several child processes, each of which processes a subset of the files.
My question is: is there an easy way to group the files (their sizes vary widely, and their format is binary) so that each child process gets roughly the same amount of data to process? That should maximize system utilization and data throughput.
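Something like the following greedy approach is what I had in mind (an untested sketch; the directory path, the number of children, and the group layout are just placeholders): sort the files by size, largest first, and always add the next file to the group with the smallest running total. Each child would then work through one group's file list, e.g. after a fork. But maybe there is a simpler or ready-made way?

#!/usr/bin/perl
use strict;
use warnings;

# Placeholders -- adjust to the real directory and number of children.
my $dir          = '/path/to/files';
my $num_children = 4;

opendir my $dh, $dir or die "Cannot open $dir: $!";
my @files = grep { -f "$dir/$_" } readdir $dh;
closedir $dh;

# Sort by size, largest first, so the big files get spread out early.
my @sorted = sort { -s "$dir/$b" <=> -s "$dir/$a" } @files;

# One bucket per child: running byte total plus the assigned file names.
my @groups = map { +{ size => 0, files => [] } } 1 .. $num_children;

for my $file (@sorted) {
    # Always put the next file into the currently smallest bucket.
    my ($smallest) = sort { $a->{size} <=> $b->{size} } @groups;
    push @{ $smallest->{files} }, $file;
    $smallest->{size} += -s "$dir/$file";
}

# Each child process would then handle one group's file list.
for my $i (0 .. $#groups) {
    printf "group %d: %d files, %d bytes\n",
        $i, scalar @{ $groups[$i]{files} }, $groups[$i]{size};
}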
Thanks!
Mack