
performance tuning on the PC || better throughput using threads on multi processor systems?

Status
Not open for further replies.

Crox

Programmer
Apr 3, 2000
892
NL
Hi,

I was asked by a tester whether it is possible to get higher throughput on a multi-processor system. In a single thread on his system, CPU usage was never higher than 30%. He says multi-threaded programs are able to get near 100%. What can I do to create multi-threaded programs?
Also, if I can split a task into more than one, is it possible to communicate between threads? If yes, how? Do I need some assembly, or is there a system call for it?

Also, how can you enhance I/O in a 32-bit environment? In the past a bigger buffer was better. Is that still true?

Microsoft systems seem to react very differently from the good 'old' IBM mainframe environment, where the slowest device gets the highest priority. On Microsoft I ran into big trouble doing what helps on IBM, but perhaps things have changed in newer releases?

I am running a CPU-bound DOS task on Vista, sometimes Win8. The tester has a kind of multi-processor system with Win8. How can I speed up the task and get faster throughput?

Thanks for any tips!
 
Multithreading - the answer is going to depend a lot on the COBOL vendor and version you use. And it won't be straightforward, if it is possible at all.

Regarding I/O, it all depends on the application and on what it is doing at the points where performance matters. A bigger I/O buffer won't do much good if you only need a single record from the buffer you read, and it can even make things worse.


For info: the current company I work with has Micro Focus COBOL .NET, and we max out the production server (36 cores) when running the night batch, so it's perfectly possible.

Regards

Frederico Fonseca
SysSoft Integrated Ltd

FAQ219-2884
FAQ181-2886
 
Sounds impressive.
I run an encryption system that is very CPU-intensive. I can tune the buffer size and try to optimize instructions, although this has been done in the past and improved the speed 64 times. There is a new environment that runs 32-bit instead of the 16-bit we had in the past. I am trying to make the tester happy. :)

Thx!
 
What can I do to create multi-threading programs?
Also, if I can split a task into more than one, is it possible to communicate between threads? If yes, how? Do I need some assembly, or is there a system call for it?

Generally what you do is trigger the appropriate thread-creation calls in the API. Most compilers that intend to support multi-threading will have wrappers for this. However, there are a number of problems you run into with multi-threaded programs. The main one is that, to get a performance benefit, you have to be able to split the job into multiple tasks that can run concurrently for a decent amount of time (spawning threads is expensive). Scant few algorithms allow this - most of the common COBOL algorithms will NOT. Inter-communication between threads happens with messages and events, both set up through API calls.
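How you reach those thread-creation calls is vendor-specific in COBOL, but the pattern described above - spawn workers, communicate through messages - is the same everywhere. As a language-neutral sketch (Python standing in, with queues as the message channel; the squaring "work" is just a placeholder):

```python
import threading
import queue

def worker(inbox, outbox):
    # Each worker pulls messages until it sees the None sentinel.
    while True:
        item = inbox.get()
        if item is None:
            break
        outbox.put(item * item)  # stand-in for one real unit of work

inbox, outbox = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(inbox, outbox))
           for _ in range(4)]
for t in threads:
    t.start()

for n in range(10):      # post ten messages of "work"
    inbox.put(n)
for _ in threads:        # one shutdown sentinel per worker
    inbox.put(None)
for t in threads:
    t.join()

results = sorted(outbox.get() for _ in range(10))
print(results)
```

Note that in native Win32 the same roles are played by CreateThread plus event/message objects; and in CPython specifically, threads won't speed up CPU-bound work (the GIL), so the sketch shows only the coordination pattern, not a speedup.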

If you have multiple programs that don't rely on the same resources, you can always run them concurrently, but this is more of an OS question than a programming question.

Also, how can you enhance I/O in a 32-bit environment? In the past a bigger buffer was better. Is that still true?

This wasn't necessarily ever true on Microsoft operating systems. All of them function best with a 64K-128K buffer - generally tied to the sector size of the media. Going much bigger either brings no performance improvement or actually causes a performance problem - performance versus buffer size graphs out as a parabola.
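Buffer size as an explicit tunable looks roughly like this in any runtime that exposes it; a minimal Python sketch (the file name and payload are arbitrary), setting a 64K buffer when opening a file:

```python
import os
import tempfile

# Write a small test file, then read it back through an explicit 64 KiB buffer.
path = os.path.join(tempfile.mkdtemp(), "sample.bin")
payload = os.urandom(256 * 1024)
with open(path, "wb") as f:
    f.write(payload)

with open(path, "rb", buffering=64 * 1024) as f:
    data = f.read()

print(len(data))
```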

Microsoft systems seem to react very differently from the good 'old' IBM mainframe environment, where the slowest device gets the highest priority. On Microsoft I ran into big trouble doing what helps on IBM, but perhaps things have changed in newer releases?

Programs normally get priority in a Microsoft system when they have focus - generally meaning a UI that's in the foreground, receiving input and putting output on the screen. But you can adjust the priority of processes and threads either through the OS or through the programming APIs.
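On Windows those APIs are SetPriorityClass and SetThreadPriority; the POSIX analogue is the nice value. As a hedged, POSIX-only sketch of a process lowering its own priority from code (raising niceness needs no special privilege):

```python
import os

# nice(0) adds zero, i.e. it reports the current niceness without changing it.
before = os.nice(0)
# Raise niceness by 5 (= lower our own scheduling priority).
after = os.nice(5)
print(before, after)
```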

I am running a CPU-bound DOS task on Vista, sometimes Win8. The tester has a kind of multi-processor system with Win8. How can I speed up the task and get faster throughput?

Is this a pure DOS task or a Win32 command-line app? This matters enormously. Given the details you've offered, the only answer to your question is "it depends on what you're doing".

 
I consider this to be a pure DOS task.

In the past, increasing the I/O buffer to 63K made the job run 4x faster. Now I can create much bigger buffers. What is the optimum? Perhaps create a kind of install program to find the right buffer size?

To test the encryption and the generated keys, I always encrypt 100MB of space. After that, I try to compress it using Rar, Zip, etc., just to prove that the keys are good. The keys are only accepted after confirming that compression is not possible. I would like to compare this test with other encryption programs. Are there any suggestions?
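The "try to compress the ciphertext" check described above can be sketched with any general-purpose compressor; here zlib stands in for Rar/Zip, and `os.urandom` stands in for well-encrypted output. Effectively random data should not compress at all, while patterned plaintext collapses:

```python
import os
import zlib

random_like = os.urandom(100_000)   # stand-in for good ciphertext
patterned = b"AAAA" * 25_000        # highly regular "plaintext"

# Compression ratio: compressed size / original size (level 9 = max effort).
r_ratio = len(zlib.compress(random_like, 9)) / len(random_like)
p_ratio = len(zlib.compress(patterned, 9)) / len(patterned)

print(round(r_ratio, 3), round(p_ratio, 3))
```

The random-like input actually grows slightly (compressor overhead), while the patterned input shrinks to a fraction of a percent - which is exactly the gap the 100MB acceptance test is looking for.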

Also, in the USA there are laws against too-strong encryption, and in France as well. In The Netherlands we are not limited; there is a guarantee that your letters are secret. Perhaps someone can explain to me what the legal impact is of releasing an uncrackable encryption. The number of variations of this program is 10^5200. If anyone does something wrong with this program, I cannot help them. Nobody can, I am afraid.

Does anyone have an idea of what acceptable performance would be? I now run on an old machine, handling about 3MB per second. I am still tuning a bit, but what would be good performance?

Thanks very much for your thoughts!
 
From what you say, it seems that your process is as follows - correct me if I am wrong, please.

1 - read input
2 - generate keys
3 - compress
4 - repeat 2 and 3 until no compression is possible

That being the case, how long does each one of the steps take for files of varying sizes, including some above 5 GB if that is something you would do?

And for step 3, can you supply the command line used by each utility you use? (Although I would recommend using 7-Zip in most cases.)

Regards

Frederico Fonseca
SysSoft Integrated Ltd

FAQ219-2884
FAQ181-2886
 
You have the right idea. Generating the key is only a one-time event for most of the customers, and so is the validation test - unless something turns out not to be right, but that almost never happens. It is just so that every customer, or administrator at the customer's site, can be sure: they test it with 100MB of space, because if that is not compressible, the compression algorithms fail to discover any shortcut, so then it should be OK.

The usual, normal run is encrypting/decrypting any kind of file - for example emails, Word documents, movies, pictures, archives, etc. The general sales idea is to provide an API so another party can use it any way they want.

My idea is that the product can be used to communicate with as many other users as needed, and every message can even have its own unique encryption/decryption setting and password. It is possible to avoid any repeated key, so a 'listener' cannot find any clue. Of course, 10^5200 variations guarantee, to my feeling, that it is extremely difficult to find out anything at all.
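The per-message-key idea maps onto a standard construction (not necessarily what Crox's program does): derive each message key from the shared secret plus a fresh random salt, so no key is ever reused on the wire. A sketch using PBKDF2 from Python's standard library; the iteration count and key length here are arbitrary choices:

```python
import hashlib
import os

def per_message_key(password: bytes, salt: bytes) -> bytes:
    # A fresh random salt per message yields a fresh 32-byte key per message,
    # while both ends can re-derive it from the shared password + sent salt.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

secret = b"shared secret"
salt_a, salt_b = os.urandom(16), os.urandom(16)
key_a = per_message_key(secret, salt_a)
key_b = per_message_key(secret, salt_b)

print(key_a != key_b, len(key_a))
```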

In the past I used the crypt torture test found on CompuServe, but that didn't feel like a strong test, and the files were very small. The compression method was interesting because many encryption programs at that time didn't make the 100MB space incompressible. The encrypted space was almost always reduced to something like 20% or even less, which is an indication of not-so-good encryption - at least, that is what I think. If anyone has a bigger challenge, I would like to hear how to do it.

I assume you are all able to create 100MB of space, but if not, I can send you a compressed version of such a file, which is very, very small. Even then it is interesting to see how the different ways of compressing give very different results. In fact, the source of that program is very small. :)
 
So, back to my original question: COBOL vendor and version. Without that, and without knowing whether the solution has to be plain COBOL, we can't say whether it is possible, how to do it, or what workarounds are required.


Regards

Frederico Fonseca
SysSoft Integrated Ltd

FAQ219-2884
FAQ181-2886
 
I consider this to be a pure DOS task.

Whether you "consider" something DOS or not is irrelevant to what the app actually is (which is what I asked). Win32 command-line programs are far preferable under Windows to DOS programs, for exactly the concerns you are asking about. But the question of COBOL version and vendor will answer this, too.

In the past, increasing the I/O buffer to 63K made the job run 4x faster. Now I can create much bigger buffers. What is the optimum? Perhaps create a kind of install program to find the right buffer size?

There's no point to any of this on a Windows OS, which is what I was trying to say above. In the testing I mentioned, speed became optimal at the 64K point, stayed steady up to about 1MB, and then got slower. While I could use 1MB buffers, there's no performance benefit to doing so; it only amounts to taking more memory than the app needs. So 64K it is, in all my stuff.
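That said, the "install program that finds the right buffer size" idea is easy to prototype: time the same sequential read at several buffer sizes and keep the fastest. A rough sketch (the 4 MiB probe file and the size list are arbitrary; a serious version would use a much larger file and repeated runs, since OS caching dominates small tests):

```python
import os
import tempfile
import time

# Create a 4 MiB probe file of random bytes.
path = os.path.join(tempfile.mkdtemp(), "probe.bin")
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))

# Time one full sequential read per candidate buffer size.
timings = {}
for size in (4 * 1024, 64 * 1024, 1024 * 1024):
    start = time.perf_counter()
    with open(path, "rb", buffering=size) as f:
        while f.read(size):
            pass
    timings[size] = time.perf_counter() - start

best = min(timings, key=timings.get)
print(sorted(timings), best)
```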

I would like to compare this test with other encryption programs. Are there suggestions?

Have you played with Microsoft's encryption APIs?

Does anyone have an idea of what acceptable performance would be? I now run on an old machine, handling about 3MB per second. I am still tuning a bit, but what would be good performance?

That depends on the machine and on the opcodes available for your app to use. I don't know what kind of data you're working with or what you're doing with it. All I can say is that for 7-Zip, I'm getting about 3.65MB/s on my system.

As for your process (again just a guess, as I don't know if these are separate programs or not), I'd try spawning multiple copies of steps 2 and 3, equal to the number of extra cores in the system (if you can get away with it, of course).

 
I am able to split the program into equal parts, but it will also add some overhead.

Thanks for all your ideas!
 
Perhaps it is a good idea to use the COBVSC program again to compare some compilers.
I am curious about how well the compiler with the GNU license works.
Are there other COBOL sources available to do a compiler comparison?
I remember the NIST sources, but those were not much about performance.

Perhaps you remember the old thread
 
Crox said:
I remember the NIST sources, but those were not much about performance.

Actually, the NIST tests have nothing to do with measuring performance in terms of time or memory use. They are compliance tests, testing against the ISO COBOL specification. IIRC some are compile-only, or nearly compile-only (printing something at execution time to fit into an automated framework).

Tom Morrison
Hill Country Software
 