Hi,
I'm attempting to improve an existing application that makes excessive trips to the hard drive, by buffering large amounts (>20 MB) of data in RAM rather than making so many reads/writes. The app is written in C, and I am using the standard malloc() call to allocate memory dynamically. The problem is that if I ask the box for more than exactly 3,888,000 bytes (3.888 MB), malloc() fails and I get a core dump.
I've read some docs on UNIX memory management, so I don't think it's a problem with the UNIX side of the OS. My per-process memory allowance should be sufficient for my needs, and I've had the same problem both on a live box with a number of competing users and on a development box where mine was the only demand on resources. The dev box is mirrored from the live box, though, so my hunch is that something in HP-UX is restricting my malloc(). We're running HP-UX 10.20 right now.
Thanks for reading!