I have a bit of a problem - I'm wondering if there is a handy way to access a 5-dimensional array holding about 1.57 BILLION double-precision numbers! That amounts to about 12GB of data, which through symmetry and some optimization we've trimmed down to about 3GB. The array holds values that later must be summed in a fairly random fashion (quantum mechanical perturbation theory is the application here). At the moment, the array takes about a day to calculate and populate in a standalone application (although it has been parallelized and configured for distributed computing quite nicely). Anyway, the goal is to avoid 64-bit machines since they are so expensive, but with a 3GB array, 32-bit machines are being pushed to the limit.
On to my actual question: please advise on the viability of this.
I'm going to try to spin off 10 threads (maybe an additional one for control), each of which will hold 1/10th of the data. The memory required per thread is then a more manageable 310MB apiece. In essence, I'll do something like thread4.storeYterm(2,4,39,200) and thread4.getYterm(2,4,39,200). Is this the best way to do it (rather than one huge global array)? The machine that does the final "assembling" (calculation) *will* have 4GB of RAM, so there shouldn't be any VM thrashing.
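In case it helps clarify what I mean, here is a rough sketch of the partitioning idea. All the names (YTermStore, storeYterm, getYterm) and the dimension bounds are just made up for illustration - the real bounds come from the physics. One caveat I'm aware of: threads in a single JVM share one heap, so sharding like this mainly works around the limit on any single array allocation rather than reducing the total footprint.

```java
/**
 * Hypothetical sketch: split one huge logical array into N shards so that
 * no single allocation is enormous. Dimension bounds below are examples only.
 */
public class YTermStore {
    // Example dimension bounds; the real ones come from the problem.
    static final int A = 4, B = 8, C = 50, D = 256;

    private final double[][] shards; // one slab per "worker"
    private final int shardSize;

    public YTermStore(int numShards) {
        long total = (long) A * B * C * D;
        shardSize = (int) ((total + numShards - 1) / numShards);
        shards = new double[numShards][shardSize];
    }

    // Flatten the multi-index into one linear offset (row-major order).
    private long flatten(int a, int b, int c, int d) {
        return (((long) a * B + b) * C + c) * D + d;
    }

    public void storeYterm(int a, int b, int c, int d, double v) {
        long idx = flatten(a, b, c, d);
        shards[(int) (idx / shardSize)][(int) (idx % shardSize)] = v;
    }

    public double getYterm(int a, int b, int c, int d) {
        long idx = flatten(a, b, c, d);
        return shards[(int) (idx / shardSize)][(int) (idx % shardSize)];
    }
}
```

The real version would have each shard owned by its own thread (or remote machine), with storeYterm/getYterm dispatching to whichever one owns the flattened index.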
Thanks,
Matchwood