
Initialising REAL variables (to be equal to zero)


billgray1234

Programmer
Mar 14, 2011
39
All REAL variables need to be given a value (i.e. initialised) if they are to be 1) written to the screen, 2) written to a file, or 3) used in calculations. For example, a common way to initialise such variables is to set them equal to zero.


Such variables include "cumulative sum variables" (CSVs) and "direct calculation sum variables" (DCSVs). An example of each (as applied to a scalar variable) is given below.


CUMULATIVE SUM Variables
------------------------

SUM = 0.0
DO I = 1 , N
   SUM = SUM + REAL(I) + 3.0
END DO

The important line is "SUM = 0.0" (i.e. it initialises SUM to zero).


DIRECT CALCULATION SUM Variables
--------------------------------

DO I = 1 , N
   SUM = REAL(I) + 3.0
END DO

Note that the line "SUM = 0.0" is not included here.

My question is: like CSVs, do DCSVs also always need to be set equal to zero (i.e. initialised) before they are manipulated? In other words, is the line "SUM = 0.0" also needed (before the DO loop) in the "direct calculation sum variables" example above, or is that example correct as it is?



Another example (as applied to a matrix variable) is as follows.
Say I have a matrix A. The matrix is, say, 1000 x 1000, and is "partitioned" into the equal-size blocks A1, A2, A3 and A4, such that A1 = upper left block, ..., A4 = lower right block. Say I only ever manipulate block A1 in my calculations, and I only ever want to write block A1 (and not any of the other blocks) to either the screen or a file. My questions are:

1) Do I need to initialise (i.e. set equal to zero) the ENTIRE content of matrix A (i.e. ALL blocks) during my calculations, or do I only need to initialise block A1?

2) Do I need to initialise (i.e. set equal to zero) block A1 if it is only ever used as a direct calculation sum variable, only if it is used as a cumulative sum variable, or in both cases?


In summary, my questions come down to this: it helps to initialise (i.e. set equal to zero) variables, but is it always NECESSARY?
 
There is only one rule to remember:
it is forbidden to USE a variable which has not been initialized.

For instance :

Code:
DO I = 1 , N
   SUM = SUM + REAL(I) + 3.0
END DO

As you use SUM when I = 1 (it appears on the right-hand side of an assignment), SUM must have been initialized before the loop.

Code:
DO I = 1 , N
   SUM = REAL(I) + 3.0
END DO

Here, on the contrary, as you never use SUM inside the loop (you only define it), there is no need to initialize it before the loop.
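
For what it's worth, a complete, compilable version of both fragments might look like this (the value N = 5 and the PRINT lines are placeholders added only for illustration):

Code:
PROGRAM demo_init
   IMPLICIT NONE
   INTEGER :: I, N
   REAL    :: SUM

   N = 5

   ! Cumulative sum: SUM appears on the right-hand side of the assignment,
   ! so it must be initialized before the loop.
   SUM = 0.0
   DO I = 1 , N
      SUM = SUM + REAL(I) + 3.0
   END DO
   PRINT *, 'cumulative sum    :', SUM

   ! Direct calculation: SUM is only defined (assigned) inside the loop,
   ! so no initialization is needed before it.
   DO I = 1 , N
      SUM = REAL(I) + 3.0
   END DO
   PRINT *, 'last direct value :', SUM
END PROGRAM demo_init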


François Jacq
 
Billgray,

the rule is very simple:

When you want to use a variable (write it anywhere or use it in calculations), your computer has to know the value of that variable. So before using it you have to give it a value by either
- reading data into it from a file or user input,
- evaluating an expression and assigning the result to it, or
- initialising it.

Variables or array elements that you never use - why do they exist then anyway? - don't need to be initialised.

But there is no sense in long musings over whether you have to initialise or not, because initialising is fairly simple. So when you are in doubt whether you should initialise, do the initialising. Include the initialisation in your type declaration like

real, dimension (1000, 1000) :: rValue = 0.0

and that's it. This initialises the array rValue at compile time, that is, each element of rValue is 0.0 when your program starts execution. If you use rValue in a subroutine or function that is executed more than once per run of your program and you need the array zeroed every time the routine is entered, then include the statement

rValue = 0.0

before you use rValue and you are done.
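
A small sketch of that second pattern (the module and routine names here are made up, just for illustration):

Code:
module stiffness_mod
   implicit none
   real, dimension (1000, 1000), save :: rValue = 0.0   ! zeroed once, when the program starts
end module stiffness_mod

subroutine assemble ()
   use stiffness_mod
   implicit none
   integer :: i

   rValue = 0.0                                 ! re-zero the whole array on every call
   do i = 1, 1000
      rValue (i, i) = rValue (i, i) + real (i)  ! cumulative updates are now safe
   end do
end subroutine assemble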

Norbert






The optimist believes we live in the best of all possible worlds - the pessimist fears this might be true.
 
Thanks for your replies!

I agree with you -- if it doesn't cost much to initialise a whole matrix, then why not just do it?

Well, there can be situations where it might be costly (time-wise) to do so. I'll give an example below -- it might also help answer your question "if variables (array elements) are unnecessary, then why do they exist?".

For anyone who is familiar with "finite element analysis" (FEA): an important step is solving the system of equations F = KU for U, where F and U are column vectors and K is the "system stiffness matrix" (a square matrix). Note that K = K(X), where X is a stiffness factor. For each sample value of X, K first needs to be initialised (i.e. K = 0); then it is assembled, as follows. The "system" is comprised of one or more "finite elements". Basically, an "element stiffness matrix" Ke = Ke(X) is computed for each finite element, and all the Ke's are added to (assembled into) the system stiffness matrix K, as appropriate. Then, after F is computed in a similar fashion, the system is solved for U.
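
Just to make the assembly step concrete, here is a toy sketch of what I mean (a chain of three 1D "spring" elements with a made-up Ke(X); all names and numbers are illustrative only):

Code:
program assemble_demo
   implicit none
   integer, parameter :: nDof = 4, nElem = 3
   real    :: K (nDof, nDof), Ke (2, 2), X
   integer :: conn (2, nElem), iElem, i, j

   ! connectivity of a chain of 1D elements: element e joins dofs e and e+1
   do iElem = 1, nElem
      conn (:, iElem) = (/ iElem, iElem + 1 /)
   end do

   X = 2.0                 ! one sample value of the stiffness factor
   K = 0.0                 ! K is a cumulative sum variable, so zero it first

   do iElem = 1, nElem
      Ke = X * reshape ((/ 1.0, -1.0, -1.0, 1.0 /), (/ 2, 2 /))   ! toy element matrix Ke(X)
      do j = 1, 2
         do i = 1, 2
            K (conn (i, iElem), conn (j, iElem)) = &
               K (conn (i, iElem), conn (j, iElem)) + Ke (i, j)
         end do
      end do
   end do

   print *, 'diagonal of K:', (K (i, i), i = 1, nDof)
end program assemble_demo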

Now, say I have a "general-purpose" program which ALLOWS for up to 10,000 finite elements -- the maximum allowable size of K could then be of the order of 50,000 x 50,000. But I might want to apply this program to a very small "specific" example consisting of only 100 finite elements -- it might require an example-specific K matrix of the order of only 500 x 500, i.e. only a small block of the maximum allowable K. It follows that:

1) the majority of entries in the maximum allowable K (i.e. 50,000 x 50,000) will not be needed and so will not need to be initialised;

2) the difference in time between initialising the maximum allowable K (i.e. 50,000 x 50,000) and initialising only the example-specific K (i.e. 500 x 500) could be significant. And, again, K must be initialised for EVERY sample value of X (and there could be MANY sample values).
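
To put point 2 in code terms: if the active part of K is only its leading block, something like the following re-zeroes just that block for each sample of X (nMax, nActive and nSamples are made-up names for illustration):

Code:
program block_zero_demo
   implicit none
   integer, parameter :: nMax = 500        ! maximum allowable size of K
   integer :: nActive, iSample, nSamples
   real    :: K (nMax, nMax), X

   nActive  = 50                           ! size actually used by this specific example
   nSamples = 10

   do iSample = 1, nSamples
      X = real (iSample)
      K (1:nActive, 1:nActive) = 0.0       ! zero only the block that is accumulated into
      ! ... assemble the element matrices Ke(X) into K(1:nActive,1:nActive) here ...
      ! ... then solve F = K U using only that block ...
   end do
end program block_zero_demo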

Anyway, that's basically why I asked MY questions. I hope that answers YOUR question(s) -- let me know if it doesn't!
 
I am a little familiar with FEA, so I understand your problem. But this kind of problem is fairly common with other tasks as well: the amount of memory, that is the number of data items you want to hold in memory, is not constant.

The solution for this kind of problem is dynamic memory allocation. It would look something like the following:

Code:
..
..
(type declarations)
..
..
real, allocatable, dimension (:,:,:), save :: K
..
..
j1Max = ......  ! compute the number of data as needed for this special run
j2Max = ......
j3Max = ......
..
..
allocate (K (j1Max, j2Max, j3Max), stat = i)
if (i .ne. 0) then
   ! issue an error message: apparently there is not enough memory available
endif
..
..
(use K)
..
..
deallocate (K)   ! clear memory when you no longer need K
..
! End of program

This is the standard approach when you do not know in advance the amount of memory your program will ever need, but your program can, while executing, work out the number of data items for the actual problem.



The optimist believes we live in the best of all possible worlds - the pessimist fears this might be true.
 
billgray:

If you are serious about time and FEA, you'd better start looking at sparse techniques, too.

On another thread, somebody was attempting to initialize a matrix without realizing that he was asking for terabytes!...it's just not going to happen.

In FEA, I presume you are going to have a lot of zeroes in your matrix...

...and I am not talking about the top-right and bottom blocks of your statically over-dimensioned matrix, nor about your dynamically dimensioned (Nelements x Nelements) matrix...

...for as long as you declare a square matrix, you will typically have more zeroes than non-zeroes, correct? Simply because an element can only have so many neighbours, but there is no limit on the number of elements, so the non-existing coupling between the i-th and j-th elements yields K(i,j) = 0...

...sparse techniques store only as many matrix entries as necessary, without defining the ones in between... a great space and time saver...
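
As a very rough illustration of the idea, coordinate-format (COO) storage might look something like this -- just a sketch with made-up names, not any particular sparse library:

Code:
program coo_demo
   implicit none
   integer, parameter :: maxNnz = 100000   ! upper bound on the number of non-zeros
   integer :: rowIdx (maxNnz), colIdx (maxNnz), nnz
   real    :: val (maxNnz)

   nnz = 0

   ! instead of K(i,j) = K(i,j) + contribution on a full square matrix,
   ! accumulate into a list of (row, column, value) triplets of non-zeros only
   call add_entry (3, 7, 2.5)
   call add_entry (7, 3, 2.5)
   call add_entry (3, 7, 1.0)

   print *, 'stored non-zeros:', nnz

contains

   subroutine add_entry (i, j, a)
      integer, intent (in) :: i, j
      real,    intent (in) :: a
      integer :: k
      do k = 1, nnz                        ! accumulate if (i,j) is already stored
         if (rowIdx (k) == i .and. colIdx (k) == j) then
            val (k) = val (k) + a
            return
         end if
      end do
      nnz = nnz + 1                        ! otherwise append a new triplet
      rowIdx (nnz) = i
      colIdx (nnz) = j
      val (nnz)    = a
   end subroutine add_entry

end program coo_demo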



 
Salgerman,

This time it was me who held back on the memory usage involved :)
and waited till we got to discussing the error messages. (And 10 GB sounds really reasonable compared to the other thread; you would need a 64-bit environment to handle it, though.)

This is a really good example of what dynamic allocation is for: if billgray dimensioned his variables for the maximum problem his prog should ever be able to handle, then the prog might be too big for many a platform and would be a waste of resources most of the time, when it deals with smaller problems.



The optimist believes we live in the best of all possible worlds - the pessimist fears this might be true.
 
gummibaer:

You're probably right, I might have gotten ahead of myself...predisposed by that other thread!

I just saw FEA and, knowing the size of our FEA models here at the office, I know it will eventually get huge; of course, that is at the commercial level, with commercial FEA programs. To start with, billgray probably has a 'toy' program or one that solves a very specific problem/geometry that will not get any larger...

cheers
 
Thanks for your replies!

In answer to your questions:

Firstly, about the program I'm writing. It IS a "general-purpose" program. It allows for many finite elements -- but not THAT many, lol! The numbers I gave in the example above (i.e. 10,000 finite elements and a 50,000 x 50,000 K matrix) were VERY EXAGGERATED! I only gave them to demonstrate a POSSIBLE problem size (and, hence, the reasoning behind my original questions). My program allows for a mere 200 (or so) finite elements and a 500 x 500 (or so) K matrix. And yes, I agree with you -- a 50,000 x 50,000 K matrix might require more memory than a "regular" computer can provide, which is why I'm using more "reasonable" maximum dimension sizes.


ALLOCATABLE ARRAYS / MATRICES (dynamic memory usage) -- yes, I'm using them, for the reasons you've suggested. But, due to the complicated nature of my program, it's not easy for me to use ONLY allocatable arrays. So I'm instead using them WHEREVER POSSIBLE -- and using fixed-size arrays (with practical allowable maximum dimensions) ELSEWHERE. But, again, I agree with your suggestion of using allocatable arrays -- fully utilising the benefits of Fortran 90/95.


SPARSE MATRIX TECHNIQUES -- ditto. I'm using them, for the reasons you've suggested.


Finally, if you can think of any other tips/suggestions about this line of programming, then definitely send them through. Even though I am reasonably familiar (experienced) with Fortran 90/95 and FEA, I'm still learning new things every day!

Thanks again for your replies!
 

Well... since you asked for more suggestions for your FEA program... are you using parallel programming?

I have never used it, but I thought I'd throw it your way :)
 
Yes, I've heard of parallel programming, and I AM / WAS interested in learning it -- I even went so far as to obtain every resource I could find on it (in particular, on parallel programming using OpenMP). But, unfortunately, with the current project I'm working on, it turns out that I now have less time than I was originally assigned (long story). So now I DON'T have enough time to learn parallel programming. But thanks for mentioning it anyway.
 