
Double precision


MrF95 (Technical User)
Jan 17, 2010
Sorry, but I'm a newbie with Fortran, so I don't get why the following program keeps evaluating x as a single precision variable. Why are there non-zero decimals?

Mark.

-----------------------------------------
program precision_test
implicit none
integer, parameter :: sgl = selected_real_kind(p=6)
integer, parameter :: dbl = selected_real_kind(p=15)
real (kind=dbl) :: x
x = dble(3.140000000000000)
write(*,*)'Single precision kind=', sgl
write(*,*)'Double precision kind=', dbl
write(*,*) 'precision of x=', precision(x)
write(*,*) 'x=',dble(x)
end program precision_test
------------------------------------------

The Output I get is:

Single precision kind=4
Double precision kind=8
Precision of x=15
x=3.1400001049041748
 
Hi
Just try:

!x = dble(3.140000000000000)
x = 3.14_dbl
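
The reason: dble(3.14) converts a default real (single precision) constant after its extra digits are already gone, whereas 3.14_dbl is a double precision constant from the start. A minimal corrected version of your program (same names, untested sketch) would be:

Code:
program precision_test_fixed
  implicit none
  integer, parameter :: dbl = selected_real_kind(p=15)
  real(kind=dbl) :: x
  ! The kind suffix makes the literal itself double precision,
  ! so no digits are lost before the assignment.
  x = 3.14_dbl
  write(*,*) 'precision of x =', precision(x)
  write(*,*) 'x =', x
end program precision_test_fixed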
 
Oh, yes... thanks. :)
BTW, would triple precision be possible?
 
Depends on your compiler. Some compilers offer what they call quad precision.

Check whether your compiler accepts any of the following (a more portable probe is sketched below):

real(kind=3) - if real(kind=2) is DP, use this option
real(kind=10)
real(kind=12) - typically on 32-bit platforms
real(kind=16) - typically on 64-bit platforms
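
Note that the kind numbers above are compiler specific. A more portable check (just a sketch) is to ask selected_real_kind for the precision you want and see what comes back; a negative value means the compiler has no such type:

Code:
program probe_kinds
  implicit none
  integer, parameter :: dp = selected_real_kind(p=15)   ! double precision
  integer, parameter :: ep = selected_real_kind(p=18)   ! 80-bit extended, if any
  integer, parameter :: qp = selected_real_kind(p=30)   ! quad precision, if any
  write(*,*) 'double   kind =', dp
  write(*,*) 'extended kind =', ep, ' (negative means not available)'
  write(*,*) 'quad     kind =', qp, ' (negative means not available)'
end program probe_kinds

If qp comes back positive, real(kind=qp) gives you the quad type whatever number your compiler happens to use for it.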
 
I don't know whether I'm right, but I'm quite sure that the need for double precision can always be avoided if you normalise your problem properly.
 
Yes, that's what I have heard too! Does anybody know how to do that? Any reference?

Meanwhile, here is my problem...

I have to integrate the orbit of a comet around a star which itself orbits in the Galaxy. How big is the Galaxy? Huge... it depends. For example, let's take the Sun's orbit as the reference dimension, which is about 8000 parsec. A parsec is about 3*10^16 meters, so 8 kpc = 2.5*10^20 meters. Therefore, in double precision (15 digits) one has a truncation error of about 2.5*10^5 meters, i.e. about 250 km at each integration time step! This might seem like nothing compared to the Galaxy as a whole, but it is a horrendous error with respect to the Sun, because it propagates over thousands of cycles when calculating the orbit of the comet and may end in a completely wrong curve.

Why not use a Sun-centered reference system? Because the final aim (a project for years to come...) is to be able to integrate particle orbits not only around a single star, but in stellar clusters that interact with each other and migrate throughout the Galaxy.
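
(As a sanity check on that number: the spacing() intrinsic returns the gap between adjacent representable doubles near a given value; at 8 kpc it comes out around 3*10^4 m, i.e. a few tens of km, so the estimate above is of the right order of magnitude. A minimal sketch:)

Code:
program galactic_resolution
  implicit none
  integer, parameter :: dbl = selected_real_kind(p=15)
  real(kind=dbl) :: r_sun
  r_sun = 2.5e20_dbl            ! ~8 kpc in metres
  ! spacing() gives the distance between neighbouring representable
  ! doubles near r_sun, i.e. the best absolute resolution there
  write(*,*) 'resolution at 8 kpc [m] =', spacing(r_sun)   ! ~3.3e4 m
end program galactic_resolution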

So I got stuck with this.... has someone an idea?
 
XWB: My compiler accepts kind=10, but it remains double precision.
 
I'm not an expert on the matter, but I think you have to use the gravity field of the closest, most influential body as the centre and treat the rest, those bodies that happen to be far away, as perturbations. There must be a definition of a sphere of gravitational influence for each body.

So in your case, you take the star around which the comet is moving as the centre and you consider the rest of the galaxy as a perturbation on this gravity field.

Since, from the point of view of the galaxy, the star and the comet are approximately equally far away, the accelerations the galaxy imparts on the comet and on the star are approximately equal, so their relative acceleration is small.
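
A rough sketch of what that could look like in code: keep the comet's coordinates relative to its host star, and let the galaxy enter only as the difference of two nearly equal accelerations. (The point-mass galaxy below is just a placeholder potential and GMgalaxy is a made-up number, not a recommendation.)

Code:
module star_centred
  implicit none
  integer, parameter :: dbl = selected_real_kind(p=15)
  real(kind=dbl), parameter :: GMstar   = 1.327e20_dbl   ! G*M_sun [m^3/s^2]
  real(kind=dbl), parameter :: GMgalaxy = 1.3e31_dbl     ! placeholder galactic G*M
contains
  ! Galactic acceleration at galactocentric position r (placeholder model).
  function galaxy_accel(r) result(a)
    real(kind=dbl), intent(in) :: r(3)
    real(kind=dbl) :: a(3)
    a = -GMgalaxy * r / sqrt(sum(r**2))**3
  end function galaxy_accel

  ! Acceleration of the comet *relative to its host star*.
  subroutine comet_accel(r_star, r_rel, a_rel)
    real(kind=dbl), intent(in)  :: r_star(3)   ! star position, galactocentric [m]
    real(kind=dbl), intent(in)  :: r_rel(3)    ! comet position relative to star [m]
    real(kind=dbl), intent(out) :: a_rel(3)
    real(kind=dbl) :: d
    d = sqrt(sum(r_rel**2))
    ! two-body term acts on small numbers, so no resolution is wasted
    a_rel = -GMstar * r_rel / d**3
    ! the galaxy enters only as the difference of two nearly equal fields,
    ! i.e. as a small tidal perturbation
    a_rel = a_rel + (galaxy_accel(r_star + r_rel) - galaxy_accel(r_star))
  end subroutine comet_accel
end module star_centred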
 
Well... I may be forced to resort to a single star-centered system one way or another. But, as I said, that would prevent me from generalizing the simulation to comet orbits in star clusters (not only around a single star), i.e. undergoing the gravitational pull of multiple star systems, which was, after all, the final aim.

BTW, doesn't every Intel processor internally compute floating-point numbers in 80-bit format, regardless of the precision chosen? Which compilers fully exploit that? I looked up Intel Visual Fortran but could not find where it declares precision and range. Has anyone an idea?
 
I don't know about that.

I know that if you use single or double precision you still don't know how many bytes your variables occupy, because the default kinds are compiler (or machine) dependent.

If you want to be sure about the precision of every variable you can use the SELECTED_INT_KIND and SELECTED_REAL_KIND intrinsics:
Code:
INTEGER, PARAMETER :: I1B=SELECTED_INT_KIND(2)    ! holds at least 10**2  (typically 1 byte)
INTEGER, PARAMETER :: I2B=SELECTED_INT_KIND(4)    ! holds at least 10**4  (typically 2 bytes)
INTEGER, PARAMETER :: I4B=SELECTED_INT_KIND(9)    ! holds at least 10**9  (typically 4 bytes)
INTEGER, PARAMETER :: I8B=SELECTED_INT_KIND(18)   ! holds at least 10**18 (typically 8 bytes)
INTEGER, PARAMETER :: R4B=SELECTED_REAL_KIND(p=6)   ! >= 6 decimal digits  (typically 4 bytes)
INTEGER, PARAMETER :: R8B=SELECTED_REAL_KIND(p=15)  ! >= 15 decimal digits (typically 8 bytes)
INTEGER(KIND=I4B) :: MyFourByteInteger
REAL(KIND=R8B) :: MyEightByteReal

(Note: for the real kinds you have to request a precision with p=; requesting only an exponent range with r= would give you back the smallest kind that covers it, which is single precision.)

If you do it like this, MyEightByteReal will ALWAYS have at least 15 decimal digits of precision (an 8-byte real on practically every compiler), independent of the compiler or the hardware you're using.
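
As a quick check, something like the sketch below shows what you actually got; precision() and range() are standard Fortran 90 intrinsics, and on most compilers the p=15 kind maps to an 8-byte IEEE double:

Code:
program check_kinds
  implicit none
  integer, parameter :: R8B = selected_real_kind(p=15)
  real(kind=R8B) :: x
  write(*,*) 'kind      =', kind(x)
  write(*,*) 'precision =', precision(x)   ! decimal digits, at least 15
  write(*,*) 'range     =', range(x)       ! decimal exponent range
end program check_kinds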
 
kind=8 is 64 bits
kind=10, on some compilers, is 80 bits

You might see the increased precision with kind=10, but only if you print enough decimal places.
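
For example, something along these lines shows the extra digits only when you ask for them in the format. (Assumption: your compiler has an extended kind at all; selected_real_kind(p=18) usually maps to gfortran's kind=10 on x86 and returns a negative value otherwise.)

Code:
program extended_digits
  implicit none
  ! selected_real_kind(p=18) usually gives gfortran's kind=10 (80-bit x87 type);
  ! on compilers without such a type it returns a negative value and this
  ! sketch will not compile.
  integer, parameter :: ep = selected_real_kind(p=18)
  real(kind=ep) :: x
  x = 4 * atan(1.0_ep)          ! pi, evaluated in extended precision
  write(*,*) x                  ! list-directed output: compiler decides the digits
  write(*,'(F25.21)') x         ! explicit format: shows ~19-20 significant digits
end program extended_digits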
 
xwb, with kind=10 on the GNU Fortran compiler I don't get more than 18 digits of precision; I can see that by subtracting two almost equal numbers. I don't have Visual Fortran, but I assume an 80-bit format should give more (it depends on how the mantissa and exponent bits are split internally, of course). But I would be surprised if Intel produced a processor and then offered a compiler that does not fully exploit it. That would be strange. I'm not willing to buy something before knowing for sure that it will do the job.
 
g95 supports kind=16, which is supposed to give 34 s.f. You could give that a try.
 
Hmmm... for "real (kind=16) :: var" it tells me: "Kind 16 not supported for type Real". :-(
 
Sorry - that is only available on a 64-bit OS. On 32 bits, the maximum is kind=10, which has a precision of 18 significant digits.
 
Yes, indeed. But benchmarks show that quad precision can be 10 to 100 times slower than double, because it is emulated in software rather than hardwired in the CPU. That would lead to unacceptable execution times. I would have been happy with "only" 23 s.f., and I really wonder why this can't be done on an 80-bit processor. I'm afraid I will have to give up and be content with simulating a more conservative system... :-(
 
How about shifting the point of reference so that you don't need so much accuracy? For instance, if you are working around the Gulf of Guinea, then localize the calculations around that area instead of using geocentric coordinates. That way, you get more accuracy.

Most games go from room to room or over small areas - each room runs a localized coordinate system. You could do the same in your simulations.
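
To put a number on that idea (just a toy sketch): at 8 kpc the spacing between representable doubles is about 33 km, so a 1 km offset stored in galactocentric coordinates simply vanishes, while the same offset kept in a star-centred variable retains full double precision:

Code:
program local_frame
  implicit none
  integer, parameter :: dbl = selected_real_kind(p=15)
  real(kind=dbl) :: star, comet_global, offset
  star   = 2.5e20_dbl                  ! ~8 kpc from the galactic centre [m]
  offset = 1.0e3_dbl                   ! comet sits 1 km from the star
  comet_global = star + offset         ! galactocentric coordinate of the comet
  ! In the global frame the 1 km is smaller than the ~3.3e4 m spacing at 8 kpc,
  ! so it is rounded away completely:
  write(*,*) 'recovered offset, global frame [m]:', comet_global - star   ! prints 0.0
  ! In a star-centred (local) frame the same quantity keeps full precision:
  write(*,*) 'resolution of the local coordinate [m]:', spacing(offset)   ! ~1e-13
end program local_frame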
 
Yes, but multiple systems of reference (SR), each containing a subset of commonly shared particles and simulated at once, are needed here, since clusters of stars have to be considered and the stars migrate. See e.g. the mpg video linked at the end of the article, which clarifies the problem very nicely.

Imagine you want to simulate the orbits of comets whirling around, say, 10 stars, which first interact mutually, exchange comets among themselves, and then migrate throughout the Galaxy. One cannot simply change the SR; one has to consider 10 SRs at once, taking into account which particles are shared and which are not. In "galactocentric" coordinates that is easy, almost immediate, but otherwise it's an awful complication, and all this only because 4-5 s.f. are missing... [sadeyes]
 
Do you have a machine that is capable of running a 64-bit OS (even as a virtual machine)? You could try the 64-bit compilers - they do 16-byte reals.
 