
Thread: Math constant definitions

  1. #1

    Math constant definitions

    Hello all, I was going through the cl_platform.h header(s) on my system, and I was surprised to find that the two definitions of pi (CL_M_PI and CL_M_PI_F) apparently disagree with each other:
    Code :
    #define  CL_M_PI            3.141592653589793115998
    #define  CL_M_PI_F          3.14159274101257f

    Moreover, these values are different from the ones that are found e.g. in the GNU C standard library headers, where for example we have
    Code :
    # define M_PI		3.14159265358979323846

    It is my understanding (and the findings e.g. here seem to agree) that at the precision of doubles and floats the definitions are essentially equivalent. However, I believe it is a little confusing to see these values, especially for someone familiar with the ones typically used instead.

    So my questions are:

    1) Why this choice?
    2) Would it be possible to amend the header files with either the 'correct' values or a comment explaining why they are not used?
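
    Whether the two decimal spellings of the double constant actually matter can be checked directly. The sketch below (the literals are copied from the post; no OpenCL headers are needed) compares the bit patterns of the two spellings:
    Code :
    ```c
    /* Sketch: check that the cl_platform.h spelling and the glibc
     * spelling of pi produce the same double. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const double cl_pi   = 3.141592653589793115998; /* CL_M_PI spelling */
        const double libc_pi = 3.14159265358979323846;  /* glibc M_PI spelling */

        /* Compare the underlying bit patterns, not just the values. */
        if (memcmp(&cl_pi, &libc_pi, sizeof(double)) == 0)
            printf("identical doubles\n");
        else
            printf("different doubles\n");
        return 0;
    }
    ```
    On an IEEE 754 system this prints "identical doubles": both literals round to the same nearest double.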

  2. #2
    Senior Member
    Join Date
    May 2010
    Location
    Toronto, Canada
    Posts
    845

    Re: Math constant definitions

    As far as I can tell, those decimal representations map to the same double-precision floating point value. Both are equally correct.

    The decimal representation of CL_M_PI was probably obtained by computing the double-precision floating point value that is closest to the true value of pi and then converting it back to decimal. On the other hand, glibc's M_PI decimal representation appears to come directly from the decimal expansion of pi. Both translate to the same double-precision floating point value, so it doesn't matter.

    Does it make sense?
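
    That round trip is easy to reproduce. A sketch, assuming a C library with correctly rounded printf (e.g. glibc): print the double nearest to pi with enough decimal digits, and the cl_platform.h spelling falls out.
    Code :
    ```c
    /* Sketch of how the CL_M_PI spelling was likely produced: take the
     * double nearest to pi and print it back with 21 decimal digits. */
    #include <stdio.h>

    int main(void)
    {
        const double pi = 3.14159265358979323846; /* rounds to nearest double */
        printf("%.21f\n", pi); /* prints 3.141592653589793115998 */
        return 0;
    }
    ```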
    Disclaimer: Employee of Qualcomm Canada. Any opinions expressed here are personal and do not necessarily reflect the views of my employer.

  3. #3

    Re: Math constant definitions

    Quote Originally Posted by david.garcia
    As far as I can tell, those decimal representations map to the same double-precision floating point value. Both are equally correct.

    The decimal representation of CL_M_PI was probably obtained by computing the double-precision floating point value that is closest to the true value of pi and then converting it back to decimal. On the other hand, glibc's M_PI decimal representation appears to come directly from the decimal expansion of pi. Both translate to the same double-precision floating point value, so it doesn't matter.

    Does it make sense?
    Thank you for your reply. Indeed, at the precision at which IEEE floats and doubles operate, the decimal representation in the header and the one closer to the true value of pi in arbitrary precision result in the exact same value. And indeed it doesn't matter, since the actual bit pattern compiled in will be exactly the same.

    However, I do believe that using the binary-to-decimal conversion of the approximation, rather than a truncation of the decimal expansion of pi, can be somewhat confusing when reading the headers. And since it makes absolutely no difference to the compiler, I don't actually understand why the machine-derived representation was chosen over the one more familiar to humans.
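
    The same round trip explains the single-precision spelling as well. A sketch, again assuming a correctly rounded printf: print the float nearest to pi back in decimal and the CL_M_PI_F digits appear.
    Code :
    ```c
    /* The float nearest to pi, printed back with 14 decimal digits,
     * reproduces the CL_M_PI_F spelling. */
    #include <stdio.h>

    int main(void)
    {
        const float pi_f = 3.14159265358979323846f; /* rounds to nearest float */
        printf("%.14f\n", (double)pi_f); /* prints 3.14159274101257 */
        return 0;
    }
    ```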

