
Thread: cross platform/gpu floating point precision

  1. #1
    Junior Member
    Join Date
    May 2012
    Posts
    4

    cross platform/gpu floating point precision

    Hi,
    I'm running a particle system, double-buffering between two floating-point textures. After a number of iterations I record the particle positions. Running the simulation a second time on the same machine yields the same result. The problem occurs when running on a different PC with a different GPU: the particles behave broadly similarly, but the results differ between GPUs. I've tested a 460, a 560, a Quadro 4000 and a 5850. I don't care about absolute precision, as long as I can find a way to produce the same result on most recent GPUs (given that I need the float-texture extension). My initial thought was to set lowp, but that doesn't seem to change the results.

    Is it simply the case that GPU floating-point computation is unavoidably done at different precisions, or is there something I've missed or could try?

    I'm contemplating running a small floating-point test at initialization to determine at what precision operations are actually being performed, and adjusting my input data accordingly.
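    The probe idea could be sketched along these lines. `mantissaBits` and `probe` are hypothetical names of my own: `probe(eps)` stands in for a round trip through the GPU (render `(x + 1.0) - 1.0` for x = eps into the float texture and read the pixel back), and halving eps until the result collapses to zero counts the mantissa bits the device actually used.

```javascript
// Hypothetical sketch: infer mantissa width from a computed (1 + eps) - 1.
// probe(eps) is assumed to return that expression as evaluated by the
// device under test (a GPU readback in the real setup).
function mantissaBits(probe) {
  let bits = 0;
  let eps = 0.5;
  // Halve eps until 1 + eps rounds back to exactly 1.
  while (probe(eps) !== 0 && bits < 64) {
    bits += 1;
    eps /= 2;
  }
  return bits;
}

// Sanity checks on the CPU:
// JS doubles have a 52-bit mantissa:
//   mantissaBits(e => (1 + e) - 1)                          // → 52
// Simulated float32 via Math.fround has 23:
//   mantissaBits(e => Math.fround(Math.fround(1 + e) - 1))  // → 23
```

    If two GPUs report the same mantissa width, the divergence is presumably coming from somewhere other than raw storage precision.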

    Thanks in advance

  2. #2
    Junior Member
    Join Date
    May 2012
    Posts
    4

    Re: cross platform/gpu floating point precision

    Shouldn't there be an "edit" button somewhere? :S

    Anyway, I've been running tests back and forth between the 460 and the 560. The intermediate precision of floating-point operations appears to behave exactly the same, yet I'm still getting differences between the simulations. Both machines are running Firefox on Fedora 16, so I doubt it's a difference in the default OpenGL state. Perhaps there are rounding differences?

    The float precision qualifiers lowp, mediump and highp seem to have no effect at all. Are they simply a suggestion/hint?
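    For what it's worth, WebGL does let you query what each qualifier actually maps to via gl.getShaderPrecisionFormat. A hedged sketch, assuming a live WebGL context `gl`; `describe` is just a formatting helper of my own:

```javascript
// Hypothetical helper: summarize a WebGLShaderPrecisionFormat, whose
// fields are precision (mantissa bits) and rangeMin/rangeMax (log2 of
// the smallest/largest representable magnitudes).
function describe(fmt) {
  return `${fmt.precision} mantissa bits, exponent range -${fmt.rangeMin}..${fmt.rangeMax}`;
}

// Untested usage against a live context:
// for (const q of ['LOW_FLOAT', 'MEDIUM_FLOAT', 'HIGH_FLOAT']) {
//   const fmt = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl[q]);
//   console.log(q, describe(fmt));
// }
```

    If lowp and mediump report the same format as highp, the driver is promoting them, which would explain the qualifiers appearing to do nothing.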

    gl.getParameter(gl.RENDERER) returns a rather useless "Mozilla", as opposed to the C API, which gives e.g. "GeForce GTX 460/PCIe/SSE2".

    Does anyone know of a set of tests I could run to determine exactly which precision (mantissa/exponent length) and rounding modes are used for different single-precision floating-point operations?

    Is there some way to force the GPU to use a common set of defaults?

    Does anyone have any further ideas? For example: compilers reordering multiplies and divides, the rounding behaviour of gl_FragColor, inconsistent blending operations, texImage colour conversions, or differing rasterizer interpolation algorithms?
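    Some of those sources can at least be ruled out by pinning the relevant state explicitly. A sketch under assumptions (a WebGL context `gl`; `pinDeterministicState` is my own name; the texture parameters have to be set for each state texture while it is bound):

```javascript
// Disable state that commonly varies between implementations in a
// float-texture ping-pong simulation.
function pinDeterministicState(gl) {
  gl.disable(gl.BLEND);   // blending precision/rounding is implementation-defined
  gl.disable(gl.DITHER);  // dithering is on by default and varies per GPU
  // Avoid any colour conversion when uploading initial particle data.
  gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
  gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
  // NEAREST filtering sidesteps interpolation differences when the
  // simulation samples its own state texture (call per bound texture).
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
}
```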

