On Wed, Apr 3, 2013 at 11:15 PM, Kenneth Russell <email@example.com> wrote:
I think the most reliable way to adjust level of detail dynamically
would be to make adjustments based on the application's overall frame
rate over time, rather than measuring the GPU-side execution time of
individual draw calls. What do you think?
Measuring at the frame-rate level is insufficient for two reasons:

1. Frame rate is capped at 60fps, so you want to kick in LOD adjustments before you start dropping frames.

2. Not every aspect of rendering consumes the same relative time on every platform. For instance, you might render a bunch of clouds as point sprites (overdraw bound) alongside terrain (vertex throughput bound). For any given application, some platforms might have better fill rate than vertex throughput, and others better vertex throughput than fill rate. How do you know whether to render fewer clouds or less terrain?
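To make the point concrete, here is a minimal sketch of what per-pass-driven LOD selection could look like. It assumes per-pass GPU times are already available (e.g. via something like EXT_disjoint_timer_query); all names (`PassTiming`, `passToReduce`) are illustrative, not from any real engine.

```typescript
// Hypothetical sketch: pick which pass to scale back based on
// measured per-pass GPU time, rather than waiting for the overall
// frame rate to drop below 60fps.

interface PassTiming {
  name: string;      // e.g. "clouds" (overdraw bound) or "terrain" (vertex bound)
  gpuMs: number;     // measured GPU execution time for this pass
  lodLevel: number;  // current level of detail; higher = more detail
}

// Returns the pass whose LOD should be reduced, or null if the frame
// still fits comfortably within the budget.
function passToReduce(passes: PassTiming[], frameBudgetMs: number): PassTiming | null {
  const total = passes.reduce((sum, p) => sum + p.gpuMs, 0);
  // Kick in before frames are actually dropped: act at ~90% of budget.
  if (total < frameBudgetMs * 0.9) return null;
  // Scale back whichever pass costs the most GPU time on this
  // platform -- that is the platform-specific bottleneck (fill rate
  // vs vertex throughput), which frame rate alone cannot reveal.
  return passes.reduce((worst, p) => (p.gpuMs > worst.gpuMs ? p : worst));
}

// Example: on a fill-rate-limited device the cloud pass dominates,
// so clouds are the right thing to simplify.
const passes: PassTiming[] = [
  { name: "clouds", gpuMs: 12.0, lodLevel: 3 },
  { name: "terrain", gpuMs: 4.5, lodLevel: 3 },
];
const victim = passToReduce(passes, 1000 / 60); // ~16.7 ms budget
console.log(victim ? victim.name : "none"); // clouds
```

A frame-rate-only heuristic would see the same "total too slow" signal on both a fill-rate-limited and a vertex-limited device, and could not tell which pass to simplify; the per-pass times make that decision trivial.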