Re: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
Actual timings on the user's machine are indeed the gold standard for performance measurements. However, slamming the user's machine for a few seconds before they get to see the goodies is not so nice. A table look-up based on some strings, by contrast, can be practically instantaneous.
Another benefit of having a site like webgl-bench to aggregate performance measurements and index them by graphics card is that it can help users diagnose their own performance issues. It can also point users toward higher-performing browsers/OS's/hardware, and motivate vendors to improve their performance. If it's difficult to tag measurements by GPU, the data will be less complete and less useful. We at least have browser and OS info via userAgent, but for WebGL performance the GPU is by far the most important factor.
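As a rough illustration of the "table look-up based on some strings" idea, here is a minimal sketch in JavaScript. The tier table entries and the `lookupTier` helper are hypothetical; in a real page the string would come from `gl.getParameter(gl.RENDERER)` (or, where exposed, the WEBGL_debug_renderer_info extension).

```javascript
// Hypothetical table mapping RENDERER strings to a performance tier.
// The entries below are illustrative, not real measurements.
const gpuTiers = {
  "ANGLE (NVIDIA GeForce GTX 460 Direct3D9 vs_3_0 ps_3_0)": "fast",
  "Intel GMA 950": "slow",
};

function lookupTier(rendererString) {
  // Fall back conservatively for hardware we have no data on.
  return gpuTiers[rendererString] || "unknown";
}

// In a browser this would be driven by the live context:
//   const gl = canvas.getContext("webgl");
//   const tier = lookupTier(gl.getParameter(gl.RENDERER));
console.log(lookupTier("Intel GMA 950"));        // "slow"
console.log(lookupTier("Some Future GPU 9000")); // "unknown"
```

The lookup itself is instantaneous, which is the whole appeal; the cost is maintaining the table, which is exactly what aggregating measurements on a site like webgl-bench would address.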
On Nov 30, 2010 7:49 PM, "Benoit Jacob" <firstname.lastname@example.org> wrote:
> ----- Original Message -----
>> Given that apps that are pushing the edge of perf are all going to
>> have different focus areas, can those apps just perform some timing at
>> initialization time and decide for themselves? For example, the
>> difference between using software-emulated vertex shader texture
>> fetches and using a different path should be very noticeable with some
>> judicious use of glFinish(). The app can even cache that data using
>> local storage on the client so that it doesn't have to do it each time.
> Argh, you beat me to it --- this had just occurred to me too. Nothing will beat a real-world test like this:
> t1 = current_time();
> render_test_workload(); glFinish(); // placeholder workload; glFinish() forces the GPU to actually complete it
> t2 = current_time();
> elapsed = t2 - t1;
> Although this would have to be repeated a few times, keeping only the best timing, to get a reliable result (I'm especially concerned about GC pauses).