
Re: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings



>> >> For application authors, there is immense value to be had from being
>> >> able to determine which card and drivers the user has - both at run
>> >> time (so the application can work around bugs)
>> >
>> > It seems to me that we can realistically aim for good enough WebGL
>> > implementations that this shouldn't be needed. I would be very
>> > interested in the list of graphics-card-specific things that you
>> > need to do on Minefield, I would try to handle these as bugs in our
>> > implementation.

In the case of the example I gave - I don't see how you CAN fix this in
WebGL.  There are many areas of OpenGL where this problem arises - I'm
going to continue to use the vertex texture example - but there are MANY
others.

In this example, the underlying driver is queried to see how many vertex
texture units it supports.  There are three kinds of card/driver out there:

Type (1): Says "0" because its hardware can't support this feature.  Fair
enough - it doesn't support vertex textures, so I have to work around that
by doing something different (no object instancing, skeletal meshes
limited to 30 bones, particle systems computed on the CPU side) - and
instead of getting 30Hz frame rates, I get only 20Hz...and my animation
isn't quite so slick...but the game is very playable.

Type (2): Says ">0" because it has hardware support - GREAT!  I can use
vertex textures and get great quality/performance improvements.  I get a
30Hz frame rate and everything looks good.

Type (3): Says ">0"...because although its hardware can't support the
feature, the driver can drop back to running the entire vertex shader in
software on the CPU and thereby provide vertex textures.  If I use a
vertex texture on type (3) hardware, my frame rate drops from ~20/30Hz to
worse than 1Hz!!  The game is utterly unplayable.  Older nVidia cards do
this under Vista...I'm sure there are others.

If I can tell that I have a type(3) card - then I can use the fallbacks I
have for type(1) cards and the game is once again playable.
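
For concreteness, the query I'm talking about is nothing more exotic than a
single getParameter() call - roughly the sketch below.  The canvas setup,
the context-creation string and the variable names are mine, not from any
spec, and the final threshold test is exactly the naive one that goes wrong
on type (3) cards:

  // Sketch: the naive capability check.  Names like "useVertexTextures"
  // are mine, not from the WebGL spec.
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl") as WebGLRenderingContext;

  // Type (1) cards report 0; types (2) and (3) both report >0 - the number
  // alone cannot tell a fast hardware path from a 1Hz software fallback.
  const maxVertexTextureUnits: number =
      gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);

  const useVertexTextures = maxVertexTextureUnits > 0;   // wrong on type (3)!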


The vendors do this nasty thing in order to meet various Microsoft
certification levels for Vista/Windows-7 and for DX9/10/11 qualification. 
Those certifications don't ask about performance - only whether some
feature is implemented.  If the certification demands vertex textures -
then it's in the manufacturer's interests to make vertex textures work (no
matter how crappily) or they won't sell a single GPU to DELL/Sony/whoever.

Hence, appealing to the card/driver vendors to please stop reporting >0
vertex textures when the hardware doesn't support them is going to fall on
deaf ears...they aren't going to help out WebGL by fixing their drivers
because doing so costs them the ability to sell their chips on Windows-7
machines.

But on type (3) hardware, I desperately need to figure this out so I can
avoid using vertex textures and get 20Hz performance instead of 1Hz...and
I can only do that by querying the VENDOR/RENDERER strings.
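
To be clear about what I'd actually do with those strings if we had them -
nothing fancier than the sketch below, assuming getParameter(gl.RENDERER)
returned the real driver string (which is precisely what's being debated
here).  The substrings in the list are placeholders for illustration - a
real list would be built up from bug reports - and the function name is
just one I made up:

  // Sketch: using RENDERER to spot the type (3) cards.  The entries in
  // this list are illustrative placeholders, not a real blacklist.
  const SOFTWARE_VTF_RENDERERS: string[] = ["<renderer A>", "<renderer B>"];

  function vertexTexturesAreFastEnough(gl: WebGLRenderingContext): boolean {
    if (gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) === 0) {
      return false;                          // type (1): no support at all
    }
    // Type (3): claims support, but the driver runs the vertex shader on the CPU.
    const renderer = String(gl.getParameter(gl.RENDERER));
    return !SOFTWARE_VTF_RENDERERS.some(name => renderer.indexOf(name) !== -1);
  }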

You might argue that WebGL could do the detection for me - but at what
performance threshold should a claimed-to-be-working feature be turned off
by WebGL?   It can't know because it depends on the usage pattern - which
is something that only the application can decide.  If your application
was using vertex textures in very simple shaders - then drivers that fall
back on CPU-side shaders might still be faster than doing the work in
JavaScript - in those applications, you wouldn't want WebGL stopping you
from using vertex textures even on type(3) hardware.


As for the security/anonymity issue:  I bet that if I wrote code to read
back every glGet result and built up a database of the results - and wrote
code to time things like vertex texture performance - then I could
identify most hardware fairly accurately.  Perhaps not the exact card and
exact driver version - but for general "anonymous user identification"
purposes, I'd get maybe 80% of what reading VENDOR and RENDERER would give
me.
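
That kind of fingerprint isn't hard to sketch, either: dump every
getParameter() limit you can, then time a draw that exercises the feature.
Something like the following - where drawFrameWithVertexTexture() is a
stand-in for whatever probe rendering the fingerprinting code would
actually do:

  // Sketch of the timing half of that fingerprint.
  // drawFrameWithVertexTexture() is a hypothetical probe that renders one
  // frame through a shader doing a vertex texture fetch; gl.finish() makes
  // the GPU complete the work before we read the clock.
  function timeVertexTextureProbe(gl: WebGLRenderingContext,
                                  drawFrameWithVertexTexture: () => void,
                                  frames: number = 10): number {
    drawFrameWithVertexTexture();            // warm up: shader compile, uploads
    gl.finish();
    const start = Date.now();
    for (let i = 0; i < frames; i++) {
      drawFrameWithVertexTexture();
    }
    gl.finish();
    return (Date.now() - start) / frames;    // milliseconds per frame
  }

  // Hundreds of milliseconds per frame almost certainly means the vertex
  // shader is running on the CPU - i.e. a type (3) card.

Pair that timing with the table of getParameter() limits and you have a
fairly distinctive signature even without VENDOR/RENDERER.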

So is it really worth inconveniencing the WebGL application writer for
such a small win?

Right now - my only way to react to the vertex texture situation is to
assume that WebGL doesn't support vertex textures at all - taking the 10Hz
framerate hit on cards that really do support them in order to avoid
making my game completely unusable on machines that merely claim to
support them and then do some god-awful software fallback.

> Indeed, if app developers really need (that remains to be seen) a way to
> identify slow graphics cards, we could add a getParameter() pname allowing
> to get information.

You miss the point.  The performance of a given card/driver/OS combination
isn't a simple one-dimensional thing.

In the case of the vertex texture issue - some cards are really quite fast
at running everything else - but orders of magnitude slower at doing
vertex textures.  Other cards are slower in other regards - but excel at
doing vertex textures.  No single parameter can tell you which.

> We shouldn't try to detail exactly for each feature if
> it's fast or slow, because 1) that would be unmanageable for us and 2)
> that would end up giving away lots of bits of user information.

But that's precisely what the application NEEDS.  Merely knowing some
overall benchmark for the card tells me nothing whatever of value...it
might tell me that the end user's frame rate is going to be kinda poor -
but it won't tell me whether I can dramatically improve it by changing one
very specific aspect of my code.

If an overall performance metric is the only thing that's on offer - don't
bother.  I wouldn't use it if it existed.

  -- Steve


