
Re: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings



----- Original Message -----
> >> >> For application authors, there is immense value to be had from
> >> >> being able to determine which card and drivers the user has -
> >> >> both at run time (so the application can work around bugs)
> >> >
> >> > It seems to me that we can realistically aim for good enough
> >> > WebGL implementations that this shouldn't be needed. I would be
> >> > very interested in the list of graphics-card-specific things that
> >> > you need to do on Minefield; I would try to handle these as bugs
> >> > in our implementation.
> 
> In the case of the example I gave - I don't see how you CAN fix this
> in WebGL. There are many areas of OpenGL where this problem arises -
> I'm going to continue to use the vertex texture example - but there
> are MANY others.
> 
> In this example, the underlying driver is queried to see how many
> vertex textures it supports. There are three kinds of card/driver out
> there:
> 
> Type (1): Says "0" because its hardware can't support this feature.
> Fair enough - it doesn't support vertex textures, so I have to work
> around that by doing something different (no object instancing,
> skeletal meshes limited to 30 bones, particle systems computed on the
> CPU side) - and instead of getting 30Hz frame rates, I get only
> 20Hz...and my animation isn't quite so slick...but the game is very
> playable.
> 
> Type (2): Says ">0" because it has hardware support - GREAT! I can use
> vertex textures and get great quality/performance improvements. I get
> a 30Hz frame rate and everything looks good.
> 
> Type (3): Says ">0"...because its hardware can't support the feature -
> but the driver can drop back to running the entire vertex shader in
> software on the CPU and thereby provide vertex shader textures. If I
> use a vertex texture on Type (3) hardware, my frame rate drops from
> ~20/30Hz to worse than 1Hz!! The game is utterly unplayable. Older
> nVidia cards do this under Vista...I'm sure there are others.
> 
> If I can tell that I have a Type (3) card - then I can use the
> fallbacks I have for Type (1) cards and the game is once again
> playable.

OK, I understand your concern, but there are ways that we can work around that.

First, we can change our implementation of getParameter(MAX_VERTEX_TEXTURE_IMAGE_UNITS) to return 0 if we know that the hardware won't accelerate that feature.
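
For the application, nothing would change in how the query is written. Here is a rough sketch of the kind of branching an app already does (the context-creation call and the comments about fallback paths are just my assumptions about a typical app):

    var gl = canvas.getContext("experimental-webgl");
    var maxVertexTextures = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);
    if (maxVertexTextures > 0) {
        // Hardware-accelerated vertex textures: use them for instancing,
        // GPU skinning, and so on.
    } else {
        // Fall back as for type (1) cards; with the change proposed above,
        // type (3) hardware would report 0 and land here too.
    }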

Alternatively, we can keep the current default behavior but add a new hint, webgl.hint(DONT_WANT_SOFTWARE_FALLBACKS), allowing you to switch to the above-described behavior.
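
Purely as a sketch - the enum name comes from my proposal above, and neither it nor this use of hint() exists today, so the exact signature is not settled:

    // Hypothetical: ask the implementation to hide software-emulated
    // features, following the usual hint(target, mode) pattern.
    gl.hint(gl.DONT_WANT_SOFTWARE_FALLBACKS, gl.FASTEST);
    // After this, getParameter(MAX_VERTEX_TEXTURE_IMAGE_UNITS) would
    // return 0 on cards that only provide vertex textures through a
    // software fallback.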

So this particular case seems like something that we can rather easily handle in the WebGL implementation.

> The vendors do this nasty thing in order to meet various Microsoft
> certification levels for Vista/Windows-7 and for DX9/10/11
> qualification. Those certifications don't ask about performance - only
> whether some feature is implemented. If the certification demands
> vertex textures - then it's in the manufacturer's interests to make
> vertex textures work (no matter how crappily) or they won't sell a
> single GPU to DELL/Sony/whoever.

OK, so the system OpenGL implementation may basically be lying to us in certain cases; we should correct that at the level of the WebGL implementation - see what I propose above (returning 0 on certain hardware).
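
To illustrate what "correcting it at the WebGL level" could look like (JavaScript only for illustration - a real implementation would do this inside the browser, and the renderer patterns below are made-up placeholders, not an actual blacklist):

    // Hypothetical table of renderers known to expose vertex textures
    // only through a software fallback (example entries, not real data).
    var softwareVertexTextureRenderers = [
        /Example GPU 1234/,
        /Some Old Integrated Chipset/
    ];

    function correctedMaxVertexTextureImageUnits(gl, rendererString) {
        var reported = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);
        for (var i = 0; i < softwareVertexTextureRenderers.length; i++) {
            if (softwareVertexTextureRenderers[i].test(rendererString)) {
                return 0;   // report the feature as absent on these cards
            }
        }
        return reported;
    }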

> You might argue that WebGL could do the detection for me - but at what
> performance threshold should a claimed-to-be-working feature be turned
> off by WebGL?

When we're dealing with software fallbacks, we have an objective criterion for saying that the feature is not usefully supported.

When we're dealing with underpowered hardware, things are more subtle, and the issue is not limited to a few features: low-end hardware may already be insufficient for a given app at the level of fill rate, for example, so nothing but practical benchmarking will correctly handle that in all cases.
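
By "practical benchmarking" I mean something the application can already do today, roughly like this (a rough sketch; the frame count and the 16ms budget are arbitrary numbers I picked, and renderFrame is assumed to end with gl.finish() so that GPU time is included):

    function benchmarkFeature(renderFrame, onResult) {
        var totalFrames = 30;
        var framesLeft = totalFrames;
        var start = Date.now();
        function tick() {
            renderFrame();               // draw one representative frame
            if (--framesLeft > 0) {
                setTimeout(tick, 0);
            } else {
                var msPerFrame = (Date.now() - start) / totalFrames;
                onResult(msPerFrame < 16);   // true = fits a ~60Hz budget
            }
        }
        tick();
    }

An app could run this once with vertex textures enabled and once with its CPU-side fallback, then pick whichever path actually performs better on the user's machine.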

> As for the security/anonymity issue: I bet that if I wrote code to
> read back every glGet result and built up a database of the results -
> and wrote code to time things like vertex texture performance - then I
> could identify most hardware fairly accurately.

Indeed, see my previous email: I am worried about this too and would potentially go as far as restricting the possible values of certain getParameter pnames if needed.

> > Indeed, if app developers really need (that remains to be seen) a
> > way to identify slow graphics cards, we could add a getParameter()
> > pname allowing them to get that information.
> 
> You miss the point. The performance of a given card/driver/OS
> combination isn't a simple one-dimensional thing.

I realize that, but was hoping that a well-chosen one-dimensional projection of this multi-dimensional situation could be good enough.

I realize that it wasn't good enough for your use case with MAX_VERTEX_TEXTURE_IMAGE_UNITS, but given that that special case is IMO better handled in the WebGL implementation, I still hope that my proposal can be a good one.
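
Just to make that proposal concrete, I'm thinking of something along these lines - the pname, the scale, and useLowDetailAssets() are all hypothetical:

    // Hypothetical pname - nothing like this exists in the spec.
    var perfClass = gl.getParameter(gl.APPROXIMATE_PERFORMANCE_CLASS);
    // e.g. 0 = software rendering, 1 = low-end, 2 = mid-range, 3 = high-end;
    // deliberately coarse, to limit how many bits of user info it leaks.
    if (perfClass <= 1) {
        useLowDetailAssets();   // hypothetical application-side fallback
    }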

> 
> In the case of the vertex texture issue - some cards are really quite
> fast at running everything else - but orders of magnitude slower at
> doing vertex textures. Other cards are slower in other regards - but
> excel at doing vertex textures. No single parameter can tell you
> which.

Right, I understand that now (didn't before), see above.

> 
> > We shouldn't try to detail exactly for each feature if it's fast or
> > slow, because 1) that would be unmanageable for us and 2) that would
> > end up giving away lots of bits of user information.
> 
> But that's precisely what the application NEEDS.

I am trying to exhaust all other possibilities before I accept that the application really needs that ;-)

Cheers,
Benoit