
Re: [Public WebGL] Ambiguity and Non-determinism in the WebGL Spec



> I think in most cases the spec needs to be very rigorous in its compliance
> requirements. But in cases where performance is at stake I think it would
> be the bigger mistake to require exact compliance.
>
> -----
> ~Chris
> cmarrin@apple.com

FWIW, in the early days of OpenGL on the PC desktop (back when the Voodoo-1
was a good graphics card!) many people ignored the bit in the OpenGL spec
that says the contents of the back buffer are undefined following a buffer
swap.  Since both of the mainstream graphics cards of the day reliably
retained the previous-but-one frame of data, they'd "get away with it".
The result was exactly the mess predicted here: when new hardware came out,
that behavior stopped working...because "undefined" doesn't mean "well, it
works OK for me so it must be alright".

However, programmers got over that very quickly once there was some
diversity in the marketplace - and "undefined" leaves the driver/browser
author free to do the optimal thing, which is obviously where the best
performance path lies.

So we're walking the delicate path between breaking apps on early
cellphone implementations and slowing everyone down for the sake of
everything running the same everywhere.

I'd have to come down on the side of performance - and to hell with the
people who didn't read the spec and didn't test their app on a broad
enough range of devices.  No matter what we do, there WILL be broken apps
when we transition to cellphones - if for no other reason than that they
are going to be pitifully slow, and if you don't slim down your meshes and
hold back on the glitzy post-effects, you're toast.

You can't write a program that you KNOW in advance will "just work" on a
cellphone.  For example, you can read the maximum number of instructions
that the frag shader is allowed to contain...but if you're writing in
GLSL, you have NO WAY to know how many instructions your code will produce
when it's compiled on some device you've never tested on.  So what I do is
make a "high end" shader and a drastically simplified one (no normal
mapping, for example) - and if the high-end one doesn't compile/link, I
retry with the simpler one.  But what if my "simple" isn't simple enough?

So we WILL have lots of applications that either fail miserably or are too
slow to be usable when people start aggressively using WebGL on phones.

So I don't see a heck of a lot of point in doing something potentially
costly to performance for the sake of determinism - when poorly tested
apps are going to fail in large numbers anyway.  It's no different than
when someone writes some HTML that only works on Internet Explorer.


However, I think it's a bad idea to rely on the state of the buffer once
you've handed it over to the compositor - and a mandatory clear of the
screen isn't really much of an obstacle to a well-written app.

My only concern is the case where the system clears to some default color
and I then have to clear the screen again to get the color I want.  That's
nasty and wasteful-by-design.  If we could agree that the system forcibly
clears the screen to the application-specified color/depth/stencil settings
while ignoring the colorMask/depthMask/scissor settings, then there
shouldn't be any great problems with that.
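
Concretely, that forced clear would be equivalent to the system doing
something like this on the app's behalf - a sketch of the proposed
semantics, not anything in any spec today:

    // Hypothetical: the system-side clear uses the app's current
    // clearColor/clearDepth/clearStencil values, but ignores the app's
    // colorMask/depthMask/stencilMask and scissor state.
    gl.disable(gl.SCISSOR_TEST);
    gl.colorMask(true, true, true, true);
    gl.depthMask(true);
    gl.stencilMask(0xFFFFFFFF);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT | gl.STENCIL_BUFFER_BIT);
    // ...then restore the masks and scissor so the app never sees the change.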

About the best thing this group can do to help this situation is to push
out WebGL implementations to cellphones as fast as possible so everyone
can test their apps on these low end systems before the spec "goes gold".

  -- Steve



