Re: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed

"nofallback" is IMHO both too specific (whether it's a fallback shouldn't matter by itself) and not specific enough (why exactly is a fallback bad?).

Let's summarize the main open questions so far:

A) what's the right API to expose this?
    - option 1) getDeviceAdvisories
    - option 2) new context creation flags that can cause creation to fail if a condition is not met e.g. "allowSoftwareRendering"
    - none of the above?

My main reason for preferring option 1), getDeviceAdvisories, is that I prefer to keep separate things separate. I prefer to have a pure data getter, getDeviceAdvisories, let the application run its own logic on that data, and then let it create a WebGL context if it wants to. Option 2) entangles these two separate things. Concrete example: if an application wants to create a WebGL context when either of two conditions is met, option 2) requires two separate getContext calls; blacklists have to be processed twice, etc.
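The two-condition case can be sketched as follows. Note that getDeviceAdvisories and the advisory names ("slow", "softwareRenderer") are only proposals under discussion in this thread, not a shipped API, and the policy function is purely illustrative:

```javascript
// Hypothetical sketch: getDeviceAdvisories() and the advisory names are
// proposals from this thread, not a shipped API.
// With option 1), the accept/reject policy is a pure function of the advisory
// data, so getContext is called at most once and blacklists are consulted once.
function wantsWebGL(advisories) {
  // Illustrative policy: accept if the renderer is not advertised as slow,
  // OR if it is a hardware renderer (two conditions, one decision).
  return !advisories.slow || !advisories.softwareRenderer;
}

// var advisories = /* ... */.getDeviceAdvisories();  // proposed getter
// if (wantsWebGL(advisories))
//   var gl = canvas.getContext("webgl");             // single getContext call
```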
Here is a concrete example of how option 2) doesn't allow things that apps will want to do.

Suppose that a browser always honors the default {antialias:true}, for example by implementing FXAA for renderers that don't support MSAA.

Suppose that an application wants antialiasing, but not if the renderer is advertised as 'slow'.

With option 1), the application can do:

  var gl = canvas.getContext("webgl", {async:true});
  if (gl.getDeviceAdvisories().slow)
    gl = canvas.getContext("webgl", {async:true, antialias:false});

Thus the whole negotiation can happen without waiting for any actual OpenGL context to be created.

But with option 2), there is no way to check whether a context is 'slow'. Suppose we fix option 2) by adding a context flag, 'slow', allowing the application to determine whether the context it got is slow. The negotiation would then still require waiting on OpenGL context creation:

  var gl = canvas.getContext("webgl");
  var flags = gl.getContextAttributes();
  if (flags.slow && flags.antialias)
    gl = canvas.getContext("webgl", {allowSlow:false});

On a stratospheric level, option 1) is better because it keeps separate things separate (getting advisories from a blacklist-like database vs. creating OpenGL contexts).


B) What's the right "slow/software/fallback" concept to expose as an advisory / context creation requirement?
    - option 1) "slow" / "allowSlow"
    - option 2) "softwareRenderer" / "allowSoftwareRenderer"
    - option 3) "fallback" / "allowFallback" ?


AFAIK, SwiftShader is only used as a fallback if the user's driver is blacklisted.  So how about a context creation flag {"nofallback": true}?  This would tell the browser not to use any fallback WebGL implementation that it would otherwise use when the primary one is blacklisted.  The intent is SwiftShader, but it avoids mentioning software rendering.
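A sketch of how an engine might use such a flag, assuming the proposed {nofallback:true} attribute and a hypothetical engine hook (neither exists today):

```javascript
// Hypothetical sketch: the "nofallback" attribute is only a proposal in this
// thread. The idea: if the primary implementation is blacklisted and only a
// fallback (e.g. SwiftShader) remains, getContext returns null, and the
// engine picks its Canvas 2D renderer instead.
function chooseRenderer(gl) {
  return gl ? "webgl" : "canvas2d";
}

// var gl = canvas.getContext("webgl", { nofallback: true });   // proposed flag
// if (chooseRenderer(gl) === "canvas2d")
//   startCanvas2DEngine(canvas.getContext("2d"));              // hypothetical hook
```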


On 3 May 2012 13:13, Benoit Jacob <bjacob@mozilla.com> wrote:

----- Original Message -----
> If the main use case is to allow apps that can be implemented with
> canvas2d to use that version when WebGL would otherwise run through
> software rendering (which would typically be slower than on most
> sensible configs, eg. because of fragment shaders processing), could
> we 'simply' add a WebGL context attribute such as :
> allowSoftwareRendering (default: true)

There are 2 parts in your proposal here:
 1) replace "slow" by "SoftwareRendering"
 2) make it part of context creation flags instead of a new getter

Regarding 1), I wanted to avoid mentioning "software rendering" in the spec because it's tricky to define: all software runs on hardware, so in a sense everything is hardware-accelerated. The current CPU/GPU split might not be there forever, so the concept of a "GPU" might not be perennial either. That's why I wanted to avoid entering into these details and just said "slow".

Regarding 2), I was hesitating about that, and I don't have a firm opinion either way. But there are going to be other flags, so one should think of an API that lets the application decide whether to proceed with WebGL based on multiple factors. Such an API seems harder to design properly as context creation flags, so it seems simpler to add getDeviceAdvisories and let the application implement its own logic.

> This would not solve more complex scenarios (eg. VTF slow) but those
> would anyways require WebGL support (any VTF-using app probably
> cannot
> be easily implemented with canvas2d...), so benchmarking for this use
> case should be much less of a problem.

That's not true! Google MapsGL uses VTF.

The key is that an application may want to do fancy things with WebGL while having a much simpler non-WebGL fallback.
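For VTF specifically, an application can already query whether vertex texture fetch is supported at all via a real WebGL parameter, MAX_VERTEX_TEXTURE_IMAGE_UNITS; what it cannot query today is whether VTF is fast, which is exactly the gap the "slow" advisory would fill. A minimal sketch:

```javascript
// Real WebGL query: a renderer supports vertex texture fetch (VTF) iff
// MAX_VERTEX_TEXTURE_IMAGE_UNITS is greater than zero. This tells the app
// whether VTF exists, but says nothing about whether it is fast.
function supportsVTF(gl) {
  return gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) > 0;
}

// Usage in a browser: supportsVTF(canvas.getContext("webgl"))
```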

> Thoughts?
> On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen <ashley@scirra.com>
> wrote:
> > On 2 May 2012 21:47, Gregg Tavares (勤) <gman@google.com> wrote:
> >>
> >>
> >>
> >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen <ashley@scirra.com>
> >> wrote:
> >>>
> >>> I think this is a great idea and I'm desperate for something like
> >>> this.
> >>>  Our engine implements both a WebGL and Canvas 2D renderer, and
> >>>  currently
> >>> the Canvas 2D renderer is never used in Chrome 18 due to
> >>> Swiftshader.  I am
> >>> keen to fall back to Canvas 2D instead of using Swiftshader but
> >>> there is no
> >>> way to do that.
> >>
> >>
> >> That's a little bit of an exaggeration. You can certainly choose
> >> Canvas 2D
> >> at any time. You run a small benchmark and switch.
> >
> >
> > We don't make any particular game, we just make an engine.  Are you
> > sure
> > it's possible to make a benchmark script that is 100% accurate for
> > all kinds
> > of games with their varying performance profiles, and does not
> > delay the
> > start of the game by more than a second? How do you know if your
> > benchmark
> > is working properly?  What if one renderer runs faster in some
> > places and
> > slower in others, and the other renderer runs the opposite (faster
> > where the
> > other was slow, slower where the other was faster)?  Which renderer
> > should
> > be picked then?  I'd rather just say: use the GPU.
> >
> > Ashley