Re: [Public WebGL] Shader validation and limitations
2010/6/18 Oliver Hunt <email@example.com>:
> On Jun 17, 2010, at 9:49 PM, Cedric Vivier wrote:
>> On Fri, Jun 18, 2010 at 08:02, Chris Marrin <firstname.lastname@example.org> wrote:
>>> I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases.
>> It seems Ken and/or others investigated this issue in depth months
>> ago, is there any document available demonstrating all shader
>> constructs - besides loops - found to possibly take an extremely long
>> time to run ?
> I believe the trick was to make a very expensive shader, and then throw thousands of large polygons at it.
Or you can just throw a model with a million screen-sized triangles at
a trivial shader.
The total shading cost (off the top of my head) follows
O(geometry * (vertex_shader + fragments_per_geometry * fragment_shader))
Get geometry and fragments_per_geometry high enough (say, 1M each) and
even a discard shader takes a trillion ops to finish.
If the driver or hardware doesn't allow canceling long-running
shaders, defending against this is difficult. You'd have to estimate
the maximum runtime for a given shader and geometry at
drawElements/drawArrays time and, if the estimate is too high, throw
an exception.
I guess you could mount a similar attack by creating an HTML layout
with a few million glyphs on screen and changing the CSS font
settings... Or maybe even just by allocating enough memory to make the
computer swap or OOM-kill processes. But you don't see many sites like
that because DoSsing your visitors isn't good for traffic.