Re: [Public WebGL] Flat shading without vertex duplication?
- To: Shy Shalom <firstname.lastname@example.org>
- Subject: Re: [Public WebGL] Flat shading without vertex duplication?
- From: Steve Baker <email@example.com>
- Date: Mon, 31 Jan 2011 21:53:22 -0600
- Cc: firstname.lastname@example.org
- In-reply-to: <AANLkTimVjMS3eOHwHd0w7DxLLR9mgnp-LyWwkOV4jf+L@mail.gmail.com>
- List-id: Public WebGL Mailing List <public_webgl.khronos.org>
- References: <AANLkTimVjMS3eOHwHd0w7DxLLR9mgnp-LyWwkOV4jf+L@mail.gmail.com>
- Sender: email@example.com
- User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:126.96.36.199) Gecko/20100520 SUSE/3.0.5 Thunderbird/3.0.5
On 01/31/2011 08:03 AM, Shy Shalom wrote:
> Hello list!
> This is probably not the best place for this question so I apologize
> in advance.
> Is there a way in WebGL (and OpenGL ES) to do Flat shading without
> sending all vertices for every triangle?
> From what I managed to find, people do this either with dFdx,dFdy
> which are not available or with "flat" varying which is only a GLSL
> 1.3 thing.
> I have a rather big model I would really prefer to avoid duplicating
> every vertex for every triangle.
It's definitely a hard thing to do. Each vertex shader invocation is
independent of the others - it can't know which triangle it's
supplying data for. The fragment shader only sees a blend of the data
produced by the vertex shader invocations that fed it...so it's really
tough to imagine where you'd put the per-face surface normal.
The dFdx/dFdy approach can be used to figure out the rate of change of
3D position in the fragment shader - and that, in turn, lets you
compute a face normal...but WebGL doesn't support the dFdx/dFdy
functions...so that's not going to work.
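For what it's worth, here is a sketch of the derivative trick on a platform
where the derivatives are exposed (e.g. via the OES_standard_derivatives
extension, where an implementation supports it); vWorldPos is an assumed
varying, not anything from the original post:

```glsl
// Fragment shader sketch: recover the flat face normal from the
// screen-space derivatives of the interpolated world-space position.
// Only works where GL_OES_standard_derivatives is available.
#extension GL_OES_standard_derivatives : enable
precision mediump float;

varying vec3 vWorldPos;   // world-space position passed from the vertex shader

void main() {
  // dFdx/dFdy give the rate of change of vWorldPos across the triangle;
  // their cross product is constant over the face - i.e. the flat normal.
  vec3 normal = normalize(cross(dFdx(vWorldPos), dFdy(vWorldPos)));
  gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);  // visualize the normal
}
```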
I'm not familiar with this "flat varying" thing you're talking
about...but I'm pretty sure WebGL doesn't have it.
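For reference, the "flat" qualifier being referred to looks like this in
desktop GLSL 1.30 and later - it simply disables interpolation, so each
fragment gets the value from the provoking vertex unchanged. It is not part
of WebGL's GLSL ES 1.00:

```glsl
// Desktop GLSL 1.30+ only - not available in WebGL's GLSL ES 1.00.
// Vertex shader:
flat out vec3 faceNormal;   // no interpolation across the triangle

// Fragment shader:
flat in vec3 faceNormal;    // receives the provoking vertex's value
```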
One thing that WOULD work would be to do a multipass rendering...first
render to an FBO and store the Z values for the pixel into the frame
buffer. Then on the second pass, you can pass that in as a texture to
the fragment shader - which can read a couple of Z values adjacent to
the pixel it's rendering and thereby deduce the normal direction.
Sadly, it fails on the pixels at the edges of triangles - so you tend to
get edges that are kinda rounded off. Also, there are issues at the
profile edges of objects that you have to kinda kludge around. However,
this "normal recovery" approach is used in some games that do
post-effect lighting - and with great care, it can be made to work.
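A sketch of that second pass, assuming the first pass wrote depth into a
texture (the uniform names uDepth, uInvProj, uTexel and the unprojection
helper are my own illustration, not anything from this thread):

```glsl
// Second-pass fragment shader sketch: recover a normal from a depth
// texture. Assumes depth in [0,1] is stored in the R channel and that
// uInvProj is the inverse of the projection matrix used in pass one.
precision mediump float;

uniform sampler2D uDepth;   // depth from the first pass
uniform mat4 uInvProj;      // inverse projection matrix
uniform vec2 uTexel;        // 1.0 / depth texture size
varying vec2 vUV;

// Unproject a texel back to view space.
vec3 viewPosAt(vec2 uv) {
  float d = texture2D(uDepth, uv).r;
  vec4 ndc = vec4(uv * 2.0 - 1.0, d * 2.0 - 1.0, 1.0);
  vec4 v = uInvProj * ndc;
  return v.xyz / v.w;
}

void main() {
  // Finite differences with adjacent texels stand in for dFdx/dFdy.
  // This is what breaks at triangle and silhouette edges, where the
  // neighbouring texel belongs to a different surface.
  vec3 p  = viewPosAt(vUV);
  vec3 px = viewPosAt(vUV + vec2(uTexel.x, 0.0));
  vec3 py = viewPosAt(vUV + vec2(0.0, uTexel.y));
  vec3 normal = normalize(cross(px - p, py - p));
  gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}
```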
To be honest, rendering your gigantic object twice is probably more
expensive than replicating vertices...but a lot depends on how many
times each vertex is replicated.
Another approach you might want to consider is "Normal mapping". The
idea is to store the surface normals for the object in a texture map.
In the fragment shader, you can use the texture RGB as the normal XYZ
(with appropriate scale & offset) and transform it into screen space in
the fragment shader. Again, it's not a perfect technique - you'll get
"rounded-off" corners and the like. But if it's storage you're
concerned about, it won't save you a thing - there are a lot more pixels
than vertices! However, if you're more worried about data transmission
between CPU and GPU - or vertex shader performance, it's a good trick.
I like it for some sorts of objects where you need a crazy mix of flat
and smooth parts on an object.
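The "appropriate scale & offset" part of that is just undoing the [0,1]
encoding of the texture channels. A minimal fragment shader sketch
(uNormalMap, uLightDir and the varyings are illustrative names):

```glsl
// Normal mapping sketch: the per-pixel normal comes from a texture
// instead of being interpolated from vertex attributes.
precision mediump float;

uniform sampler2D uNormalMap;
uniform vec3 uLightDir;   // light direction in the same space as the map
varying vec2 vUV;

void main() {
  // Map RGB in [0,1] back to XYZ in [-1,1] - the "scale & offset".
  vec3 n = normalize(texture2D(uNormalMap, vUV).rgb * 2.0 - 1.0);
  float diffuse = max(dot(n, normalize(uLightDir)), 0.0);
  gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```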
Also, you can do things like using a higher vertex count model to
compute the normal map than you render in realtime - and that can be a
big win for visual quality.