
Re: [Public WebGL] Flat shading without vertex duplication?



The answer is that it can do it - you just have to replicate the vertex
data.  But since we have way more memory, bandwidth and GPU performance
than we had in those early days, we're much MORE able to do flat-shaded
rendering than we were back then.
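
Here's a minimal sketch of what that looks like in GLSL, once the mesh
has been de-indexed so that each triangle's three vertices carry a copy
of the face normal (attribute and uniform names are just illustrative):

  // Vertex shader: each of a triangle's three duplicated vertices
  // carries the same face normal.
  attribute vec3 aPosition;
  attribute vec3 aFaceNormal;    // identical for all 3 vertices of a face
  uniform mat4 uModelViewProjection;
  uniform mat3 uNormalMatrix;
  varying vec3 vNormal;
  void main() {
    vNormal = uNormalMatrix * aFaceNormal;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
  }

  // Fragment shader: interpolating three identical values is a no-op,
  // so every fragment of the triangle gets the same (flat) normal.
  precision mediump float;
  varying vec3 vNormal;
  uniform vec3 uLightDir;        // normalized, in eye space
  void main() {
    float diffuse = max(dot(normalize(vNormal), uLightDir), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
  }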

What you're really saying is that the performance of flat-shaded
rendering hasn't improved as fast as that of smooth-shaded
rendering...which is certainly true.  But that accurately reflects the
indisputable fact that vastly more people care about smooth shading
than about flat shading.

In a market-driven economy, he who buys the most hardware gets to
determine what runs fastest!

Since the overwhelming majority of decent graphics cards are bought by
gamers and not by people solving CAD problems, development is
inevitably going to be heavily biased towards Phong-shaded, beautifully
lit, textured, bump-mapped triangles - where vertex sharing is a
non-issue.  That we still have line drawing at all is somewhat
miraculous.

Back in the 1980s, we were trying to run games (well, simulation, in my
case) on machines that were designed to run CAD...and it was tough.  We
had hardware that was burdened by all of these weird special-case needs
(non-Z-buffered antialiased line drawing and support for dotted and
dashed lines, for example) - which soaked up hardware that could have
been used to do nicer lighting or texturing.

The tables have turned!  HA!

  -- Steve

> Does anyone here find it slightly ironic that WebGL can't easily do
> something that the earlier graphics cards/APIs from SGI did (circa 1990)
> without complex and likely extremely inefficient operations? I'm not
> saying it is a deficiency, just a humorous observation...though come to
> think of it, it does sort of limit WebGL's adoption as a 3D modeling
> package.
>
> I can't think of a simple solution to your problem without duplicating
> vertex data. You really need a triangle-level shader to output a single
> color attribute for each vertex to implement flat shading in a sane/easy
> way. :\
>
>
> On Mon, Jan 31, 2011 at 9:53 PM, Steve Baker <steve@sjbaker.org> wrote:
>
>>
>> On 01/31/2011 08:03 AM, Shy Shalom wrote:
>> > Hello list!
>> > This is probably not the best place for this question so I apologize
>> > in advance.
>> >
>> > Is there a way in WebGL (and OpenGL ES) to do flat shading without
>> > sending all vertices for every triangle?
>> > From what I managed to find, people do this either with dFdx/dFdy,
>> > which are not available, or with the "flat" varying qualifier, which
>> > is only a GLSL 1.3 thing.
>> > I have a rather big model, and I would really prefer to avoid
>> > duplicating every vertex for every triangle.
>> >
>> It's definitely a hard thing to do.  Each vertex shader run is
>> independent of the others - and it can't know which triangle it's
>> supplying data for.  The pixel shader only sees a blend of the data
>> produced by the vertex shader runs that created it...so it's really
>> tough to imagine where you'd put the surface normal information.
>>
>> The dFdx/dFdy approach can be used to figure out the rate of change
>> of 3D position in the fragment shader - and that, in turn, can allow
>> you to compute a normal...but WebGL doesn't support the dFdx/dFdy
>> functions...so that's not going to work.
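>>
>> (For the record, here's roughly what the trick looks like in a
>> fragment shader where screen-space derivatives ARE available - e.g.
>> via the OES_standard_derivatives extension; the varying name is
>> illustrative:)
>>
>>   #extension GL_OES_standard_derivatives : enable
>>   precision mediump float;
>>   varying vec3 vPosition;   // eye-space position from the vertex shader
>>   void main() {
>>     // The screen-space derivatives of position span the triangle's
>>     // plane, so their cross product is the (flat) face normal.  The
>>     // sign depends on winding order.
>>     vec3 n = normalize(cross(dFdx(vPosition), dFdy(vPosition)));
>>     gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);  // visualize the normal
>>   }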
>>
>> I'm not familiar with this "flat varying" thing you're talking
>> about...but I'm pretty sure WebGL doesn't have it.
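>>
>> (For reference, in desktop GLSL 1.30 the qualifier looks like this -
>> variable names are illustrative.  With a "flat" varying, no
>> interpolation happens: the provoking vertex's value is used across the
>> whole triangle:)
>>
>>   // Vertex shader (GLSL 1.30+)
>>   #version 130
>>   in vec3 aPosition;
>>   in vec3 aNormal;
>>   uniform mat4 uMVP;
>>   flat out vec3 vNormal;   // "flat": no interpolation across the face
>>   void main() {
>>     vNormal = aNormal;
>>     gl_Position = uMVP * vec4(aPosition, 1.0);
>>   }
>>
>>   // Fragment shader (GLSL 1.30+)
>>   #version 130
>>   flat in vec3 vNormal;
>>   out vec4 fragColor;
>>   void main() {
>>     fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
>>   }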
>>
>> One thing that WOULD work would be to do multipass rendering...first
>> render to an FBO, storing the Z value for each pixel into the frame
>> buffer.   Then, on the second pass, you can pass that in as a texture
>> to the fragment shader - which can read a couple of Z values adjacent
>> to the pixel it's rendering and thereby deduce the normal direction.
>> Sadly, it fails on the pixels at the edges of triangles - so you tend
>> to get edges that are kinda rounded off.  Also, there are issues at
>> the silhouette edges of objects that you have to kinda kludge around.
>> However, this "normal recovery" approach is used in some games that do
>> post-effect lighting - and with great care, it can be made to work.
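>>
>> (A rough sketch of that second pass, assuming positive eye-space depth
>> was written to a texture in the first pass - all of the names and the
>> exact unprojection math here are illustrative, since they depend on
>> how Z was packed:)
>>
>>   precision mediump float;
>>   uniform sampler2D uDepthTex;   // eye-space depth from the first pass
>>   uniform vec2 uTexelSize;       // 1.0 / framebuffer resolution
>>   uniform vec2 uInvFocal;        // NDC -> eye-space XY scale factors
>>   varying vec2 vUV;
>>
>>   // Reconstruct an eye-space position from the stored depth.
>>   vec3 eyePos(vec2 uv) {
>>     float z = texture2D(uDepthTex, uv).r;
>>     vec2 ndc = uv * 2.0 - 1.0;
>>     return vec3(ndc * uInvFocal * z, -z);
>>   }
>>
>>   void main() {
>>     vec3 p  = eyePos(vUV);
>>     vec3 dx = eyePos(vUV + vec2(uTexelSize.x, 0.0)) - p;
>>     vec3 dy = eyePos(vUV + vec2(0.0, uTexelSize.y)) - p;
>>     // Breaks down at triangle edges, where the neighbouring samples
>>     // belong to a different surface - hence the rounded-off look.
>>     vec3 n = normalize(cross(dx, dy));
>>     gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
>>   }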
>>
>> To be honest, rendering your gigantic object twice is probably more
>> expensive than replicating vertices...but a lot depends on how many
>> times each vertex is replicated.
>>
>> Another approach you might want to consider is "normal mapping".  The
>> idea is to store the surface normals for the object in a texture map.
>> In the fragment shader, you can use the texture RGB as the normal XYZ
>> (with appropriate scale & offset) and transform it into screen space.
>> Again, it's not a perfect technique - you'll get "rounded-off" corners
>> and the like.   But if it's storage you're concerned about, it won't
>> save you a thing - there are a lot more pixels than vertices!
>> However, if you're more worried about data transmission between CPU
>> and GPU - or about vertex shader performance - it's a good trick.
>> I like it for objects where you need a crazy mix of flat and smooth
>> parts.
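>>
>> (A minimal sketch of the fragment-shader side, assuming an
>> object-space normal map - uniform names are illustrative:)
>>
>>   precision mediump float;
>>   uniform sampler2D uNormalMap;
>>   uniform mat3 uNormalMatrix;   // object space -> eye space
>>   uniform vec3 uLightDir;       // normalized, in eye space
>>   varying vec2 vUV;
>>   void main() {
>>     // Undo the scale & offset: RGB in [0,1] was packed from [-1,1].
>>     vec3 n = texture2D(uNormalMap, vUV).rgb * 2.0 - 1.0;
>>     n = normalize(uNormalMatrix * n);
>>     float diffuse = max(dot(n, uLightDir), 0.0);
>>     gl_FragColor = vec4(vec3(diffuse), 1.0);
>>   }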
>>
>> Also, you can do things like computing the normal map from a model
>> with a higher vertex count than the one you render in realtime - and
>> that can be a massive saving.
>>
>> Good luck!
>>
>>  -- Steve


