I think you have probably guessed that I am going to say no. Let's run the example I outlined in my previous reply through it.

Originally Posted by marcus
Firstly, let's pin down a precise definition of the term framebuffer. I said before I would prefer to call it outcome, but framebuffer is probably more easily recognised. Both refer to the current colour of a point in the 2d coordinate space.
So, looking at the geometry that is rendered first: the points in the 2d coordinate space which correspond to this geometry would receive
F = S
F = 0 0 1 1
The geometry that is rendered second is more complex, because some points in the 2d coordinate space will receive colour from a part of the texture where the colour is 0 1 0 1, and some will receive a colour of 0 0 0 0. Because of the way we have arranged our orthographic camera, the 3d points of the second geometry map onto the same points in 2d space as the first geometry. So, taking the 0 1 0 1 colour first, and using the equations from page 249 as the definition of how the composition S + T is going to work (I think that is what you meant, rather than a simple addition), we have values of 1 for <transparency> and 1 1 1 1 for <transparent> (the defaults).
F = (0 0 1 1) * (1 - 1 * 1) + (0 1 0 1) * (1 * 1)
F = 0 1 0 1
and for the 0 0 0 0 points
F = (0 0 1 1) * (1 - 1 * 1) + (0 0 0 0) * (1 * 1)
F = 0 0 0 0
i.e. a green and black chequerboard rather than the green and blue one we expect.
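To make the arithmetic above easy to check, here is a minimal sketch of how I am reading the page-249 composition, applied per component. The function name layer_result and its parameter defaults are my own; I am assuming the default <transparency> of 1 and <transparent> of 1 1 1 1 quoted above.

```python
def layer_result(F, T, transparency=1.0, transparent=(1, 1, 1, 1)):
    """My reading of the page-249 composition, per RGBA component:
    F' = F * (1 - transparency * transparent) + T * (transparency * transparent).

    F is the current framebuffer colour, T the texture colour (RGBA tuples).
    """
    return tuple(f * (1 - transparency * k) + t * (transparency * k)
                 for f, t, k in zip(F, T, transparent))

blue = (0, 0, 1, 1)  # framebuffer after the first geometry

# The 0 1 0 1 texels: the framebuffer becomes green.
print(layer_result(blue, (0, 1, 0, 1)))
# The 0 0 0 0 texels: the blue is wiped out and replaced with black,
# because the weighting ignores the texel's own alpha.
print(layer_result(blue, (0, 0, 0, 0)))
```

With the default values, the weighting factor is 1 for every point, so the framebuffer always takes the texture colour unchanged, including the fully transparent 0 0 0 0 texels. That is where the black squares come from.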
So I would argue that, for things to work as expected, our abstract pipeline must include an "output merger" phase, as you show in figure 5.5 on page 94 of your book, and we need to know how that merger phase operates. (I would also suggest that the "layer result" equations should not include framebuffer terms, but let's leave that for later.)
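For contrast, here is a sketch of one conventional way an output-merger phase resolves this: standard source-over blending keyed on the incoming fragment's own alpha. This is my illustration of a merger rule, not a claim about what the book specifies; the name src_over is hypothetical.

```python
def src_over(F, S):
    """Blend source colour S over framebuffer colour F using S's alpha:
    F' = S * S_a + F * (1 - S_a), per RGBA component.
    """
    a = S[3]  # the source fragment's own alpha, not a per-layer field
    return tuple(s * a + f * (1 - a) for f, s in zip(F, S))

blue = (0, 0, 1, 1)  # framebuffer after the first geometry

# Opaque green texel (alpha 1) replaces the blue.
print(src_over(blue, (0, 1, 0, 1)))
# Fully transparent texel (alpha 0) leaves the blue intact.
print(src_over(blue, (0, 0, 0, 0)))
```

Under this rule the 0 0 0 0 texels leave the blue squares visible, giving the green and blue chequerboard one would expect, which is why the behaviour of the merger phase matters so much.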