Thread: Transparent and transparency revisited

  1. #11
    Junior Member
    Join Date
    Nov 2008
    Location
    Lancaster - UK
    Posts
    24
    Quote Originally Posted by marcus
    Quote Originally Posted by rogerjames99
    To turn on alpha blending the spec says I have to include a transparent or a transparency term in the technique.
    Hmm... the spec says that "If either <transparent> or <transparency> exists then transparency rendering is activated". Probably "activated" is a bad choice of words since activation/deactivation of a run-time mode is not the point of that section, rather the blending equations are.
    Quote Originally Posted by rogerjames99
    The equations then work out like this

    result.rgb = fb.rgb * (1.0f - 1.0f * 1.0f) + mat.rgb *
    (1.0f * 1.0f)
    result.a = fb.a * (1.0f - 1.0f * 1.0f) + mat.a *
    (1.0f * 1.0f)

    Which simplifies to

    result.rgb = mat.rgb
    result.a = mat.a

    i.e. The visible rgb rendering ignores any alpha value from the image which is rendered opaque.
    Pardon me but mat.a includes the diffuse texture alpha from the <phong> shader. It's not ignored.

    Hope that clears things up (pun intended)
    Mark,

    Groan...the only thing I can think of is that I am misinterpreting what the result variable represents.

    I had assumed that it was what was to be written back into the frame buffer as the end result of both the fragment shading and alpha blending processes, and that in OpenGL terms I should set up both the fragment shader and alpha blender to achieve this value of result in the frame buffer. Am I incorrect in this? Is result solely the output of the fragment shading process, to which a subsequent alpha blending process will be applied? Is it then this subsequent process that takes the value from result.a and uses it to blend the values from result.rgb into the frame buffer, according to some other set of equations implied but not defined in the spec?

    That is the only way I can see at the moment that result.a (i.e. mat.a) could be used to blend the rgb values from result into the frame buffer in the way I would expect an image with a transparency map encoded in its alpha channel to be handled.

    If this is the case then, to save yet another exchange of messages, I will ask what is to me the obvious question: what is the subsequent set of alpha blending equations that is implied?
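
    To make this concrete, here is a sketch of the fixed-function OpenGL setup I would expect to implement those implied equations. This is only my assumption of an "over" style blend, not something the spec states:

    /* blend the incoming fragment (result) over the frame buffer,
       weighting it by result.a and the frame buffer by 1 - result.a */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);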

    Thanks for your patience. I think I am getting a bit obsessive about this now and maybe I should leave it for someone else to sort out!

    Roger

  2. #12
    Senior Member
    Join Date
    Aug 2004
    Location
    California
    Posts
    771
    Quote Originally Posted by rogerjames99
    ...the only thing I can think of is that I am misinterpreting what the result variable represents.

    I had assumed that it was what was to be written back into the frame buffer as the end result
    ... of the rendering calculation (blending) described up to that point.
    Quote Originally Posted by rogerjames99
    of both the fragment shading and alpha blending processes, and that in OpenGL terms I should set up both the fragment shader and alpha blender to achieve this value of result in the frame buffer. Am I incorrect in this?
    Firstly, this is a great conversation. Thanks for sharing your thoughts; you have highlighted spec bugs and areas that need clarification as we work toward a consensus.

    Okay so, COLLADA <profile_COMMON> is not describing OpenGL operations, so your position statement is, uhm, too pedantic. Yes, you are trying to map it to the OpenGL fixed-function pipeline, and yes, that is subject to interpretation. We are exploring how best to do that...
    Quote Originally Posted by rogerjames99
    Is result solely the output of the fragment shading process to which a subsequent alpha blending process will be applied?
    Yes, I think we had established that with a spec bug against the <transparent> and <reflective> "layers" within the <phong> and <blinn> shaders. OpenGL doesn't handle those extra two (software renderer) layers, right?

    So you can ignore those layers (like the ColladaLoader does iirc) in your approximation or as (I think) we have been doing... figure out how the transparent part fits in the OpenGL context. Let's back up and review the set of assertions and see where we are still diverging:
    1. The shader (e.g. <phong>) "surface layer" calculation yields the "mat" values. This does include alpha values from e.g. <diffuse>.
    2. The shader "transparent layer" calculation yields the "result" values, blending the surface and transparent layers. The <transparent> colors are not included in "mat".
    3. Ignoring <reflective> for now.
    4. Making "result" (pg 249) the final value of interest for OpenGL fixed-function.

    I think then we can agree on a good combination of glBlendFunc and/or glTexEnv for your implementation.
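
    For example, one candidate mapping might be the following. This is only a sketch under the assertions above (and assuming opaque="A_ONE" with a transparency of 1.0), not normative COLLADA semantics:

    /* let the diffuse texture modulate the material color so that
       mat.a carries the texture's alpha through to result.a */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    /* then blend "result" over the frame buffer using result.a */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);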

  3. #13
    Junior Member
    Join Date
    Nov 2008
    Location
    Lancaster - UK
    Posts
    24
    Mark,

    It is the end of the day here now, so I only have time for a quick reply. I think we are now closing in on a solution. I will need to consider your response more closely tomorrow and respond more fully then. I fully realise that COLLADA is a specification for the exchange of descriptions of 3D digital assets and, as a specification, should as far as possible be implementation neutral. What follows are some of my ramblings about syntax, semantics, and linguistic exchanges.

    What I have always struggled with is that what the specification describes in detail is mostly the syntax of the dae information interchange and not the semantics of the information that is being exchanged. The developers of the specification, yourself included, must have reached a consensus on a common abstract semantic model. This model may only ever have existed in the shared experience of the working group and probably could never be fully documented. But enough of that semantic model must be communicated in the specification for implementors such as myself to understand how to translate the information conveyed in the dae exchange into a concrete implementation.

    A large number of the implementors of dae readers and writers will be people using dae to exchange content between content creation tools (Max, Maya, Blender, etc.). These implementors will probably be translating between the dae semantic model and their own semantic model for describing 3D assets. A smaller number of implementors, such as myself, will be working on scene graph importers/exporters, which will be translating to and from a much more restrictive 3D rendering pipeline such as OpenGL.

    I would suggest that the point at which we all converge is "how things look". So maybe a description of the abstract "COLLADA rendering pipeline", for want of a better term, is what is needed.

    Can I suggest that we use a standard example for any future discussions we have on blending? Let us assume for a start (more complex scenarios can come later) that the frame buffer contains solid opaque blue, encoded RGBA like this: 0.0 0.0 1.0 1.0; that I have an image of a green and black chequerboard where the green is opaque, encoded like this: 0.0 1.0 0.0 1.0, and the black is transparent, encoded like this: 0.0 0.0 0.0 0.0. I want to use this image as a material, and what I want to see eventually in the frame buffer is a green and blue chequerboard.
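
    Worked through with the straightforward "over" blend I would expect (again my assumption, not the spec's wording), the two texel cases give exactly that outcome:

    green texel (alpha 1.0): result.rgb = fb.rgb * (1.0 - 1.0) + tex.rgb * 1.0 = 0.0 1.0 0.0 (green)
    black texel (alpha 0.0): result.rgb = fb.rgb * (1.0 - 0.0) + tex.rgb * 0.0 = 0.0 0.0 1.0 (blue, from the frame buffer)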

    Just one final point, and an OpenGL one I am afraid. As you may be aware, standard OpenGL alpha blending is performed in the pipeline after the pieces of fixed functionality that can be replaced by a programmable shader. You can mimic it in a programmable shader, but then it may well be applied unnecessarily to fragments that would be discarded by one of the per-fragment logical tests that occur after the programmable shader has run but before alpha blending would normally be performed. So there are good reasons to use the fixed alpha blending functions if one can.
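
    In pipeline order, roughly (a sketch only; myProgram is just a placeholder, and the ordering is the point rather than the particular calls):

    glUseProgram(myProgram);  /* replaces only the programmable fragment stage */
    glEnable(GL_DEPTH_TEST);  /* the per-fragment tests run after the shader... */
    glEnable(GL_BLEND);       /* ...and fixed-function blending runs after those
                                 tests, so discarded fragments are never blended */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);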

    This is longer than I intended. Time I was not here!

    Roger

  4. #14
    Junior Member
    Join Date
    Nov 2008
    Location
    Lancaster - UK
    Posts
    24
    Mark,

    I have had time to sleep on this now. Although I admit I spent some time awake going over it! I will respond to your points in what I hope is a more logical order and not the order they appear in the reply (I realise that that order was determined by my original message). To avoid confusion with the result variable we have been talking about I am going to use the term outcome to denote the eventual visual result of whatever rendering process is used to view the model, be this a rasterizer or a ray tracer or some other process.
    Quote Originally Posted by marcus
    Quote Originally Posted by rogerjames99
    Is result solely the output of the fragment shading process to which a subsequent alpha blending process will be applied?
    Yes I think we had established that with a spec bug against <transparent> and <reflective> "layers" within the <phong> and <blinn> shaders. OpenGL doesn't handle those extra two (software renderer) layers right?
    I think you are answering yes to my question here. I think this is the moot point. If you are actually answering no, then you can ignore most of the rest of this message and skip to the bottom. As an aside, I can only access the public bugzilla, and the bug that I think you are referring to only contains a reference to a bug in the private bugzilla, so I cannot see what is in it. But putting that aside: if you are answering yes, then that leads me to two ancillary questions/observations.

    i. What are the equations for the subsequent alpha blending process? Do the values of the transparent and transparency elements from the technique affect them, especially the value of opaque? These equations need to be specified if implementors are to map the abstract COLLADA rendering model into their own rendering model.

    ii. The "subsequent blending process" I refer to will determine how the shaded (with "result") mesh is blended with the current contents of "outcome" (in my case the frame buffer). In view of this and the following two statements.
    Quote Originally Posted by marcus
    1. The shader (e.g. <phong>) "surface layer" calculation yields the "mat" values. This does include alpha values from e.g. <diffuse>.
    2. The shader "transparent layer" calculation yields the "result" values, blending the surface and transparent layers. The <transparent> colors are not included in "mat".
    I am slightly surprised by the inclusion of a fb (framebuffer) variable in the "transparent layer" equations. Your answer to this could of course be, "That is the way we want it to be", but that would make it difficult from my point of view to map onto OpenGL fixed functionality.

    Quote Originally Posted by marcus
    Quote Originally Posted by rogerjames99
    ...the only thing I can think of is that I am misinterpreting what the result variable represents.

    I had assumed that it was what was to be written back into the frame buffer as the end result
    ... of the rendering calculation (blending) described up to that point.
    Your statement here does not appear consistent with a yes answer to my "subsequent alpha blending process" question.
    Quote Originally Posted by marcus
    1. The shader (e.g. <phong>) "surface layer" calculation yields the "mat" values. This does include alpha values from e.g. <diffuse>.
    2. The shader "transparent layer" calculation yields the "result" values, blending the surface and transparent layers. The <transparent> colors are not included in "mat".
    3. Ignoring <reflective> for now.
    4. Making "result" (pg 249) the final value of interest for OpenGL fixed-function.

    I think then we can agree on a good combination of glBlendFunc and/or glTexEnv for your implementation.
    a. Yes agreed.
    b. Yes agreed, but subject to my comments about the inclusion of the frame buffer in the equations.
    c. Yes agreed.
    d. No. This is not consistent with your yes answer above. I would say this makes result the input to a subsequent blending process which determines the value of "outcome". In OpenGL terms this would be the fixed functionality alpha blending process which takes the current contents of the frame buffer, the shaded incoming fragment (result), and blends them back into the frame buffer.

    To help us proceed, could you bear with me and tell me how you would write a COLLADA phong technique that would produce the "outcome" I described for my test model in my previous post?

    Roger

  5. #15
    Junior Member
    Join Date
    Nov 2008
    Location
    Lancaster - UK
    Posts
    24
    Mark,

    I hope my last set of replies did not put you off completely. I have been looking at the documentation for 3ds Max and Maya and trying to get an understanding of things from a content creator's point of view. The more I look at that documentation, the more I think that the equations in the spec are a little misleading. I think they need to be split into two. Using the excellent diagram (Figure 5.5) on page 94 of your book, I would suggest that the first set should describe how the various terms are used in the "Fragment Shader" phase, and the second set should describe how they are used in (or affect) the subsequent "Output Merger" phase. That would also help my understanding of how these things should work in profiles other than the common profile, especially where those profiles permit multiple techniques per effect. I would also really appreciate any comments you have on my previous posts.

    Roger

  6. #16
    Senior Member
    Join Date
    Aug 2004
    Location
    California
    Posts
    771
    Quote Originally Posted by rogerjames99
    What I have always struggled with is that what the specification describes in detail is mostly the syntax of the dae information interchange and not the semantics of the information that is being exchanged. The developers of the specification, yourself included, must have reached a consensus on a common abstract semantic model. This model may only ever have existed in the shared experience of the working group and probably could never be fully documented. But enough of that semantic model must be communicated in the specification for implementors such as myself to understand how to translate the information conveyed in the dae exchange into a concrete implementation.
    Hi Roger, thanks for sharing your thoughts.

    COLLADA carries information along a content pipeline from source (e.g. a DCC tool) to sink (e.g. game engine data) in an ideally policy-free manner. We want to transport the data and metadata as neutrally as possible without dictating how it is used by tools in that pipeline. This has mostly been accomplished, other than in the vendor-specific effects profiles (e.g. GLSL) that actually can define a specific implementation's rendering configuration.
    Quote Originally Posted by rogerjames99
    I would suggest that the point at which we all converge is "how things look". So maybe a description of the abstract "COLLADA rendering pipeline", for want of a better term, is what is needed.
    COLLADA strives not to say how things look to that degree. It's OK for you to take a visual scene and render it however you like for your use-case, interpreting as much of the information as you want to process. A tool that exports COLLADA should include enough information so that its own semantic model can be conveyed to subsequent tools (including on re-import). COLLADA is supposed to be flexible and extensible enough to support this model of semantics without ownership of them.

    I promise to return to this thread soon to continue the conversation.

  7. #17
    Junior Member
    Join Date
    Nov 2008
    Location
    Lancaster - UK
    Posts
    24
    Quote Originally Posted by marcus
    I promise to return to this thread soon to continue the conversation.
    Mark,

    Thanks for your reply. I realise that you have many things to do other than responding to my ramblings. I look forward to picking up this conversation in due course. I have actually enjoyed the break; I was starting to wake up in the night thinking about this!

    Roger

  8. #18
    Senior Member
    Join Date
    Aug 2004
    Location
    California
    Posts
    771
    Quote Originally Posted by rogerjames99
    Can I suggest that we use a standard example for any future discussions we have on blending? Let us assume for a start (more complex scenarios can come later) that the frame buffer contains solid opaque blue, encoded RGBA like this: 0.0 0.0 1.0 1.0; that I have an image of a green and black chequerboard where the green is opaque, encoded like this: 0.0 1.0 0.0 1.0, and the black is transparent, encoded like this: 0.0 0.0 0.0 0.0. I want to use this image as a material, and what I want to see eventually in the frame buffer is a green and blue chequerboard.
    I think this example needs to be restated in terms of geometry and materials to fit into the context of this thread. Otherwise this could be considered a full-screen effect and that is something else altogether.

    For example, as geometry, consider two full-screen quads that each have a material that is a <constant> shader with <emission><color>0.0 0.0 1.0 1.0</color></emission> and <emission><texture texture="green_checkerboard.png" texcoord="#my_texcords" /></emission> respectively, and drawn in that order.

  9. #19
    Senior Member
    Join Date
    Aug 2004
    Location
    California
    Posts
    771
    Quote Originally Posted by rogerjames99
    To avoid confusion with the result variable we have been talking about I am going to use the term outcome to denote the eventual visual result of whatever rendering process is used to view the model, be this a rasterizer or a ray tracer or some other process.
    Okay.
    Quote Originally Posted by rogerjames99
    Quote Originally Posted by marcus
    Quote Originally Posted by rogerjames99
    Is result solely the output of the fragment shading process to which a subsequent alpha blending process will be applied?
    Yes I think we had established that with a spec bug against <transparent> and <reflective> "layers" within the <phong> and <blinn> shaders. OpenGL doesn't handle those extra two (software renderer) layers right?
    I think you are answering yes to my question here. I think this is the moot point. If you are actually answering no then you can ignore most of the rest of this message and skip to the bottom.
    I was answering 'yes' with a caveat, because you asked "solely the output of the fragment shading process". That's fairly restrictive and implementation-centric.

    What I did want to convey is that, to me at least and subject to concurrence, we have been discussing: the composition of visual layers, COLLADA's <profile_COMMON> data model for that, and ultimately a mapping to OpenGL fixed-function that you can use. We are identifying areas of the COLLADA spec that need clarification.
    Quote Originally Posted by rogerjames99
    As an aside, I can only access the public bugzilla and the bug that I think you are referring to only contains a reference to a bug in the private bugzilla so I cannot see what is in it.
    I'll see what I can copy over to the public bug.
    Quote Originally Posted by rogerjames99
    i. What are the equations for the subsequent alpha blending process? Do the values of the transparent and transparency elements from the technique affect them, especially the value of opaque? These equations need to be specified if implementors are to map the abstract COLLADA rendering model into their own rendering model.
    The COLLADA common profile describes three layers of appearance for geometry: surface, transparent, and reflective (<constant> being the simplest shader). The spec has equations for two of the layers: surface (e.g. <constant>) and transparent (e.g. pg 249).

    We've identified that an equation that adds in the reflective layer is missing from the spec. Lacking recognition of that layer, the transparent equation calls its result "framebuffer" when it is actually just a "layer result" of blending the surface and transparent layers. Given that many renderers do not have additional layers, this may well be the final framebuffer result in many cases.

    The common profile is fairly simple, so would it be enough to say that the abstract pipeline is a simple composition of F = S + T + R layers? Plus <extra> layers too?
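
    In the equation style used earlier in this thread, that composition might read like this (only a sketch assuming simple layer-over-layer blending; the reflective equation is the one still missing from the spec):

    layer = blend(surface, transparent)   -- the pg 249 equations
    F = blend(layer, reflective)          -- equation to be specified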

  10. #20
    Junior Member
    Join Date
    Nov 2008
    Location
    Lancaster - UK
    Posts
    24
    Quote Originally Posted by marcus
    Quote Originally Posted by rogerjames99
    Can I suggest that we use a standard example for any future discussions we have on blending? Let us assume for a start (more complex scenarios can come later) that the frame buffer contains solid opaque blue, encoded RGBA like this: 0.0 0.0 1.0 1.0; that I have an image of a green and black chequerboard where the green is opaque, encoded like this: 0.0 1.0 0.0 1.0, and the black is transparent, encoded like this: 0.0 0.0 0.0 0.0. I want to use this image as a material, and what I want to see eventually in the frame buffer is a green and blue chequerboard.
    I think this example needs to be restated in terms of geometry and materials to fit into the context of this thread. Otherwise this could be considered a full-screen effect and that is something else altogether.

    For example, as geometry, consider two full-screen quads that each have a material that is a <constant> shader with <emission><color>0.0 0.0 1.0 1.0</color></emission> and <emission><texture texture="green_checkerboard.png" texcoord="#my_texcords" /></emission> respectively, and drawn in that order.
    Agreed, but I would extend the example further to avoid confusion over what you mean by full screen. Can we say that our example is a COLLADA <visual_scene> containing the two quad geometries and an orthographic camera, arranged in such a way that the camera is looking directly at the quad with the chequerboard texture and the plain quad is positioned a little way behind the textured quad?

    I think this results in a document similar to this one, which I created using SketchUp and hand edited (I did not bother to change the triangles to quads, but the idea is there I think):

    <?xml version="1.0" encoding="utf-8"?>
    <COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
    <asset>
    <unit name="meters" meter="1.0"/>
    <up_axis>Z_UP</up_axis>
    </asset>
    <library_images>
    <image id="material_1_1_0-image" name="material_1_1_0-image">
    <init_from>chequerboard.png</init_from>
    </image>
    </library_images>
    <library_materials>
    <material id="material_0_0ID" name="material_0_0">
    <instance_effect url="#material_0_0-effect"/>
    </material>
    <material id="material_1_1_0ID" name="material_1_1_0">
    <instance_effect url="#material_1_1_0-effect"/>
    </material>
    </library_materials>
    <library_effects>
    <effect id="material_0_0-effect" name="material_0_0-effect">
    <profile_COMMON>
    <technique sid="COMMON">
    <constant>
    <emission>
    <color>0.000000 0.000000 1.000000 1</color>
    </emission>
    </constant>
    </technique>
    </profile_COMMON>
    </effect>
    <effect id="material_1_1_0-effect" name="material_1_1_0-effect">
    <profile_COMMON>
    <newparam sid="material_1_1_0-image-surface">
    <surface type="2D">
    <init_from>material_1_1_0-image</init_from>
    </surface>
    </newparam>
    <newparam sid="material_1_1_0-image-sampler">
    <sampler2D>
    <source>material_1_1_0-image-surface</source>
    </sampler2D>
    </newparam>
    <technique sid="COMMON">
    <constant>
    <emission>
    <texture texture="material_1_1_0-image-sampler" texcoord="UVSET0"/>
    </emission>
    <transparency>
    <float>1.000000</float>
    </transparency>
    </constant>
    </technique>
    </profile_COMMON>
    </effect>
    </library_effects>
    <library_geometries>
    <geometry id="mesh1-geometry" name="mesh1-geometry">
    <mesh>
    <source id="mesh1-geometry-position">
    <float_array id="mesh1-geometry-position-array" count="12">1.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 1.000000 0.000000 0.000000 </float_array>
    <technique_common>
    <accessor source="#mesh1-geometry-position-array" count="4" stride="3">
    <param name="X" type="float"/>
    <param name="Y" type="float"/>
    <param name="Z" type="float"/>
    </accessor>
    </technique_common>
    </source>
    <source id="mesh1-geometry-normal">
    <float_array id="mesh1-geometry-normal-array" count="3">-0.000000 -0.000000 1.000000 </float_array>
    <technique_common>
    <accessor source="#mesh1-geometry-normal-array" count="1" stride="3">
    <param name="X" type="float"/>
    <param name="Y" type="float"/>
    <param name="Z" type="float"/>
    </accessor>
    </technique_common>
    </source>
    <source id="mesh1-geometry-uv">
    <float_array id="mesh1-geometry-uv-array" count="8">-39.370079 39.370079 0.000000 0.000000 0.000000 39.370079 -39.370079 0.000000 </float_array>
    <technique_common>
    <accessor source="#mesh1-geometry-uv-array" count="4" stride="2">
    <param name="S" type="float"/>
    <param name="T" type="float"/>
    </accessor>
    </technique_common>
    </source>
    <vertices id="mesh1-geometry-vertex">
    <input semantic="POSITION" source="#mesh1-geometry-position"/>
    </vertices>
    <triangles material="material_0_0" count="2">
    <input semantic="VERTEX" source="#mesh1-geometry-vertex" offset="0"/>
    <input semantic="NORMAL" source="#mesh1-geometry-normal" offset="1"/>
    <input semantic="TEXCOORD" source="#mesh1-geometry-uv" offset="2" set="0"/>


    0 0 0 1 0 1 2 0 2 1 0 1 0 0 0 3 0 3 </p>
    </triangles>
    </mesh>
    </geometry>
    <geometry id="mesh2-geometry" name="mesh2-geometry">
    <mesh>
    <source id="mesh2-geometry-position">
    <float_array id="mesh2-geometry-position-array" count="12">1.000000 0.000000 1.000000 0.000000 1.000000 1.000000 0.000000 0.000000 1.000000 1.000000 1.000000 1.000000 </float_array>
    <technique_common>
    <accessor source="#mesh2-geometry-position-array" count="4" stride="3">
    <param name="X" type="float"/>
    <param name="Y" type="float"/>
    <param name="Z" type="float"/>
    </accessor>
    </technique_common>
    </source>
    <source id="mesh2-geometry-normal">
    <float_array id="mesh2-geometry-normal-array" count="3">0.000000 0.000000 1.000000 </float_array>
    <technique_common>
    <accessor source="#mesh2-geometry-normal-array" count="1" stride="3">
    <param name="X" type="float"/>
    <param name="Y" type="float"/>
    <param name="Z" type="float"/>
    </accessor>
    </technique_common>
    </source>
    <source id="mesh2-geometry-uv">
    <float_array id="mesh2-geometry-uv-array" count="8">1.000000 0.000000 0.000000 1.000000 0.000000 0.000000 1.000000 1.000000 </float_array>
    <technique_common>
    <accessor source="#mesh2-geometry-uv-array" count="4" stride="2">
    <param name="S" type="float"/>
    <param name="T" type="float"/>
    </accessor>
    </technique_common>
    </source>
    <vertices id="mesh2-geometry-vertex">
    <input semantic="POSITION" source="#mesh2-geometry-position"/>
    </vertices>
    <triangles material="material_1_1_0" count="2">
    <input semantic="VERTEX" source="#mesh2-geometry-vertex" offset="0"/>
    <input semantic="NORMAL" source="#mesh2-geometry-normal" offset="1"/>
    <input semantic="TEXCOORD" source="#mesh2-geometry-uv" offset="2" set="0"/>


    0 0 0 1 0 1 2 0 2 1 0 1 0 0 0 3 0 3 </p>
    </triangles>
    </mesh>
    </geometry>
    </library_geometries>
    <library_cameras>
    <camera id="Camera-camera" name="Camera-camera">
    <optics>
    <technique_common>
    <orthographic>
    <xmag>1.862633</xmag>
    <ymag>1.396975</ymag>
    <znear>0.025400</znear>
    <zfar>25.400000</zfar>
    </orthographic>
    </technique_common>
    </optics>
    </camera>
    </library_cameras>
    <library_visual_scenes>
    <visual_scene id="SketchUpScene" name="SketchUpScene">
    <node id="Model" name="Model">
    <node id="mesh1" name="mesh1">
    <instance_geometry url="#mesh1-geometry">
    <bind_material>
    <technique_common>
    <instance_material symbol="material_0_0" target="#material_0_0ID"/>
    </technique_common>
    </bind_material>
    </instance_geometry>
    </node>
    <node id="mesh2" name="mesh2">
    <instance_geometry url="#mesh2-geometry">
    <bind_material>
    <technique_common>
    <instance_material symbol="material_1_1_0" target="#material_1_1_0ID">
    <bind_vertex_input semantic="UVSET0" input_semantic="TEXCOORD" input_set="0"/>
    </instance_material>
    </technique_common>
    </bind_material>
    </instance_geometry>
    </node>
    </node>
    <node id="Camera" name="Camera">
    <matrix>
    -0.000159 1.000000 0.000336 0.505289
    -1.000000 -0.000159 -0.000000 0.575529
    0.000000 -0.000336 1.000000 2.717795
    0.000000 0.000000 0.000000 1.000000
    </matrix>
    <instance_camera url="#Camera-camera"/>
    </node>
    </visual_scene>
    </library_visual_scenes>
    <scene>
    <instance_visual_scene url="#SketchUpScene"/>
    </scene>
    </COLLADA>

    Roger
