
Thread: GPU friendly data

  1. #1
    Junior Member
    Join Date
    Mar 2005
    San Diego

    GPU friendly data

    What's the best way to unroll this multiply-indexed vertex source data into a GPU friendly format (i.e. singly-indexed)? I could write some very expensive vertex comparison/uniqueness testing code and rebuild all the vertex attribute arrays to be the same size and singly indexed (GPU's want it this way, as do tri-strippers, and just about every other mesh processing algorithm I can think of...)

    Is there a way to get the exporter to spit it out this way from the get go?


    Max Elliott
    Sony Computer Entertainment America

  2. #2
    Junior Member
    Join Date
    Nov 2004
    I don't think the exporter is intended to generate data in different formats -- it's intended as a source format, not an export format, so it can't simplify or remove information. You're outta luck there. However, uniqueness testing isn't terribly expensive if you use a hash table to find matching indices -- the algorithm is O(N). Unfortunately my code isn't easily shareable, because my parser is C++ with some STL and template machinery going on (I have a template class for storing arrays, so I don't have to write the same code multiple times).

    I build a hash table whose size is the nearest power of 2 greater than the total number of polygon vertices (by "polygon vertex" I mean the collection of attribute indices for a single vertex), which keeps the algorithm O(N) regardless of the number of polygons. I hash the indices for a single vertex into a single value, take it modulo the hash table size (a bitwise AND with size - 1, since the size is a power of 2), then search the linked list of vertices at that slot in the hash table. I first compare hash values; if they are equal, I then compare the actual indices. If the indices also match, I set a pointer in the vertex to the "found" vertex, marking it as a duplicate. Otherwise, I add the vertex to the hash table as a "unique" vertex and set its "found" pointer to NULL.

    When that's done, each vertex is marked as unique or duplicate, with each duplicate holding a pointer to its unique counterpart. At that point it is trivial to traverse the polygon vertex array again and assign a "unique" index to each vertex: each newly found unique vertex gets the next unique index (incrementing the counter), and each duplicate gets the unique index from the vertex it points to.

    The last question is what to do for a hash function. I'm particularly fond of this little hash function, which I've been using for ages:

    Code :
    unsigned hash(const char *str)
    {
        unsigned n = 0;  // Could use other initial value besides zero...
        for(; *str; str++) {
            n = n*131 + (*str);
        }
        return n;
    }

    I use it with the list of indices instead of string characters. Hope this helps!
