Neon optimization


ptitSeb

In the port of Jedi Knight II, I'm trying to get a few more fps, and converting some of the vector math code to NEON. The vector code of this engine is based on float[3], not 4, and because it's deep inside the engine, I cannot really change that without rewriting a large part of the game, which I don't want to do.

I used code from here: https://code.google.com/p/math-neon/ and tried to adapt it, but it just doesn't work, and I fail to see why.


#ifdef NEON
inline vec_t DotProduct( const vec3_t v1, const vec3_t v2 ) {
        asm volatile (
        "vld1.32                {d2}, [%0]                      \n\t"   //d2={x0,y0}
        "flds                   s6, [%0, #8]            \n\t"   //d3[0]={z0}
        "vld1.32                {d4}, [%1]                      \n\t"   //d4={x1,y1}
        "flds                   s10, [%1, #8]   \n\t"   //d5[0]={z1}
 
        "vmul.f32               d0, d2, d4                      \n\t"   //d0= d2*d4
        "vpadd.f32              d0, d0, d0                      \n\t"   //d0 = d[0] + d[1]
        "vmla.f32               d0, d3, d5                      \n\t"   //d0 = d0 + d3*d5 
        "vmov.f32 r0, s0   \n\t" // softfp
        :: "r"(v1), "r"(v2) 
    : "d0","d1","d2","d3","d4","d5"
        );      
 
}
#else
inline vec_t DotProduct( const vec3_t v1, const vec3_t v2 ) {
return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2];
}
#endif

vec_t is float, and vec3_t is float[3]

I guess that with softfp you cannot just return a float using r0, but I haven't seen how to return a value other than by using r0 :( . gcc inline assembler is new for me.

Any advice?
 
How is vec_t defined?

You can't assume that if you put a value in r0 it will become the return value, because the compiler is free to insert extra code before and after your asm volatile statement that uses any register (for prologue/epilogue/stack setup/whatever).

You should do something like this:

Code:
inline vec_t DotProduct( const vec3_t v1, const vec3_t v2 ) {
    vec_t ret;
    asm volatile (
    "vld1.32 {d2}, [%1]"
        ....
    "vmov.f32  %0, s0"
    : "=r"(ret)
    : "r"(v1), "r"(v2)
    : "d0","d1","d2","d3","d4","d5"
    );
    return ret;
}
Note that %0 and %1 become %1 and %2 because of ret.
 
OOOh, %0 is not v1 anymore if I add a ret argument! That explains why it didn't work :(

Thanks a lot, I'll try that (that gcc syntax is still a foreign language for me).
 
I think even if you get this working it won't be that much faster than the original. To really get much of a speedup you need to be able to do a batch of several dot products.
 
I could definitely use something like that for texture generation in libGL.
 
In the loops, there are rarely more than 3 dot products in a row. So maybe I have to look at some "mass dot product" of arrays, inside loops. Hum, that may be possible in some places, I have to see. Once I have the single one working, I'll look at potential clients for that.
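[Editor's sketch, not from the original post: one way such a "mass dot product" over an array of vec3_t could look with NEON intrinsics. The function name, the batch size of 4 and the assumption that count is a multiple of 4 are mine, and it assumes GCC with -mfpu=neon.]

Code:
#include <arm_neon.h>

/* Hypothetical batched dot product: processes 4 float[3] pairs per iteration.
 * Assumes count is a multiple of 4. */
static void DotProduct4(const float *v1, const float *v2, float *out, int count)
{
    int i;
    for (i = 0; i < count; i += 4) {
        /* vld3q_f32 loads 4 consecutive float[3] vectors and deinterleaves
         * them into separate x, y and z lanes. */
        float32x4x3_t a = vld3q_f32(v1 + 3 * i);
        float32x4x3_t b = vld3q_f32(v2 + 3 * i);

        float32x4_t dot = vmulq_f32(a.val[0], b.val[0]);   /* x0*x1         */
        dot = vmlaq_f32(dot, a.val[1], b.val[1]);          /* + y0*y1       */
        dot = vmlaq_f32(dot, a.val[2], b.val[2]);          /* + z0*z1       */

        vst1q_f32(out + i, dot);                           /* 4 results out */
    }
}
This keeps everything in NEON registers and only touches memory for loads and stores, so it avoids the NEON-to-ARM transfer that a single scalar-returning DotProduct needs.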
 
Three in a row is a good start. You could probably do that in about the same speed you're currently doing one.
 
For example, in this function, there are many "groups of 3", so I can probably try to neonize the groups...

void R_TransformEachSurface( const mdxmSurface_t *surface, vec3_t scale, CMiniHeap *G2VertSpace, int *TransformedVertsArray,CBoneCache *boneCache)
{
    int                 j, k;
    mdxmVertex_t     *v;
    float            *TransformedVerts;

    //
    // deform the vertexes by the lerped bones
    //
    int *piBoneReferences = (int*) ((byte*)surface + surface->ofsBoneReferences);
   
    // alloc some space for the transformed verts to get put in
    TransformedVerts = (float *)G2VertSpace->MiniHeapAlloc(surface->numVerts * 5 * 4);
    TransformedVertsArray[surface->thisSurfaceIndex] = (int)TransformedVerts;
    if (!TransformedVerts)
    {
        Com_Error(ERR_DROP, "Ran out of transform space for Ghoul2 Models. Adjust MiniHeapSize in SV_SpawnServer.\n");
    }

    // whip through and actually transform each vertex
    const int numVerts = surface->numVerts;
    v = (mdxmVertex_t *) ((byte *)surface + surface->ofsVerts);
    mdxmVertexTexCoord_t *pTexCoords = (mdxmVertexTexCoord_t *) &v[numVerts];

    // optimisation issue
    if ((scale[0] != 1.0f) || (scale[1] != 1.0f) || (scale[2] != 1.0f))
    {
        for ( j = 0; j < numVerts; j++ )
        {
            vec3_t            tempVert, tempNormal;
//            mdxmWeight_t    *w;

            VectorClear( tempVert );
            VectorClear( tempNormal );
//            w = v->weights;

            const int iNumWeights = G2_GetVertWeights( v );

            float fTotalWeight = 0.0f;
            for ( k = 0 ; k < iNumWeights ; k++ )
            {
                int        iBoneIndex    = G2_GetVertBoneIndex( v, k );
                float    fBoneWeight    = G2_GetVertBoneWeight( v, k, fTotalWeight, iNumWeights );

                const mdxaBone_t &bone=EvalBoneCache(piBoneReferences[iBoneIndex],boneCache);

                tempVert[0] += fBoneWeight * ( DotProduct( bone.matrix[0], v->vertCoords ) + bone.matrix[0][3] );
                tempVert[1] += fBoneWeight * ( DotProduct( bone.matrix[1], v->vertCoords ) + bone.matrix[1][3] );
                tempVert[2] += fBoneWeight * ( DotProduct( bone.matrix[2], v->vertCoords ) + bone.matrix[2][3] );

                tempNormal[0] += fBoneWeight * DotProduct( bone.matrix[0], v->normal );
                tempNormal[1] += fBoneWeight * DotProduct( bone.matrix[1], v->normal );
                tempNormal[2] += fBoneWeight * DotProduct( bone.matrix[2], v->normal );
            }
            int pos = j * 5;

            // copy tranformed verts into temp space
            TransformedVerts[pos++] = tempVert[0] * scale[0];
            TransformedVerts[pos++] = tempVert[1] * scale[1];
            TransformedVerts[pos++] = tempVert[2] * scale[2];
            // we will need the S & T coors too for hitlocation and hitmaterial stuff
            TransformedVerts[pos++] = pTexCoords[j].texCoords[0];
            TransformedVerts[pos] = pTexCoords[j].texCoords[1];

            v++;// = (mdxmVertex_t *)&v->weights[/*v->numWeights*/surface->maxVertBoneWeights];
        }
    }
    else
    {
        int pos = 0;
          for ( j = 0; j < numVerts; j++ )
        {
            vec3_t            tempVert, tempNormal;
//            const mdxmWeight_t    *w;

            VectorClear( tempVert );
            VectorClear( tempNormal );
//            w = v->weights;

            const int iNumWeights = G2_GetVertWeights( v );

            float fTotalWeight = 0.0f;
            for ( k = 0 ; k < iNumWeights ; k++ )
            {
                int        iBoneIndex    = G2_GetVertBoneIndex( v, k );
                float    fBoneWeight    = G2_GetVertBoneWeight( v, k, fTotalWeight, iNumWeights );

                const mdxaBone_t &bone=EvalBoneCache(piBoneReferences[iBoneIndex],boneCache);

                tempVert[0] += fBoneWeight * ( DotProduct( bone.matrix[0], v->vertCoords ) + bone.matrix[0][3] );
                tempVert[1] += fBoneWeight * ( DotProduct( bone.matrix[1], v->vertCoords ) + bone.matrix[1][3] );
                tempVert[2] += fBoneWeight * ( DotProduct( bone.matrix[2], v->vertCoords ) + bone.matrix[2][3] );

                tempNormal[0] += fBoneWeight * DotProduct( bone.matrix[0], v->normal );
                tempNormal[1] += fBoneWeight * DotProduct( bone.matrix[1], v->normal );
                tempNormal[2] += fBoneWeight * DotProduct( bone.matrix[2], v->normal );
            }

            // copy tranformed verts into temp space
            TransformedVerts[pos++] = tempVert[0];
            TransformedVerts[pos++] = tempVert[1];
            TransformedVerts[pos++] = tempVert[2];
            // we will need the S & T coors too for hitlocation and hitmaterial stuff
            TransformedVerts[pos++] = pTexCoords[j].texCoords[0];
            TransformedVerts[pos++] = pTexCoords[j].texCoords[1];

            v++;// = (mdxmVertex_t *)&v->weights[/*v->numWeights*/surface->maxVertBoneWeights];
        }
    }
}

I may be able to do things using intrinsics or asm volatile here.

It's a function used for character animation, so it's a good place to optimize.
 
IMO you should make a function that replaces the entire inner part of that loop. Do all six dot products and multiplications, and 9 additions. You'd completely avoid those 20+ cycle NEON to scalar register penalties, for one thing. Basically make this function:

Code:
void R_TransformBoneWeight(vec3_t tempVert, vec3_t tempNormal, float fBoneWeight,
 vec3_t vertCoords, vec3_t normal, float boneMatrix[3][4])
{
    tempVert[0] += fBoneWeight * ( DotProduct( boneMatrix[0], vertCoords ) + boneMatrix[0][3] );
    tempVert[1] += fBoneWeight * ( DotProduct( boneMatrix[1], vertCoords ) + boneMatrix[1][3] );
    tempVert[2] += fBoneWeight * ( DotProduct( boneMatrix[2], vertCoords ) + boneMatrix[2][3] );

    tempNormal[0] += fBoneWeight * DotProduct( boneMatrix[0], normal );
    tempNormal[1] += fBoneWeight * DotProduct( boneMatrix[1], normal );
    tempNormal[2] += fBoneWeight * DotProduct( boneMatrix[2], normal );
}
First expand this into all the scalar operations:

Code:
void R_TransformBoneWeight(vec3_t tempVert, vec3_t tempNormal, float fBoneWeight,
 vec3_t vertCoords, vec3_t normal, float boneMatrix[3][4])
{
    float dp0, dp1, dp2, dp3, dp4, dp5;

    dp0 = boneMatrix[0][0] * vertCoords[0];
    dp0 += boneMatrix[0][1] * vertCoords[1];
    dp0 += boneMatrix[0][2] * vertCoords[2];
    dp0 *= fBoneWeight;
    dp0 += boneMatrix[0][3];
    tempVert[0] += dp0;

    dp1 = boneMatrix[1][0] * vertCoords[0];
    dp1 += boneMatrix[1][1] * vertCoords[1];
    dp1 += boneMatrix[1][2] * vertCoords[2];
    dp1 *= fBoneWeight;
    dp1 += boneMatrix[1][3];
    tempVert[1] += dp1;

    dp2 = boneMatrix[2][0] * vertCoords[0];
    dp2 += boneMatrix[2][1] * vertCoords[1];
    dp2 += boneMatrix[2][2] * vertCoords[2];
    dp2 *= fBoneWeight;
    dp2 += boneMatrix[2][3];
    tempVert[2] += dp2;

    dp3 = boneMatrix[0][0] * normal[0];
    dp3 += boneMatrix[0][1] * normal[1];
    dp3 += boneMatrix[0][2] * normal[2];
    dp3 *= fBoneWeight;
    tempNormal[0] += dp3;

    dp4 = boneMatrix[1][0] * normal[0];
    dp4 += boneMatrix[1][1] * normal[1];
    dp4 += boneMatrix[1][2] * normal[2];
    dp4 *= fBoneWeight;
    tempNormal[1] += dp4;

    dp5 = boneMatrix[2][0] * normal[0];
    dp5 += boneMatrix[2][1] * normal[1];
    dp5 += boneMatrix[2][2] * normal[2];
    dp5 *= fBoneWeight;
    tempNormal[2] += dp5;
}
Now interleave the operations.

Code:
void R_TransformBoneWeight(vec3_t tempVert, vec3_t tempNormal, float fBoneWeight,
 vec3_t vertCoords, vec3_t normal, float boneMatrix[3][4])
{
    float dp0, dp1, dp2, dp3, dp4, dp5;

    dp0 = boneMatrix[0][0] * vertCoords[0];
    dp3 = boneMatrix[0][0] * normal[0];
    dp1 = boneMatrix[1][0] * vertCoords[0];
    dp4 = boneMatrix[1][0] * normal[0];
    dp2 = boneMatrix[2][0] * vertCoords[0];
    dp5 = boneMatrix[2][0] * normal[0];

    dp0 += boneMatrix[0][1] * vertCoords[1];
    dp3 += boneMatrix[0][1] * normal[1];
    dp1 += boneMatrix[1][1] * vertCoords[1];
    dp4 += boneMatrix[1][1] * normal[1];
    dp2 += boneMatrix[2][1] * vertCoords[1];
    dp5 += boneMatrix[2][1] * normal[1];

    dp0 += boneMatrix[0][2] * vertCoords[2];
    dp3 += boneMatrix[0][2] * normal[2];
    dp1 += boneMatrix[1][2] * vertCoords[2];
    dp4 += boneMatrix[1][2] * normal[2];
    dp2 += boneMatrix[2][2] * vertCoords[2];
    dp5 += boneMatrix[2][2] * normal[2];

    dp0 *= fBoneWeight;
    dp3 *= fBoneWeight;
    dp1 *= fBoneWeight;
    dp4 *= fBoneWeight;
    dp2 *= fBoneWeight;
    dp5 *= fBoneWeight;

    dp0 += boneMatrix[0][3];
    dp1 += boneMatrix[1][3];
    dp2 += boneMatrix[2][3];
    tempNormal[0] += dp3;
    tempNormal[1] += dp4;
    tempNormal[2] += dp5;

    tempVert[0] += dp0;
    tempVert[1] += dp1;
    tempVert[2] += dp2;
}
And vectorize, noting that NEON has support for scalar * vector multiplication and instructions for interleaving variables:

Code:
void R_TransformBoneWeight(vec3_t tempVert, vec3_t tempNormal, float fBoneWeight,
 vec3_t vertCoords, vec3_t normal, float boneMatrix[3][4])
{
    float_x2 dp03;
    float_x2 dp14;
    float_x2 dp25;
    float_x2 vertNorm_0 = { vertCoords[0], normal[0] };
    float_x2 vertNorm_1 = { vertCoords[1], normal[1] };
    float_x2 vertNorm_2 = { vertCoords[2], normal[2] };
    float_x2 boneNorm_0 = { boneMatrix[0][3], tempNormal[0] };
    float_x2 boneNorm_1 = { boneMatrix[1][3], tempNormal[1] };
    float_x2 boneNorm_2 = { boneMatrix[2][3], tempNormal[2] };

    dp03 = boneMatrix[0][0] * vertNorm_0;
    dp14 = boneMatrix[1][0] * vertNorm_0;
    dp25 = boneMatrix[2][0] * vertNorm_0;

    dp03 += boneMatrix[0][1] * vertNorm_1;
    dp14 += boneMatrix[1][1] * vertNorm_1;
    dp25 += boneMatrix[2][1] * vertNorm_1;

    dp03 += boneMatrix[0][2] * vertNorm_2;
    dp14 += boneMatrix[1][2] * vertNorm_2;
    dp25 += boneMatrix[2][2] * vertNorm_2;

    dp03 *= fBoneWeight;
    dp14 *= fBoneWeight;
    dp25 *= fBoneWeight;

    dp03 += boneNorm_0;
    dp14 += boneNorm_1;
    dp25 += boneNorm_2;

    dp01, dp34 = deinterleave(dp03, dp14)
    dp12, dp45 = deinterleave(dp14, dp25)

    dp01 += tempVert_01;
    dp12[1] += tempVert[2];

    tempVert_01 = dp01;
    tempVert[2] = dp12[1];

    tempNormal[0] = dp34[0];
    tempNormal_12 = dp45;
}
This is just a rough idea, not really carefully thought through and certainly not tested to be right, but maybe it'll give you some ideas. Unfortunately this level of parallelism still isn't enough to cover the large latency of those floating-point multiply-accumulate instructions.
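[Editor's illustration, not part of the original post: the "scalar * vector" and deinterleaving operations mentioned above, spelled as intrinsics. The function and its values are made up.]

Code:
#include <arm_neon.h>

/* Hypothetical demo of scalar*vector multiply, scalar multiply-accumulate
 * and deinterleaving on float pairs. */
static float32x2_t interleave_demo(float32x2_t dp03, float32x2_t dp14, float s)
{
    dp03 = vmul_n_f32(dp03, s);           /* vector * scalar                  */
    dp14 = vmla_n_f32(dp14, dp03, s);     /* dp14 += dp03 * scalar            */

    /* deinterleave: val[0] = {dp03[0], dp14[0]}, val[1] = {dp03[1], dp14[1]} */
    float32x2x2_t unzipped = vuzp_f32(dp03, dp14);
    return vadd_f32(unzipped.val[0], unzipped.val[1]);
}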
 
@Exophase your last post is interesting; I would find it interesting to see a full worked-through example (similar to the above) of something you have solved with NEON optimisation (like if, in your DS emulator, the geometry engine in the C implementation showed a slowdown whilst lighting normals, so you took the original code X, reworked it to Y, then interleaved it to Z, then made use of NEON/CPU instructions to get to W, which you then profiled to show a bottleneck with L2 cache, so you ended up with T). I have a feeling others would also be interested in this; if you ever fancied writing anything like this (whether for pandoralive, a forum post, or something on your website) it would be nice to read.
 
Yes, that interleaving concept is interesting, and I clearly don't think like that as a first approach.

On this subject, I have made some progress; now, at least, my code doesn't seem to crash anymore. I'll copy/paste my code here for review. I'll start with the simple things first; I haven't done the big one above yet (as the simple ones were not working).
 
I could maybe try something like that sometime, but the thing I want to stress most is that the biggest part of the work is often reworking the algorithms so that things can happen in big batches, rather than converting the functions to NEON code.
 
Here's an example of some NEON code from notaz' SDL; it could help to get you started (in this case it is code to blit surfaces with alpha-blending, 8 pixels at a time):

http://notaz.gp2x.de/cgi-bin/gitweb.cgi?p=sdl_omap.git;a=blob;f=src/video/SDL_blit_neon.S;h=979bb2a0837e34572df07c3fa1937569d6ceaf08;hb=HEAD
 
Here is a first try at inline NEON asm code. It's just a 1/sqrtf. I will not gain anything with that, but it's simple enough to start learning on.


inline float Q_rsqrt( float f ) {
 float ret;
 asm volatile (
    "vmov.32        s0, %1        \n\t"
    "vdup.32        d0, d0[0]    \n\t"
    "vmov.64        d1, d0        \n\t"
    "vrsqrte.f32    d0, d0        \n\t"
    "vmul.f32        d2, d0, d0    \n\t"
    "vrsqrts.f32    d1, d1, d2    \n\t"
    "vmul.f32        d0, d0, d1    \n\t"
    
    "vmov.32        %0, s0        \n\t"
    :"+&r" (ret), "+&r" (f)
    :
    :"d0", "d1", "d2"
 );
 return ret;
}
 

I had some difficulties understanding the "input" and "output" parameters. The output list is the "+&r" (ret): it says that "ret", which will be accessed as "%0" in the asm code, is read & write ("+"), must use its own register ("&"), and is a general register ("r"). The input list is empty here; I could have put "f" in it to be cleaner.

The clobber list "d0", "d1", "d2" lists everything that will be touched in the proc. Should I put flags there too? And maybe "memory" as well?

Another point I struggled with was how a float is passed. Well, the "f" value is passed in a register (not by address), so I used a vmov to copy from the general register Rx to the single-precision register Sx.

The "vdup" could probably be removed; I don't really use it.

Any advice? Is anything not correct in this?
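[Editor's sketch, not from the thread: the same 1/sqrtf written with intrinsics, so the compiler does the register allocation and the softfp moves itself. The function name is made up.]

Code:
#include <arm_neon.h>

static inline float Q_rsqrt_intrin( float f ) {
    float32x2_t x   = vdup_n_f32(f);      /* broadcast f to both lanes       */
    float32x2_t est = vrsqrte_f32(x);     /* rough 1/sqrt(f) estimate        */
    /* one Newton-Raphson step: est *= (3 - x*est*est) / 2                   */
    est = vmul_f32(est, vrsqrts_f32(vmul_f32(x, est), est));
    return vget_lane_f32(est, 0);
}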
 
Another one is the multiply-by-scalar-and-add of a vector.

Knowing the typedef:

typedef float vec3_t[3];


I have:

#ifdef NEON
inline void VectorMA( const vec3_t veca, float scale, const vec3_t vecb, vec3_t vecc) {
        asm volatile (
        "vld1.32                {d0}, [%0]                  \n\t"   //d0={x0,y0}
        "flds                   s2, [%0, #8]                        \n\t"   //d1[0]={z0}
        "vld1.32                {d2}, [%2]                      \n\t"   //d2={x1,y1}
        "flds                   s6, [%2, #8]                       \n\t"   //d3[0]={z1}
        "vmov.32                s8, %1                            \n\t"
        "vdup.f32                d4, d4[0]                        \n\t"    //d4=scale
        
        "vmla.f32                d0, d2, d4                        \n\t"
        "vmla.f32                d1, d3, d4                        \n\t"
        "vst1.32                d0, [%3]                        \n\t"
        "fsts                   s2, [%3, #8]                       \n\t"   //
        : "+&r"(veca), "+&r"(scale), "+&r"(vecb), "+&r" (vecc):
        : "d0", "d1", "d2", "d3", "d4", "memory"        
        );
#else
inline void VectorMA( const vec3_t veca, float scale, const vec3_t vecb, vec3_t vecc) {
    vecc[0] = veca[0] + scale*vecb[0];
    vecc[1] = veca[1] + scale*vecb[1];
    vecc[2] = veca[2] + scale*vecb[2];
#endif
}


Not sure I get a lot of perf out of that; as Exophase pointed out, you have to rethink the loops in the code to get the benefits of NEON...
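[Editor's sketch, not ptitSeb's code: the same VectorMA with intrinsics. The _intrin suffix is made up; the last element is handled as a scalar because vec3_t is only 3 floats, so a 2-lane store at vecc+2 could write one float past the array.]

Code:
#include <arm_neon.h>

/* veca, vecb, vecc are vec3_t, i.e. float[3] */
static inline void VectorMA_intrin( const float *veca, float scale,
                                    const float *vecb, float *vecc ) {
    float32x2_t a01 = vld1_f32(veca);                /* {a.x, a.y}           */
    float32x2_t b01 = vld1_f32(vecb);                /* {b.x, b.y}           */
    vst1_f32(vecc, vmla_n_f32(a01, b01, scale));     /* c.xy = a.xy + s*b.xy */
    vecc[2] = veca[2] + scale * vecb[2];             /* scalar tail for .z   */
}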

I also have a (finally) working DotProduct:

#ifdef NEON
inline vec_t DotProduct( vec3_t v1, vec3_t v2 ) {
vec_t ret;
        asm volatile (
        "vld1.32                {d2}, [%1]                      \n\t"   //d2={x0,y0}
        "flds                   s6, [%1, #8]                    \n\t"   //d3[0]={z0}
        "vld1.32                {d4}, [%2]                      \n\t"   //d4={x1,y1}
        "flds                   s10, [%2, #8]                   \n\t"   //d5[0]={z1}

        "vmul.f32               d0, d2, d4                      \n\t"   //d0= d2*d4
        "vpadd.f32              d0, d0, d0                      \n\t"   //d0 = d[0] + d[1]
        "vmla.f32               d0, d3, d5                      \n\t"   //d0 = d0 + d3*d5
        "vmov.f32                %0, s0                            \n\t"
//        "fsts                    s0, [%0]                        \n\t"
        : [ret] "+&r" (ret), "+&r"(v1), "+&r"(v2) :
        : "d0","d1","d2","d3","d4","d5", "memory"
        );      
        return ret;
    
}
#else
inline vec_t DotProduct( const vec3_t v1, const vec3_t v2 ) {
    return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2];
}
#endif


It's unclean code, but it works. I'm not sure it's optimal; it probably isn't.
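[Editor's sketch, not from the post: an intrinsics version of the same DotProduct. Note it still ends with a NEON-to-ARM transfer for the scalar result, which is the penalty Exophase mentioned, so a single dot product may not beat the plain C version.]

Code:
#include <arm_neon.h>

static inline float DotProduct_intrin( const float *v1, const float *v2 ) {
    float32x2_t a = vld1_f32(v1);                    /* {x0, y0}             */
    float32x2_t b = vld1_f32(v2);                    /* {x1, y1}             */
    float32x2_t d = vmul_f32(a, b);                  /* {x0*x1, y0*y1}       */
    d = vpadd_f32(d, d);                             /* x0*x1 + y0*y1        */
    return vget_lane_f32(d, 0) + v1[2] * v2[2];      /* add the z term       */
}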
 
Some follow-up. I did some benchmarking of the procedures. I wrote a very simple loop (yes, gcc will probably optimize that loop), filling a large array of vec3_t (100000 elements), and doing VectorMA, DotProduct, CrossProduct and VectorNormalizeFast.

Without the neon assembly, I get:


Chrono VectorMA           ... 0.182495 ms
Chrono VectorNormalizeFast... 0.414642 ms
Chrono DotProduct         ... 0.155915 ms
Chrono CrossProduct       ... 0.167572 ms


With neon inline assembly, I get:


Chrono VectorMA           ... 0.109833 ms
Chrono VectorNormalizeFast... 0.224914 ms
Chrono DotProduct         ... 0.183228 ms
Chrono CrossProduct       ... 0.101227 ms


Conclusion: I keep all but the DotProduct assembly version!
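[The benchmark harness itself isn't shown in the thread; below is an editor's sketch of the kind of loop described, with the array size from the post but the fill values, timing method and names being assumptions.]

Code:
#include <stdio.h>
#include <sys/time.h>

#define N 100000
typedef float vec3_t[3];

/* plain C reference; swap in the NEON version to compare */
static inline float DotProduct( const vec3_t v1, const vec3_t v2 ) {
    return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2];
}

int main(void)
{
    static vec3_t a[N], b[N];
    static float  r[N];
    struct timeval t0, t1;
    int i;

    for (i = 0; i < N; i++) {                  /* fill with arbitrary values */
        a[i][0] = b[i][2] = (float)i;
        a[i][1] = b[i][1] = 1.0f;
        a[i][2] = b[i][0] = 0.5f;
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < N; i++)
        r[i] = DotProduct(a[i], b[i]);
    gettimeofday(&t1, NULL);

    printf("Chrono DotProduct         ... %f ms\n",
           (t1.tv_sec - t0.tv_sec) * 1000.0 +
           (t1.tv_usec - t0.tv_usec) / 1000.0);
    return (int)r[N - 1];                      /* keep the results live */
}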
 
Last follow-up before I go to bed.

I actually followed Exophase's idea and created the function. I used intrinsics, as the syntax is much easier, and it's very close to what Exophase gave.

Here is the working function:


void R_TransformBoneWeight(vec3_t tempVert, vec3_t tempNormal, float fBoneWeight,
 vec3_t vertCoords, vec3_t normal, float boneMatrix[3][4])
{
    float32x2_t dp03;
    float32x2_t dp14;
    float32x2_t dp25;
    float32x2_t vertNorm_0 = { vertCoords[0], normal[0] };
    float32x2_t vertNorm_1 = { vertCoords[1], normal[1] };
    float32x2_t vertNorm_2 = { vertCoords[2], normal[2] };
    float32x2_t boneNorm_0 = { boneMatrix[0][3], tempNormal[0] };
    float32x2_t boneNorm_1 = { boneMatrix[1][3], tempNormal[1] };
    float32x2_t boneNorm_2 = { boneMatrix[2][3], tempNormal[2] };
 
    dp03 = boneMatrix[0][0] * vertNorm_0;
    dp14 = boneMatrix[1][0] * vertNorm_0;
    dp25 = boneMatrix[2][0] * vertNorm_0;
 
    dp03 += boneMatrix[0][1] * vertNorm_1;
    dp14 += boneMatrix[1][1] * vertNorm_1;
    dp25 += boneMatrix[2][1] * vertNorm_1;
 
    dp03 += boneMatrix[0][2] * vertNorm_2;
    dp14 += boneMatrix[1][2] * vertNorm_2;
    dp25 += boneMatrix[2][2] * vertNorm_2;
 
    dp03 *= fBoneWeight;
    dp14 *= fBoneWeight;
    dp25 *= fBoneWeight;
 
    dp03 += boneNorm_0;
    dp14 += boneNorm_1;
    dp25 += boneNorm_2;
    
    float32x2_t dp01 = {dp03[0], dp14[0]};
    float32x2_t dp45 = {dp14[1], dp25[1]};
    
    dp01 += vld1_f32(tempVert);
    dp25[0] += tempVert[2];
 
    vst1_f32(tempVert, dp01);
    tempVert[2] = dp25[0];
 
    tempNormal[0] = dp03[1];
    vst1_f32(tempNormal+1, dp45);
}
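[Editor's illustration, not from the post: with this function, the inner weight loop of R_TransformEachSurface quoted earlier collapses to a single call per weight.]

Code:
for ( k = 0 ; k < iNumWeights ; k++ )
{
    int   iBoneIndex  = G2_GetVertBoneIndex( v, k );
    float fBoneWeight = G2_GetVertBoneWeight( v, k, fTotalWeight, iNumWeights );

    const mdxaBone_t &bone=EvalBoneCache(piBoneReferences[iBoneIndex],boneCache);

    // six dot products, weightings and accumulations in one NEON call;
    // the cast is only needed because boneMatrix isn't declared const above
    R_TransformBoneWeight( tempVert, tempNormal, fBoneWeight,
                           v->vertCoords, v->normal,
                           (float (*)[4])bone.matrix );
}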


Then I added a quick test to ensure the calculations look good (they do, but I must admit my test case is not really good), and I added it to the bench suite.

Results without NEON:


Chrono Transf. Bone Weight... 0.345337 ms


With NEON, it's:


Chrono Transf. Bone Weight... 0.185058 ms

So, almost 2x faster!! :eek:

Tomorrow I'll plug that into Jedi Knight; it may help when many guys are shooting at you :D !
 
I had to try before going to bed... It works. It *seems* to make things less slow when many guys are active on screen. Things get under 15 fps when more than 4 to 5 guys are active. It slows down a bit less now, it seems.

I think I'll stop trying to squeeze more fps out of it soon. I'll probably release it (Jedi Knight II) this weekend.
 
@ptitSeb very interesting, it is nice to see some positive results/wins - it is great when theory translates into practice.

How 'good' is the current GCC we are using at handling intrinsics? I know that in theory intrinsics should be preferable to inline assembly, as you give the compiler more freedom in terms of register loads/stores and room for other optimizations. I don't know if notaz/Exophase have any real-world comparison of inline asm vs intrinsics, showing how theory translates into practice.
 