NEON scalers


I've also noticed you don't use preload (pld instruction) at all. This gave a good improvement to my SDL blitters, because the Cortex-A8 doesn't have streaming detection (the A9 does), and preloading around 2 cache lines ahead did a good job there. I tried to apply this to your code, but strangely it had no positive effect even on test buffers way larger than the L2 cache size.
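What I had in mind was something like this (just a sketch; the register names and the 128-byte lookahead are my guesses, not taken from your code):

Code:
pld     [src, #128]       @ hint: start fetching ~2 cache lines (2 * 64 bytes on Cortex-A8) ahead
ldr     r4, [src], #4     @ keep converting the current pixels while the prefetch is in flight
ldr     r5, [src], #4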

Have you tried with hugeTLBs? plds that miss the TLB do nothing. But I wouldn't expect this to hit all loads unless somehow your access pattern is very irregular. Even at 800 16bpp width a DTLB entry is still 2.5 scanlines.


If you're only doing stores the pld won't really help. If you're interleaving loads and stores and both are constant L2 misses then it'll probably have to drain the store buffer while the load miss is serviced. It's possible that a cache miss causes plds to catch up too, although that'd be pretty poor.
 
This is really good stuff!


Would it be possible to add hq3x? I think that one would be best for GBA emulation, although Scale3x is very good too and faster.


http://en.wikipedia.org/wiki/Hqx


http://www.hiend3d.com/hq3x.html


http://code.google.com/p/hqx/source/browse/trunk/src/hq3x.c


It needs a lookup table (for 3x there are 256 cases depending on the 8 surrounding pixels), and it does anti-aliasing along the way (unlike Scale3x): each case determines how each of the 9 pixels are interpolated from the original pixel and its neighbors. Still it might be possible to implement the filter efficiently. It seems to be more complicated than Scale3x though, so I would totally understand if you don't feel like implementing it in NEON. Also, I'm not sure if there's a way to NEONize it effectively.
 
In hq2x/hq3x/hq4x, there's almost no place where neon instructions can be used (for parallel computation). The only thing I can think of is combining the writes, but that gives only a very slight speed increase.


The same goes for other scalers - 2×SaI, Super 2×SaI, Super Eagle.
 
I haven't checked the code yet, might be fun to do a pass on it.

Finally, that 32bit palette with 16bit values is rather inconvenient to use; I assume you wanted to avoid address generation complexity and larger load latency?
I wanted to use only one instruction for loading a value from the palette:



Code:
ldr value, [palette, pixel, lsl #2]

With a 16-bit palette, more instructions are needed.
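For example, something like this (a sketch; the temp register is only for illustration), because ldrh has no shifted-register offset addressing mode:

Code:
add     temp, palette, pixel, lsl #1    @ scale the index by hand (entries are 2 bytes)
ldrh    value, [temp]                   @ value = pal[pixel]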


there's a trick I used a long time ago on GBA; you can actually read 16bit values as unaligned 32bit values:



Code:
ldr value, [palette, pixel, lsl #1]



its "wrong" but it works, I tried it out on Pandora, it still works:





Code:
sh@shpandy:~/tmp$ cat testalign.cpp
#include <stdio.h>
#include <stdlib.h>

int main()
{
    volatile int data = 0x12345678;

    printf("%x\n", *(int *)(((volatile char *)&data)+2) );

    return 0;
}
sh@shpandy:~/tmp$ ./testalign
1234
sh@shpandy:~/tmp$ gdb -q testalign
Reading symbols from /home/sh/tmp/testalign...(no debugging symbols found)...done.
(gdb) disas main
Dump of assembler code for function main:
0x00008508 <main+0>:  push {r11, lr}
0x0000850c <main+4>:  add r11, sp, #4
0x00008510 <main+8>:  sub sp, sp, #8
0x00008514 <main+12>: ldr r3, [pc, #44] ; 0x8548 <main+64>
0x00008518 <main+16>: str r3, [r11, #-8]
0x0000851c <main+20>: sub r3, r11, #8
0x00008520 <main+24>: add r3, r3, #2 <<----------- it's really reading misaligned, gcc didn't fix it
0x00008524 <main+28>: ldr r3, [r3]
0x00008528 <main+32>: ldr r0, [pc, #28] ; 0x854c <main+68>
0x0000852c <main+36>: mov r1, r3 <<---- gcc with -O0 is really awful
0x00008530 <main+40>: bl 0x8438 <printf>
...


on ARM7 TDMI you'd get the result "56781234" instead, but the lower 16 bits still end up with the correct value


you had to mask off the top somehow, which was fine if you did a strh later that discarded the "junk" data.
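something like this (sketch, made-up register names):

Code:
ldr     value, [palette, pixel, lsl #1]  @ low 16 bits = pal[pixel], the top 16 bits are junk
strh    value, [dst], #2                 @ strh only writes the low halfword, so the junk never reaches memory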


looks like on Cortex A8 we get the top bits cleared instead of rotated


you'll still get "junk" in the top part when reading word-aligned (even index) so if you can rework the code to ignore those top junk bits without penalty you'll get the convenience of a 16bit palette as 16bit values.


you can probably slip in one or two "free" instructions before something(s) that would otherwise stall (or idle in the 2nd issue pipe) since the Cortex A8 is a superscalar dual-issue microarchitecture.


what I don't know however is if this throws out a "data misaligned exception" on Pandora that the kernel/program ignores by default which would make everything crawl, you'll know right away if you try it out with an fps counter :)
 
ARMv7 doesn't perform the rotated unaligned loads the way the older ARMs do; instead it just supports straight-up unaligned accesses. Not on all instructions though.


However, on Cortex-A8 lsl #1 costs an extra cycle, so this trick is often not worth it :/
 
I am write-combining two palette values into one arm register and historically I needed the upper 16-bits to be zero, because I was using the orr instruction to combine the values (orr value1, value1, value2, lsl #16).


But I took a look at the source code and I realized I changed it to use the bfi instruction to combine the values (bfi value1, value2, #16, #16).
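To show the difference concretely (a sketch using the same illustrative names as above):

Code:
@ orr: the upper 16 bits of value1 must already be zero, otherwise they stay ORed into the result
orr     value1, value1, value2, lsl #16  @ value1 = value1 | (value2 << 16)
@ bfi: bits 16-31 of value1 are overwritten with bits 0-15 of value2,
@ so junk in the upper halves of either register doesn't matter
bfi     value1, value2, #16, #16         @ value1[31:16] = value2[15:0]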


That means I don't need the upper 16-bits to be zero and I could use the ldr instruction to load a 16-bit palette value (instead of 32-bit palette value), but like Exophase said unaligned memory access costs an extra cycle.


That would mean that, on average, every other palette access would be unaligned and cost an extra cycle.
 
The lsl #1 costs an extra cycle. I don't think there's a universal penalty for unaligned accesses, but an unaligned load gives you a massive 8 cycle penalty if it crosses a 16-byte boundary. That'd be one in eight palette entries, so I suppose the average penalty would be similar.


There's actually an instruction to combine two 16-bit halves (with optional shifting) called pkh. I don't know how it compares to bfi on Cortex-A8 because the TRM doesn't have cycle information for it, but I'd probably choose it first because bfi probably always uses the shifter and therefore may need extra latency on the source. If the TRM is right (hard to tell since it uses some weird names) pkh takes its source in E1, but also produces its result in E1, so it means you can dual issue it with instructions that take its result as a source, so long as they do so in E2. But if you're just storing it you might be able to dual issue that anyway since stores take their data source in E3.
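The equivalent combine with pkh would look something like this (just a sketch; note the shifted second operand, so the shifter still gets used on that source):

Code:
pkhbt   value1, value1, value2, lsl #16  @ value1 = value1[15:0] | value2[15:0] << 16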


As with anything like this I recommend trying different things and profiling it.


EDIT: This person seems to have done some testing: http://www.avison.me.../cortex-a8.html Apparently bfi takes its sources in E1 and E2 and produces its result in E2, which is about what I'd expect, since one of the sources needs shifting (normally done in E1). Also says PKH consumes source and produces dest in E2 if shifts aren't used, which is again what I'd normally expect (but is not what the TRM says).
 
The lsl #1 costs an extra cycle. I don't think there's a universal penalty for unaligned accesses, but an unaligned load gives you a massive 8 cycle penalty if it crosses a 16-byte boundary. That'd be one in eight palette entries, so I suppose the average penalty would be similar.

not if you "misalign" the palette to start at +14 (then the 4-byte loads only straddle a 16-byte boundary at entries 0, 8, 16, ...).


at least for SNES, you'd only get 1 in 16 entries in most cases since entry 16, 32, ..., is rarely used.


but then you waste a whole cache line just to contain entry #0 and that one could be used A LOT.


and that's a lot of work just to pack the palette :D


the other question would be how much do we gain/lose by using a 512byte palette VS a 1024byte palette in terms of cache use.


doing an ldrh and pkh might end up faster due to better cache use.
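roughly like this per pair of pixels (just a sketch with made-up register names, assuming a 16bit-entry palette):

Code:
add     t0, palette, pix0, lsl #1   @ scale the indices by hand, ldrh has no shifted-register offset
add     t1, palette, pix1, lsl #1
ldrh    lo, [t0]                    @ lo = pal[pix0]
ldrh    hi, [t1]                    @ hi = pal[pix1]
pkhbt   out, lo, hi, lsl #16        @ out = pal[pix0] | pal[pix1] << 16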


if you can interleave the write and reads you might get things to run faster:


(I'm assuming n_pixels is even)


{load pixels #0, #1 and palette look-up}
{load pixels #2, #3}
for(i = 4 to n_pixels, i += 2) {
    {interleave write of i-4, i-3, read of i+0, i+1, and palette look-up of i-2, i-1}
}
{look up pixels n_pixels-2, n_pixels-1}
{write of n_pixels-4, n_pixels-3}
{write of n_pixels-2, n_pixels-1}
 
Instead of speculating, why not read the code first ? :)


All palette work is done in the file neon_normalxx.Sinc.


There is a macro _neon_normalxx_8_16_line_middle which reads 16 source pixels (8-bit), looks up the palette values for all of them, and stores the result (16 16-bit pixels) in two pairs of NEON registers.

Code:
.macro _neon_normalxx_8_16_line_middle src, dst, pal, counter, reg1, reg2, reg3, reg4, reg5, reg6, reg7, reg8, reg9, dststride, dA, dB
    ldr \reg1, [\src] @ reg1 = src[0-3]
    ldr \reg2, [\src, #4] @ reg2 = src[4-7]
    ldr \reg3, [\src, #8] @ reg3 = src[8-11]
    ldr \reg4, [\src, #12] @ reg4 = src[12-15]
    ubfx \reg5, \reg1, #0, #8 @ reg5 = src[0]
    ldr \reg5, [\pal, \reg5, lsl #2] @ reg5 = pal[src[0]]
    ubfx \reg6, \reg1, #8, #8 @ reg6 = src[1]
    ldr \reg6, [\pal, \reg6, lsl #2] @ reg6 = pal[src[1]]
    ubfx \reg7, \reg1, #16, #8 @ reg7 = src[2]
    ldr \reg7, [\pal, \reg7, lsl #2] @ reg7 = pal[src[2]]
    lsr \reg1, \reg1, #24 @ reg1 = src[3]
    ldr \reg1, [\pal, \reg1, lsl #2] @ reg1 = pal[src[3]]
    ubfx \reg8, \reg2, #0, #8 @ reg8 = src[4]
    ldr \reg8, [\pal, \reg8, lsl #2] @ reg8 = pal[src[4]]
    ubfx \reg9, \reg2, #8, #8 @ reg9 = src[5]
    ldr \reg9, [\pal, \reg9, lsl #2] @ reg9 = pal[src[5]]
    bfi \reg5, \reg6, #16, #16 @ reg5 = pal[src[0]] | pal[src[1]] << 16
    bfi \reg7, \reg1, #16, #16 @ reg7 = pal[src[2]] | pal[src[3]] << 16
    ubfx \reg6, \reg2, #16, #8 @ reg6 = src[6]
    vmov d16, \reg5, \reg7 @ d16 = pal[src[0-3]]
    lsr \reg2, \reg2, #24 @ reg2 = src[7]
    ldr \reg6, [\pal, \reg6, lsl #2] @ reg6 = pal[src[6]]
    bfi \reg8, \reg9, #16, #16 @ reg8 = pal[src[4]] | pal[src[5]] << 16
    ldr \reg2, [\pal, \reg2, lsl #2] @ reg2 = pal[src[7]]
    ubfx \reg1, \reg3, #0, #8 @ reg1 = src[8]
    ldr \reg1, [\pal, \reg1, lsl #2] @ reg1 = pal[src[8]]
    ubfx \reg5, \reg3, #8, #8 @ reg5 = src[9]
    ldr \reg5, [\pal, \reg5, lsl #2] @ reg5 = pal[src[9]]
    ubfx \reg7, \reg3, #16, #8 @ reg7 = src[10]
    ldr \reg7, [\pal, \reg7, lsl #2] @ reg7 = pal[src[10]]
    bfi \reg6, \reg2, #16, #16 @ reg6 = pal[src[6]] | pal[src[7]] << 16
    vmov d17, \reg8, \reg6 @ d17 = pal[src[4-7]]
    lsr \reg3, \reg3, #24 @ reg3 = src[11]
    ldr \reg3, [\pal, \reg3, lsl #2] @ reg3 = pal[src[11]]
    ubfx \reg2, \reg4, #0, #8 @ reg2 = src[12]
    ldr \reg2, [\pal, \reg2, lsl #2] @ reg2 = pal[src[12]]
    ubfx \reg6, \reg4, #8, #8 @ reg6 = src[13]
    ldr \reg6, [\pal, \reg6, lsl #2] @ reg6 = pal[src[13]]
    ubfx \reg8, \reg4, #16, #8 @ reg8 = src[14]
    ldr \reg8, [\pal, \reg8, lsl #2] @ reg8 = pal[src[14]]
    lsr \reg4, \reg4, #24 @ reg4 = src[15]
    ldr \reg4, [\pal, \reg4, lsl #2] @ reg4 = pal[src[15]]
    bfi \reg1, \reg5, #16, #16 @ reg1 = pal[src[8]] | pal[src[9]] << 16
    bfi \reg7, \reg3, #16, #16 @ reg7 = pal[src[10]] | pal[src[11]] << 16
    bfi \reg2, \reg6, #16, #16 @ reg2 = pal[src[12]] | pal[src[13]] << 16
    vmov \dA, \reg1, \reg7 @ dA = pal[src[8-11]]
    sub \counter, \counter, #16 @ counter -= 16
    bfi \reg8, \reg4, #16, #16 @ reg8 = pal[src[14]] | pal[src[15]] << 16
    add \src, \src, #16 @ src += 16
    vmov \dB, \reg2, \reg8 @ dB = pal[src[12-15]]
    cmp \counter, #16
.endm




The macro is used in a loop in the macro neon_normal1x_8_16_line.

This macro converts one line (from 8-bit pixels to 16-bit pixels): first it word-aligns (4 bytes) the source (0-3 pixels), then it loops the first macro (16 pixels per iteration), and then it converts the remaining 0-15 pixels.




Code:
.macro neon_normal1x_8_16_line src, dst, pal, counter, reg1, reg2, reg3, reg4, reg5, reg6, reg7, reg8, reg9
    @ align src to 4 bytes
    andS \reg5, \src, #3 @ reg5 = src & 3
    beq 10f

    @ first 1-3 pixels
    ldr \reg1, [\src] @ reg1 = src[0-3]
    rsb \reg5, \reg5, #4 @ reg5 = 4 - (src & 3)
    add \src, \src, \reg5 @ src += reg5
    sub \counter, \counter, \reg5 @ counter -= reg5
    subS \reg5, \reg5, #1 @ reg5--
    ubfx \reg2, \reg1, #0, #8 @ reg2 = src[0]
    ubfxne \reg3, \reg1, #8, #8 @ reg3 = src[1]
    ldr \reg2, [\pal, \reg2, lsl #2] @ reg2 = pal[reg2]
    ldrne \reg3, [\pal, \reg3, lsl #2] @ reg3 = pal[reg3]
    strh \reg2, [\dst] @ dst[0] = reg2
    strneh \reg3, [\dst, #2]! @ dst[1] = reg3; dst++
    subneS \reg5, \reg5, #1 @ reg5--
    ubfxne \reg4, \reg1, #16, #8 @ reg4 = src[2]
    add \dst, \dst, #2 @ dst++
    ldrne \reg4, [\pal, \reg4, lsl #2] @ reg4 = pal[reg4]
    strneh \reg4, [\dst], #2 @ dst[2] = reg4; dst++

    @ middle pixels (16 per iteration)
10:
    _neon_normalxx_8_16_line_middle \src, \dst, \pal, \counter, \reg1, \reg2, \reg3, \reg4, \reg5, \reg6, \reg7, \reg8, \reg9, , d18, d19
    vst1.16 {d16-d19}, [\dst]! @ dst[0-15] = d16-d19; dst += 2*16
    bhs 10b

    @ last 0-15 bytes
    cmp \counter, #0
    beq 40f
    cmp \counter, #4
    blo 30f

    @ 4-12 pixels (4 per iteration)
20:
    ldr \reg1, [\src] @ reg1 = src[0-3]
    sub \counter, \counter, #4 @ counter -= 4
    add \src, \src, #4 @ src += 4
    add \dst, \dst, #(2*4) @ dst += 4
    ubfx \reg2, \reg1, #0, #8 @ reg2 = src[0]
    cmp \counter, #4
    ldr \reg2, [\pal, \reg2, lsl #2] @ reg2 = pal[src[0]]
    ubfx \reg3, \reg1, #8, #8 @ reg3 = src[1]
    ldr \reg3, [\pal, \reg3, lsl #2] @ reg3 = pal[src[1]]
    ubfx \reg4, \reg1, #16, #8 @ reg4 = src[2]
    ldr \reg4, [\pal, \reg4, lsl #2] @ reg4 = pal[src[2]]
    lsr \reg1, \reg1, #24 @ reg1 = src[3]
    ldr \reg1, [\pal, \reg1, lsl #2] @ reg1 = pal[src[3]]
    strh \reg2, [\dst, #-8] @ dst[0] = reg2
    strh \reg3, [\dst, #-6] @ dst[1] = reg3
    strh \reg4, [\dst, #-4] @ dst[2] = reg4
    strh \reg1, [\dst, #-2] @ dst[3] = reg1
    bhs 20b

    cmp \counter, #0
    beq 40f

    @ last 1-3 pixels
30:
    ldrb \reg1, [\src] @ reg1 = src[0]
    subS \counter, \counter, #1 @ counter--
    ldrneb \reg2, [\src, #1]! @ reg2 = src[1]; src++
    add \src, \src, #1 @ src++
    ldr \reg1, [\pal, \reg1, lsl #2] @ reg1 = pal[src[0]]
    ldrne \reg2, [\pal, \reg2, lsl #2] @ reg2 = pal[src[1]]
    strh \reg1, [\dst] @ dst[0] = reg1
    strneh \reg2, [\dst, #2]! @ dst[1] = reg2; dst++
    subneS \counter, \counter, #1 @ counter--
    ldrneb \reg3, [\src], #1 @ reg3 = src[2]; src++
    add \dst, \dst, #2 @ dst++
    ldrne \reg3, [\pal, \reg3, lsl #2] @ reg3 = pal[src[2]]
    strneh \reg3, [\dst], #2 @ dst[2] = reg3; dst++

40:
.endm


The macro neon_normal1x_8_16_line is then used in various scalers.


P.S. The formatting looks better in the source code.
 
code looks good.


I tried running some profiling test but my OpenPandora cross-build setup got borked when I reinstalled the OS on my laptop.


probably some lib, header or compiler version issue between what I'm compiling/linking against and what's actually on the pandora.


Since that's what I'm doing all week these days at work (investigating and fixing builds) it's going to take a while before I feel like doing more of that stuff on the weekend :p
 
Fascinating topic! Stephane Hockenhull - maybe you'd get by with CDEVTOOLS?
 