Laurent said:
Has Wiz been mentioned in this thread? If so, I have some bad news: the generated code is using some instructions that are not supported by the older processor in the Wiz.

Yes, I mentioned that already. Anyone who thinks this will run well on Wiz obviously hasn't tried it.
Ari64 said:
I was surprised how much difference the movw/movt made; I guess the constant pool was causing a lot of cache misses. Maybe it's not surprising, since it had to bring in a whole 64-byte cache line but typically read only one or two constants.

Laurent said:
I'm not surprised. The problems with constant pools are several:
- the cache line has to be in both the I-cache (since it sits next to code) and the D-cache (since you're loading from it)
- you have to allocate both an I-TLB entry and a D-TLB entry for those lines

Using movw/movt removes the need for the D-cache line and the D-TLB entry, and given that a D-TLB entry maps only 4 KB, you really don't want to waste one.

BTW, this suddenly makes me think of hugetlbfs. IIRC ARM kernels don't support it...
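To make the literal-pool vs. movw/movt comparison concrete, here is a minimal sketch of how a recompiler could emit the pair for an arbitrary 32-bit constant instead of an ldr rd, [pc, #offset] load from a constant pool. The instruction encodings follow the ARMv7-A manual; the helper names and the bare output pointer are illustrative only, not Ari64's actual emitter.

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch of emitting ARMv7 MOVW/MOVT instead of a literal-pool load.
   Encodings per the ARMv7-A reference (cond = AL):
     MOVW: 0xE3000000 | imm4<<16 | Rd<<12 | imm12
     MOVT: 0xE3400000 | imm4<<16 | Rd<<12 | imm12
   "out" is a stand-in for the translation cache write pointer. */
static uint32_t *out;

static void emit32(uint32_t insn) { *out++ = insn; }

static void emit_movw(int rd, uint16_t imm)
{
    emit32(0xE3000000u | ((uint32_t)(imm >> 12) << 16) | ((uint32_t)rd << 12) | (imm & 0xFFFu));
}

static void emit_movt(int rd, uint16_t imm)
{
    emit32(0xE3400000u | ((uint32_t)(imm >> 12) << 16) | ((uint32_t)rd << 12) | (imm & 0xFFFu));
}

/* Load any 32-bit constant in at most two instructions: no data load,
   so no D-cache line and no D-TLB entry gets touched. */
static void emit_load_const(int rd, uint32_t value)
{
    emit_movw(rd, (uint16_t)(value & 0xFFFF));
    if (value >> 16)
        emit_movt(rd, (uint16_t)(value >> 16));
}

int main(void)
{
    uint32_t buf[2] = { 0, 0 };
    out = buf;
    emit_load_const(0, 0x12345678u);      /* r0 = 0x12345678 */
    printf("%08x %08x\n", (unsigned)buf[0], (unsigned)buf[1]);
    return 0;
}
```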
Ari64 said:
The A8 supposedly has 4K, 64K, 1M, and 16M TLB entries. If it really used 4K entries for everything, that would be bad.

Linux always uses 4 KB pages on ARM... and also on x86, unless you use hugetlbfs.
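As an aside on hugetlbfs: on Linux kernels that do support it (and with huge pages reserved through /proc/sys/vm/nr_hugepages), a process can request a huge-page-backed mapping roughly as in the sketch below. This is a generic Linux illustration, not something the ARM kernels being discussed could do, and MAP_HUGETLB itself needs a fairly recent kernel.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

/* Hedged sketch: ask for an anonymous huge-page mapping via MAP_HUGETLB.
   Requires a kernel built with hugetlbfs support and huge pages reserved
   (echo N > /proc/sys/vm/nr_hugepages); a single 2 MB page replaces 512
   separate 4 KB TLB entries. */
int main(void)
{
    size_t len = 2 * 1024 * 1024;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");  /* expected on kernels without support */
        return 1;
    }
    printf("huge-page mapping at %p\n", p);
    munmap(p, len);
    return 0;
}
```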
Have you looked at glN64? I don't think it has any shader code, but it has a combiner compiler that should make it pretty straightforward to generate shaders with. I wouldn't consider a lack of frameskip that much of an immediate concern. It shouldn't be that difficult to add later.

Adventus said:
The Combiner "compiler" in glN64 is using glTexEnv() commands. You might have them confused with blend modes? They're much more flexible than blend modes, but you're right that there probably isn't a one-to-one mapping between them and the N64 combiners. Still, they're obviously close enough to be used in glN64 and rice_video. I have compiled glN64 with my wrapper without too much pain and I've got some 2D graphics to show... I will see if I can get anything more over the weekend.

I'm definitely talking about the color combiner emulation. What glN64 does is convert the combiner operations to a different symbolic format, then perform algebraic transformations to try to simplify the one or two passes into something more likely to fit on a fixed-function video card. Then it passes them to glTexEnv() or the nVidia combiner extensions. The point I was getting at is that it wouldn't be that hard to take the output of that compiler and generate shaders with it, especially for you, since you've already done things like this.

Adventus said:
Ahhh right. Sorry for the confusion; as usual I missed the point by a big margin. I guess I could gain performance since multipass combiners would become singlepass, but I would like to know how often these actually occur first...

It also performs substitutions/reductions for things like zeros and ones... really, it's stuff you'd want to do no matter what, but it's already done, so it should be pretty easy to generate a shader equation from it. I bet you could do it in a few hours, tops.

Adventus said:
I would assume the shader compiler is already doing those optimisations, but I may be wrong. Obviously, I'll focus on getting it running first...
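For context on what "generate shaders from the combiner compiler's output" might look like: the N64 color combiner evaluates (A - B) * C + D, so once the combiner equation has been reduced to that symbolic form, producing a GLSL ES fragment shader is mostly string building. The sketch below is a toy illustration with made-up input names (texel0, shade, and so on); it is not glN64's code or Adventus's wrapper.

```c
#include <stdio.h>

/* Toy sketch: the N64 color combiner computes (A - B) * C + D.  Given the
   symbolic inputs left over after the combiner "compiler" has simplified a
   combine mode, build a GLSL ES fragment shader string.  Input names here
   (texel0, shade, prim_color, env_color) are illustrative. */
static void build_combiner_shader(char *buf, size_t bufsize,
                                  const char *a, const char *b,
                                  const char *c, const char *d)
{
    snprintf(buf, bufsize,
        "precision mediump float;\n"
        "uniform sampler2D texture0;\n"
        "uniform vec4 prim_color;\n"
        "uniform vec4 env_color;\n"
        "varying vec2 texcoord0;\n"
        "varying vec4 shade;\n"
        "void main() {\n"
        "    vec4 texel0 = texture2D(texture0, texcoord0);\n"
        "    gl_FragColor = (%s - %s) * %s + %s;\n"   /* the combiner equation */
        "}\n",
        a, b, c, d);
}

int main(void)
{
    char src[1024];
    /* e.g. the very common "texture modulated by shade" mode */
    build_combiner_shader(src, sizeof src, "texel0", "vec4(0.0)",
                          "shade", "vec4(0.0)");
    fputs(src, stdout);   /* a real renderer would hand src to glShaderSource() */
    return 0;
}
```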
Exophase said:
I wonder where all the rest of your CPU cycles are going at a max of 18M emulated instructions per second? Maybe you'll find a good breakdown of it.

Ari64 said:
Still trying to figure this out. 18 MIPS and 1 MFLOPS shouldn't be this hard. There don't seem to be a lot of L2 misses, given how well it scales with increasing CPU clock frequency. There does seem to be some L1 I-cache pressure, though. Compiling a smaller number of larger blocks helps somewhat, so that is what I'm looking into next.
Laurent said:
If it scales well with CPU frequency, why do you think L1 I-cache pressure is high?

Exophase said:
Probably because L2->L1 linefill performance scales with CPU frequency. Although L2 miss time would have a component that scales with the CPU clock too. Not as big as the DDR latency component, though.

Ari64, interested in trying oprofile for us? I might try it on x86 Linux, but I don't yet know how to make it only turn on/report performance counters and not do other profiling.
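For reference, one way to get at just the hardware counters without a full profiling run is the Linux perf_event_open interface rather than oprofile's opcontrol workflow. The sketch below counts retired instructions and L1 I-cache misses around a region of code; it assumes a kernel with perf events, and whether the L1I miss event is wired up depends on the CPU and kernel.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Sketch: count retired instructions and L1 I-cache misses around a region
   of code with perf_event_open -- a different route to the same PMU counters
   oprofile reads, with no system-wide profiling needed. */
static int open_counter(uint32_t type, uint64_t config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = type;
    attr.config = config;
    attr.disabled = 1;          /* start stopped, enable around the region */
    attr.exclude_kernel = 1;
    /* this process, any CPU, no group, no flags */
    return (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    int instr = open_counter(PERF_TYPE_HARDWARE, PERF_COUNT_HW_INSTRUCTIONS);
    int imiss = open_counter(PERF_TYPE_HW_CACHE,
                             PERF_COUNT_HW_CACHE_L1I |
                             (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                             (PERF_COUNT_HW_CACHE_RESULT_MISS << 16));
    if (instr < 0 || imiss < 0) {
        perror("perf_event_open");   /* old kernel or unsupported event */
        return 1;
    }

    ioctl(instr, PERF_EVENT_IOC_ENABLE, 0);
    ioctl(imiss, PERF_EVENT_IOC_ENABLE, 0);

    /* ... run the code being measured here, e.g. one emulated frame ... */

    ioctl(instr, PERF_EVENT_IOC_DISABLE, 0);
    ioctl(imiss, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t n_instr = 0, n_imiss = 0;
    if (read(instr, &n_instr, sizeof n_instr) < 0 ||
        read(imiss, &n_imiss, sizeof n_imiss) < 0)
        perror("read");
    printf("instructions: %llu  L1I misses: %llu\n",
           (unsigned long long)n_instr, (unsigned long long)n_imiss);
    return 0;
}
```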
The L2 cache runs at a fixed ratio to the CPU clock, so when you increase the core clock the L2 speeds up too. Also, the PLD instructions aren't helping much. (The x86 PREFETCH is a lot more effective since it fetches to L1. PLD on the A8 only fetches to L2, so it doesn't help where the L2 miss rate is low.)

Laurent said:
I still don't understand why, if the program scales with frequency, one can deduce that I-cache pressure is an issue. If that were the case, it shouldn't scale, as your I-cache loads would be the bottleneck.
Ari64 said:
The reason I suspect I-cache pressure is that reducing the number of instructions improves performance.

Ha, in fact I somehow did not see how you had come to that conclusion in your original post. So we agree.
In fact, removing the PLD instructions actually increased performance, which was unexpected and the opposite of what happens on other CPUs.

That's odd: Mans Rullgard, who writes the NEON code for FFmpeg, said PLD increased performance (you can take a look at the FFmpeg git). You should probably schedule these preloads well in advance.
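On scheduling preloads well in advance: the usual pattern is to prefetch data several cache lines ahead of the current position so the line is already in flight when it is reached. Below is a hedged sketch using GCC's __builtin_prefetch (which becomes PLD on ARM); as noted above, on the A8 a PLD only fills L2, so this only pays off when the data would otherwise miss in L2. The prefetch distance is a made-up starting point to tune.

```c
#include <stddef.h>
#include <stdint.h>

/* Sum a large buffer while prefetching a few cache lines ahead of the
   current position.  __builtin_prefetch emits PLD on ARM.  The distance
   (256 bytes here) is a made-up starting point to tune; and since PLD on
   the A8 only fills L2, this only helps when the data would otherwise
   miss in L2 as well. */
uint32_t sum_with_prefetch(const uint32_t *buf, size_t n)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 64 < n)                       /* 64 words = 256 bytes ahead */
            __builtin_prefetch(&buf[i + 64]);
        sum += buf[i];
    }
    return sum;
}
```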
The Cortex-A8 can be configured with either 16K+16K or 32K+32K of L1 cache. TI chose the former, which seems like a very poor design decision.

(wild guess mode) OMAP3 was probably originally designed for smartphones (given that it was done by TI's wireless business unit, as the OMAP34xx), so their decisions made more sense. Of course, now even smartphones need more L1 cache; OMAP4 will have 32 KB + 32 KB of L1 and 1 MB of L2.