Laurent said:
I fully agree, and I think it's the reason why emulators should move to multicore. Of course there will always be one core that will be the hot spot (the one simulating the CPU), but if you can move some tasks to other cores (sound, graphics, etc.) and have low overhead synchronization it's a win.
Problem being, of course, that a lot of the synchronization overhead is shouldered by the OS and the hardware itself.
What's needed all depends on the synchronization demands of what's being emulated, which is of course a fuzzy number since it really comes down to how sensitive the software is to timing inaccuracy. For PCSX2, they've determined that switching between the emulated CPUs (R5900, VUs in micro mode, IOP) happens every 512 cycles. I think this is still too fine-grained for threads on different host cores to synchronize at without introducing more overhead than the emulation of the block itself plus the single-core task switch overhead. This might not strictly be the case; spinning on a timestamp counter should be fast enough, although it still puts loads out to whatever the shared cache is, so probably a few hundred clock cycles. But the single-core switching overhead is going to be quite high too, since it has to switch register sets.

DS games also require tight synchronization between their two CPUs just to boot - it's on the order of several dozen bus cycles.
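To make that overhead concrete, here's a minimal sketch of the timestamp-spinning approach (all names are mine, not from any real emulator): each emulated CPU runs on its own host thread, publishes a cycle counter, and stalls whenever it gets more than one sync window ahead of its partner.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical names throughout. Two emulated CPUs, each on its own host
// thread, are kept within SYNC_WINDOW cycles of each other by spinning on
// the partner's published cycle counter.
constexpr int64_t SYNC_WINDOW = 512;  // PCSX2-style switch granularity

struct CoreClock {
    // In a real build this would get its own cache line to avoid false sharing.
    std::atomic<int64_t> cycles{0};
};

// Stub standing in for the interpreter/JIT; returns cycles actually executed.
int64_t emulate_block() { return SYNC_WINDOW; }

void run_block(CoreClock& self, const CoreClock& other) {
    const int64_t mine = self.cycles.load(std::memory_order_relaxed);
    // Spin until the partner is within the window. Every poll is a load
    // hitting the shared cache, which is where the few-hundred-cycle cost
    // mentioned above comes from.
    while (mine - other.cycles.load(std::memory_order_acquire) > SYNC_WINDOW) {
        // busy-wait
    }
    self.cycles.store(mine + emulate_block(), std::memory_order_release);
}
```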
Graphics can benefit more, but a lot of the work for graphics is already offloaded to GPUs. On some platforms, like the DS, per-line synchronization is still necessary, and games really do depend on it. Since video is usually a write-only system this can be accomplished with queues instead of lockstep synchronization, but it can still get tricky - especially when dealing with large shared memories.
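As an illustration (purely hypothetical names, and a mutex/condvar where a real emulator would likely use something lighter), a command queue between the CPU thread and a video thread, with a per-line sync point for the scanline-dependent cases:

```cpp
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>

// Hypothetical sketch. The CPU thread pushes display commands into a queue
// the video thread drains; because video writes flow one way, the CPU thread
// only blocks at explicit per-line sync points rather than running lockstep.
struct VideoCmd { uint32_t op; uint32_t arg; };

class VideoQueue {
    std::mutex m;
    std::condition_variable cv;
    std::deque<VideoCmd> q;
    uint32_t lines_done = 0;

public:
    // Producer side (CPU thread): fire-and-forget.
    void push(VideoCmd c) {
        std::lock_guard<std::mutex> lk(m);
        q.push_back(c);
        cv.notify_all();
    }

    // Consumer side (video thread): grab the next command if there is one.
    bool pop(VideoCmd& out) {
        std::lock_guard<std::mutex> lk(m);
        if (q.empty()) return false;
        out = q.front();
        q.pop_front();
        return true;
    }

    // Video thread calls this after finishing each scanline.
    void mark_line_done() {
        std::lock_guard<std::mutex> lk(m);
        ++lines_done;
        cv.notify_all();
    }

    // CPU thread calls this only where the game actually depends on line n
    // having been rendered (the DS-style per-line case).
    void wait_for_line(uint32_t n) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return lines_done >= n; });
    }
};
```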
What I think would be really nice for emulation is a little more in the way of hardware assistance in CPUs. ARM would be especially smart to capitalize on this in order to accelerate x86 emulation; Loongson, for instance, has started down this path. But I don't really mean instructions that do what x86 does and ARM doesn't, I mean more general-purpose functions: for instance, hardware hash tables (i.e. software-defined TLBs), better access to flags (on x86 anyway), things like that.
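To show what a hardware hash table would buy, here's the kind of software-TLB lookup (direct-mapped for brevity; all names are mine) that every emulated load/store runs today, and that such an instruction could collapse into one or two ops:

```cpp
#include <cstdint>

// Hypothetical sketch of the software-TLB lookup a "hardware hash table"
// instruction could replace. Direct-mapped for brevity; 4 KiB guest pages.
constexpr uint32_t TLB_BITS = 12;    // 4096 entries
constexpr uint32_t PAGE_SHIFT = 12;

struct TlbEntry {
    uint32_t guest_page = 0;       // tag: guest address >> PAGE_SHIFT
    uint8_t* host_base = nullptr;  // host pointer backing that guest page
};

TlbEntry tlb[1u << TLB_BITS];

// The hot path on every emulated memory access: hash (here just a mask),
// tag compare, then either a host pointer or the slow page-walk path.
uint8_t* translate(uint32_t guest_addr) {
    const uint32_t page = guest_addr >> PAGE_SHIFT;
    const TlbEntry& e = tlb[page & ((1u << TLB_BITS) - 1)];
    if (e.host_base && e.guest_page == page)
        return e.host_base + (guest_addr & ((1u << PAGE_SHIFT) - 1));
    return nullptr;  // miss: walk the guest's page tables and refill
}
```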
On the synchronization front, one thing that would help is assistance for something like what Transmeta's Crusoe did. Accesses to shared state, i.e. loads and stores, went to an intermediate buffer, and coherency was checked between blocks. If the block had to be interrupted, the results could be discarded and the block re-emulated interpretively; otherwise the buffers were written back out to real state. This is kind of a higher-level extension of what pipelines already do for handling cache misses and branch mispredictions. To accommodate emulation of multiple sources, memory accesses could be logged and stores queued, and the logs compared against the stores from the other sources to detect synchronization conflicts. Unfortunately, fairly large buffers (at least dozens if not hundreds of KB) would be necessary for this to work out to much of a win, and then synchronization errors would become a bigger problem.
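In software terms the mechanism looks something like this gated store buffer - a minimal sketch of the idea, not Crusoe's actual implementation, with all names hypothetical. Stores inside a block land in a side buffer that can either commit or be thrown away:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of the gated-store idea, not Crusoe's real mechanism.
// Stores inside a translated block land in a side buffer; at block end they
// either commit to real memory or get discarded for an interpretive retry.
class StoreBuffer {
    std::unordered_map<uint32_t, uint8_t> pending;  // addr -> buffered byte
    std::vector<uint8_t>& memory;                   // committed guest memory

public:
    explicit StoreBuffer(std::vector<uint8_t>& mem) : memory(mem) {}

    void store8(uint32_t addr, uint8_t v) { pending[addr] = v; }

    // Loads must see the block's own uncommitted stores first.
    uint8_t load8(uint32_t addr) const {
        auto it = pending.find(addr);
        return it != pending.end() ? it->second : memory[addr];
    }

    // Block finished cleanly and the coherency check passed: make it real.
    void commit() {
        for (const auto& kv : pending) memory[kv.first] = kv.second;
        pending.clear();
    }

    // Block was interrupted or a conflict was detected: discard everything
    // and let the interpreter redo the block against real state.
    void rollback() { pending.clear(); }
};
```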