Emulator Development Talk


DaveC said:
paulguy said:
I like to play with 4:3 (or otherwise original) aspect ratio, but hate the oddly shaped, irregular pixels you get from scaling to that from 8:7, so that's why I use stuff like a bilinear filter. The irregular pixels are REALLY noticeable and bothersome when watching dither patterns scroll around. I tend not to use stuff like 2xSaI/Eagle/HQx/NTSC filters, though.

--- Examples ---

Slight blur around the edges, but otherwise sharp pixels; the irregularity is not as noticeable (little overall change in brightness between pixels). This would be my preferred mode, but no emulator supports it, as I guess it's rather expensive (as described to me), since the image needs to be rendered to a texture first, then rendered to the screen.
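
Conceptually it's just two passes; here's a rough software sketch of the idea (untested, assuming 32-bit RGBA pixels, with the intermediate "texture" as a plain buffer):

Code:

#include <stdint.h>

/* Two-pass scaling: a nearest-neighbour integer upscale into an
 * intermediate buffer (the "texture"), then a bilinear resample to the
 * final size. Untested sketch; 32-bit RGBA pixels assumed. */

static uint32_t lerp_px(uint32_t a, uint32_t b, float t)
{
    uint32_t r = 0;
    for (int s = 0; s < 32; s += 8) {
        float ca = (float)((a >> s) & 0xFF);
        float cb = (float)((b >> s) & 0xFF);
        r |= (uint32_t)(ca + (cb - ca) * t + 0.5f) << s;
    }
    return r;
}

/* Pass 1: sharp integer upscale by factor k (e.g. 256x224 -> 1024x896). */
void nearest_upscale(const uint32_t *src, int sw, int sh,
                     uint32_t *dst, int k)
{
    for (int y = 0; y < sh * k; y++)
        for (int x = 0; x < sw * k; x++)
            dst[y * sw * k + x] = src[(y / k) * sw + (x / k)];
}

/* Pass 2: bilinear resample from (iw,ih) to the display size (ow,oh). */
void bilinear_scale(const uint32_t *src, int iw, int ih,
                    uint32_t *dst, int ow, int oh)
{
    for (int y = 0; y < oh; y++) {
        float fy = (float)y * (ih - 1) / (oh - 1);
        int y0 = (int)fy, y1 = (y0 + 1 < ih) ? y0 + 1 : y0;
        float ty = fy - (float)y0;
        for (int x = 0; x < ow; x++) {
            float fx = (float)x * (iw - 1) / (ow - 1);
            int x0 = (int)fx, x1 = (x0 + 1 < iw) ? x0 + 1 : x0;
            float tx = fx - (float)x0;
            uint32_t top = lerp_px(src[y0 * iw + x0], src[y0 * iw + x1], tx);
            uint32_t bot = lerp_px(src[y1 * iw + x0], src[y1 * iw + x1], tx);
            dst[y * ow + x] = lerp_px(top, bot, ty);
        }
    }
}

Pass 1 keeps the pixels sharp; pass 2 only blurs right at the block edges, where the sample points straddle two intermediate pixels. On a GPU the intermediate buffer would be an FBO texture sampled with GL_LINEAR.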

If the last image is possible, I'd certainly like to see it available in the emulators on the pandora. :p

Bad news for you: you ain't gonna get that last image, as that is like 5X native resolution or so. The Pandora is mostly 2X native. I guess you could do the same effect, but it would be much more noticeable and would slow things down a lot.

Well, yeah, obviously it'd be 640x480. I just wanted to use a high resolution to show it better. Is a render to texture then render to screen really that intensive? D:
 
DaveC said:
As far as "more pure" nothing will look like a CRT except a CRT the only thing kind of close is OLED. 1:1 looks "more pure" even if less correct as in slightly off aspect.

This is 100% personal preference (although I appreciate your use of quotation marks); it should be made an option in emulators.

DaveC said:
I think sometimes what looks better may not be the most correct. Usually scaling using even multiples like 1:1, 2:1, etc. looks better than 1.5:1 or 1.25:1, etc. Since you can't light half of an LCD pixel, you have to unevenly double some pixels and leave others at 1:1, and that tends to look hideous. In these cases I prefer the "less pure" 1:1, 2:1 H+V, etc. The problem with filters is they mostly tend to indiscriminately blur adjacent pixels together, even ones above and below. This leads to loss of contrast and a fuzzy look. It doesn't look like a CRT; it looks like a busted LCD. TVs never blurred pixels vertically; they were pretty well separated by scanlines. Horizontally they did a bit, although RGB arcade monitors were pretty sharp.

The thing is, bilinear filters are an approximation for removing high frequency components, which are removed not just by fuzzy CRT pixels (they're definitely a bit fuzzier, especially on older TVs) but by band limiting in the output signal. I agree with you on scanlines, though. This is why the author of Mednafen likes to run things at really high horizontal resolution and integer-doubled vertical resolution with a scanline filter. An NTSC filter at really high resolution looks pretty authentic too, but I don't think this is the kind of authenticity people prefer.

i.e., http://img6.imageshack.us/img6/6384/ffvihoriz7.png
vs http://img127.imageshack.us/img127/6417/ffviscreen.png

I think the NTSC filter definitely looks better from a purely aesthetic point of view in this case. I haven't played with raw integer scaling on a desktop for as long as I can remember; it's just too blocky for me. It would be fine on the Pandora, though.
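
The scanline half of that recipe is nearly free, by the way; roughly this per frame (sketch, 32-bit pixels assumed):

Code:

#include <stdint.h>

/* Integer-double vertically, darkening every second output line to fake
 * the scanline gap. Sketch only; 32-bit RGBA pixels assumed. */
void double_with_scanlines(const uint32_t *src, int w, int h,
                           uint32_t *dst /* w x 2h */)
{
    for (int y = 0; y < h; y++) {
        const uint32_t *in = src + y * w;
        uint32_t *full = dst + (2 * y) * w;
        uint32_t *dark = dst + (2 * y + 1) * w;
        for (int x = 0; x < w; x++) {
            full[x] = in[x];
            /* halve each channel in one go: shift, then mask off the bits
             * that spilled in from the neighbouring channel */
            dark[x] = (in[x] >> 1) & 0x7F7F7F7F;
        }
    }
}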

DaveC said:
I think the reason the scaling looks nice in emus like Temper is because it only applies the filter horizontally and not vertically. This is as close to a CRT as you can probably get, as it gives a slight blur horizontally but keeps vertical lines sharp. I did actually use it in Temper on the Wiz for a while because the screen size made a bigger image more desirable, and the filter looked pretty good. I do admit, though, that I fell back to 1:1 even on the Wiz, as there is some rippling/distortion that in the end got to me a bit. I bet if the same technique was used on the Pandora - filtering horizontally only to get the correct aspect, but keeping vertical at 2:1 without filtering - it would look pretty good.

Yes, I think an 8->10 filter can look slightly better than 4->5, and with NEON it'll be mostly bandwidth limited and should take 1-2ms, which isn't very bad (this is based off of 320x240->640x480 scaling/color space conversion tests I was doing for zodttd). If done per-scanline it'll be even faster.
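
In plain C the 4->5 case looks about like this (untested sketch, 32-bit pixels; the NEON version would do the same weighting on whole vectors of pixels at a time):

Code:

#include <stdint.h>

/* per-channel (a*(den-num) + b*num) / den */
static uint32_t mix_px(uint32_t a, uint32_t b, int num, int den)
{
    uint32_t out = 0;
    for (int s = 0; s < 32; s += 8) {
        uint32_t ca = (a >> s) & 0xFF, cb = (b >> s) & 0xFF;
        out |= ((ca * (uint32_t)(den - num) + cb * (uint32_t)num)
                / (uint32_t)den) << s;
    }
    return out;
}

/* One scanline: every 4 source pixels become 5 output pixels with linear
 * weights (e.g. 256 -> 320). sw should be a multiple of 4. */
void scale_4to5_line(const uint32_t *src, uint32_t *dst, int sw)
{
    int dw = sw * 5 / 4;
    for (int x = 0; x < dw; x++) {
        int pos  = x * 4;      /* source position, in fifths of a pixel */
        int i    = pos / 5;
        int frac = pos % 5;
        uint32_t a = src[i];
        uint32_t b = (i + 1 < sw) ? src[i + 1] : a;
        dst[x] = frac ? mix_px(a, b, frac, 5) : a;
    }
}

Since each output line depends on exactly one input line, it drops straight into a per-scanline renderer.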
 
paulguy said:
Yeah that's pretty much what I'd want. Although, does that use a shader? I'm not sure if the shaders work in Linux.
Screenshot was taken on Linux, so yeah, shaders work in Linux ;)
 
I like this one:

phosphor3xv0001.png

So much that I implemented it in my emulator. I've had a lot of positive feedback about it, too.
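
For the curious, the general flavor of these filters is something like this at 3x. To be clear, this is NOT the actual filter from the screenshots, just a sketch of the principle (0x00RRGGBB pixels assumed): every source pixel becomes a 3x3 cell whose columns emphasise R, G and B respectively, with a darkened bottom row for the scanline gap.

Code:

#include <stdint.h>

/* Phosphor-style 3x sketch: R/G/B-weighted columns plus a scanline row.
 * Illustrative only; pixel format assumed to be 0x00RRGGBB. */
void phosphor3x(const uint32_t *src, int w, int h,
                uint32_t *dst /* 3w x 3h */)
{
    /* per-column channel weights: column 0 favours red, 1 green, 2 blue */
    const uint32_t mask[3] = { 0x00FF4040, 0x0040FF40, 0x004040FF };
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            uint32_t p = src[y * w + x];
            for (int sy = 0; sy < 3; sy++)
                for (int sx = 0; sx < 3; sx++) {
                    /* attenuate the two "wrong" channels in this column */
                    uint32_t c = 0;
                    for (int s = 0; s < 24; s += 8) {
                        uint32_t ch = (p >> s) & 0xFF;
                        uint32_t m  = (mask[sx] >> s) & 0xFF;
                        c |= (ch * m / 255) << s;
                    }
                    if (sy == 2)                      /* scanline row */
                        c = (c >> 1) & 0x7F7F7F7F;
                    dst[(y * 3 + sy) * (w * 3) + (x * 3 + sx)] = c;
                }
        }
}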


Edit: Here's a few more images of it at various scaling levels:

phosphornxgng.png


D.
 

I like it. Would there be a way to filter EVERYTHING that comes on the Panda screen like that?
 
wea0 said:
(Before anyone asks: bsnes will never run on Pandora)
bsnes as found today on PCs - surely not. but a bsnes re-factored for the pandora - who knows? i don't see it as so decided. if a 1.6GHz p5 core w/ 1MB of L2, 533MHz FSB can run it well, a 600MHz A8 w/ 256KB L2 + 430MHz DSP w/ 112KB cache + a little help from the 530 may not fall that far behind. /optimist
 
darkblu said:
if a 1.6GHz p5 core w/ 1MB of L2, 533MHz FSB can run it well

Uh, it can't. Not even close. Where did you get this idea? Minimum requirements are more around the 1.6GHz Core 2 end (not necessarily dual core), which is about a 2x performance improvement.

Plus, it'd take an awful lot of "refactoring" to go from something that runs on an Atom to something that runs on an OMAP3; bsnes is a highly synchronized emulation that wouldn't benefit a lot from the DSP. The SGX would be even more irrelevant (except for non-emulation-related effects). As for the CPU, take note that clock for clock, Atom performs better running Ari64's Mupen64plus than OMAP3 does - the memory latency is poor and the CPU throughput itself is comparable. Now you're talking about a clock advantage of 2.67x. Why would you think that any emulator has this kind of easy slack w/o being rewritten from the ground up? I doubt you'd ever achieve the same kind of accuracy, and then what's the point?

I don't mean to sound terribly offensive, but I'm a little surprised someone as knowledgeable and experienced as yourself would suggest such an impossible thing.
 
I imagine snes9x wouldn't be too unreasonable. If it doesn't run now, someone could probably chop it down to work - more likely than with bsnes, at least.
 
Exophase said:
Uh, it can't. Not even close. Where did you get this idea? Minimum requirements are more around the 1.6GHz Core 2 end (not dual core necessarily), an about 2x performance improvement.
i got the idea from the following thread on the bsnes forums, where i believe you originally took part too. apparently the 330 atom is not the ideal cpu (does not meet the min system requirements, etc), but it was not reported as unusable. maybe i should not have used the phrase 'run well', my bad.

Plus, it'd take an awful lot of "refactoring" to go from something that runs on a Atom to something that runs on an OMAP3; bsnes is a highly synchronized emulation that wouldn't benefit a lot from the DSP.
well, the amount of refactoring i had in mind was indeed arbitrary, but isn't the sound system of the SNES highly autonomous? perhaps suitable for dynarec to the C64x too?

The SGX would be even more irrelevant (except for non-emulation related effects).
compositing planes/larger sprites/etc? the per-scanline tweaks of the snes' PPU could be replicated through texture lookups and/or some shader work, too.

As for the CPU, take note that clock for clock Atom performs better running Ari64's Mupen64plus than OMAP3 does - the memory latency is poor and the CPU throughput itself is comparable. Now you're talking about a clock advantage of 2.67x. Why would you think that any emulator has this kind of easy slack w/o being rewritten from the ground up? I doubt you'd ever achieve the same kind of accuracy and then, what's the point?
frankly, i was hypothesizing from the POV of taking a well-portable, cpu-centric desktop emu to the pandora by taking advantage of some good parallelism. also, correct me if my assumption is wrong, but shouldn't the bulk of bsnes' resources go to the emulation of the sound system? if that could be nicely offloaded to the C64x then the remainder (i.e. cpu, ppu, dma) could be left to the A8 + sgx. the worse memory system performance of the cortex would be a serious detrimental factor, though, i admit.

I don't mean to sound terribly offensive but I'm a little surprised someone as knowledgeable and experienced as yourself would suggest such an impossible thing..
no offense taken. i think such a task would be an interesting academic project ; )
 
darkblu, I think you might not be very familiar with bsnes. The "CPU-centric portable SNES emulator" need was met several years prior with SNES9x. bsnes was designed to be a "reference" emulator that's as accurate as possible while also being very clearly written - although some optimizations were made, it was never written with high performance in mind, and never at the cost of any facet of its accuracy. We're talking an extreme level of timing precision, down to master clock cycles between several chips. Few emulators (much less SNES emulators) are written with this level of accuracy in mind - largely because it's not needed to play most (and when you hit a certain point, all) games; it serves more as documentation, a more homebrew-safe platform, and a way to satisfy academic interest, perfectionism, and peace of mind. It's not really what the goal should be for a platform that isn't at the performance level established as needed for this.

What you're describing (to whatever extent it'd even work) would definitely be a compromise to accuracy, because you just can't maintain the extreme level bsnes operates at w/o maintaining the ability to switch between which component you're emulating at a really fine granularity. Especially if you're now throwing recompilers into the mix. This doesn't really make sense because computationally getting things most of the way there doesn't require much in the way of resources (SNES emulators on DS pull this off okay with a 33MHz ARM7), so suggesting things like recompilers for SPC700 code is silly. Audio is not really where the big speed hit is for bsnes; that's in IRQ tracking (lots of checks constantly) and PPU emulation. The intense "bit-perfect" audio emulation I was describing earlier is for the fixed-function DSP emulation. It can be done much faster with a negligible accuracy cost - the code used (blargg's core) even has this option. That I said this bit-perfect core would perform poorly on Pandora was purely academic.

There's a good reason that emulators haven't been using GPU shaders for 2D console emulation - it just doesn't fit. Read some of my posts in this thread: http://vba-m.com/for...ress-t-455.html Although it's about the GBA, a lot of similar problems apply. Even the basic per-pixel operations involve bit unpacking for looking up a tile map entry, generating a texture address inside a tile with conditionally flipped and mirrored sub-indexes, and fetching a palette entry from a texture. The only way to accomplish per-tile priority would be to draw 8-pixel lines instead of screen-width lines, with varying depth or stencil. That's at least (256/8) * 224 * 4 * 60 = 1.7m polygons per pass. You'd need two passes for main-screen and subscreen rendering (no MRT on OGL ES 2, and I doubt OMAP3's SGX is good at it) and a third pass to combine them. To handle sprite ordering correctly wrt BG rendering (I call it the "sprite mangling" effect) you'd need to render sprites and BG in separate passes too, so it's now up to 5. In order to handle mid-frame effects you'll need a queuing system, and you'll need to carefully track accesses to VRAM, palette, and OAM to keep them coherent with the SGX's textures.
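
To make that concrete, here's roughly the per-pixel work for a single simplified 4bpp BG pixel in plain C (sketch only: no scrolling, mosaic or offset-per-tile, and register/memory layout details glossed over). Every step below would have to be rebuilt out of float arithmetic and dependent texture fetches in an ES2 shader:

Code:

#include <stdint.h>

/* One SNES-style 4bpp background pixel, simplified. */
uint8_t bg_pixel(const uint16_t *tilemap, const uint8_t *chr, int x, int y)
{
    /* 1. tile map entry for this pixel's 8x8 cell (32x32 map assumed) */
    uint16_t entry = tilemap[(y >> 3) * 32 + (x >> 3)];
    int chr_num = entry & 0x3FF;
    int pal     = (entry >> 10) & 7;
    int hflip   = (entry >> 14) & 1;
    int vflip   = (entry >> 15) & 1;

    /* 2. sub-tile coordinates, conditionally mirrored */
    int tx = hflip ? 7 - (x & 7) : (x & 7);
    int ty = vflip ? 7 - (y & 7) : (y & 7);

    /* 3. gather the colour index from four bitplanes (4bpp: 32 bytes/tile,
     *    planes 0/1 interleaved in the first 16 bytes, 2/3 in the next) */
    const uint8_t *row = chr + chr_num * 32 + ty * 2;
    int bit = 7 - tx;
    int c = ((row[0]  >> bit) & 1)
          | (((row[1]  >> bit) & 1) << 1)
          | (((row[16] >> bit) & 1) << 2)
          | (((row[17] >> bit) & 1) << 3);

    /* 4. palette index (colour 0 = transparent, left to the caller) */
    return c ? (uint8_t)(pal * 16 + c) : 0;
}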

It might be possible to construct all this logistically, but I doubt it'd run acceptably on the OMAP3's SGX530, with its one TMU at 110MHz and limited bandwidth (and that's not even including a final pass to scale the result).

I think that at the end of the day, IF you got anything working at all, it'd be much less accurate (and compatible) than bsnes and prone to performance issues under non-ideal circumstances. It would also be a near-total rewrite of bsnes that would barely resemble it, and it wouldn't be as portable (relying on OGL ES 2+, or worse, C64x+). Meanwhile, the usual ARM-optimized SNES9x ports would probably be more compatible (outside of bugs) and would be less battery hungry by virtue of having more efficient ARM code and not needlessly taxing the SGX and DSP.

I'm just not seeing the point.
 
Exophase said:
The thing is, bilinear filters are an approximation for removing high frequency components, which are removed not just by fuzzy CRT pixels (they're definitely a bit fuzzier, especially on older TVs) but by band limiting in the output signal. I agree with you on scanlines, though. This is why the author of Mednafen likes to run things at really high horizontal resolution and integer-doubled vertical resolution with a scanline filter. An NTSC filter at really high resolution looks pretty authentic too, but I don't think this is the kind of authenticity people prefer.

i.e., http://img6.imageshack.us/img6/6384/ffvihoriz7.png
vs http://img127.imageshack.us/img127/6417/ffviscreen.png

I think the NTSC filter definitely looks better from a purely aesthetic point of view in this case. I haven't played with raw integer scaling on a desktop for as long as I can remember; it's just too blocky for me. It would be fine on the Pandora, though.

The Phosphor filters do look nice. Much nicer than that bilinear fuzz filter that many emus on the PSP use. This filter looks very CRT-like to me, with an aperture-grille look. It actually looks more like an arcade RGB monitor, but that would be great for anything. Many consoles like the SNES could output RGB (I had mine connected to an RGB monitor), and in many countries SCART RGB was used and looked like that.

Since Temper will be so easy to run on the Pandora, due to its much faster speed than a Wiz or GP2X, could an option like this be done? You could integer-scale 2X vertically, but maybe use this to get the correct aspect horizontally?

It would be nice if some of these modes could be done and then passed around for all emu coders to implement.
 
Exophase said:
There's a good reason that emulators haven't been using GPU shaders for 2D console emulation - it just doesn't fit. Read some of my posts in this thread: http://vba-m.com/for...ress-t-455.html Although it's about the GBA, a lot of similar problems apply. Even the basic per-pixel operations involve bit unpacking for looking up a tile map entry, generating a texture address inside a tile with conditionally flipped and mirrored sub-indexes, and fetching a palette entry from a texture. The only way to accomplish per-tile priority would be to draw 8-pixel lines instead of screen-width lines, with varying depth or stencil. That's at least (256/8) * 224 * 4 * 60 = 1.7m polygons per pass. You'd need two passes for main-screen and subscreen rendering (no MRT on OGL ES 2, and I doubt OMAP3's SGX is good at it) and a third pass to combine them. To handle sprite ordering correctly wrt BG rendering (I call it the "sprite mangling" effect) you'd need to render sprites and BG in separate passes too, so it's now up to 5. In order to handle mid-frame effects you'll need a queuing system, and you'll need to carefully track accesses to VRAM, palette, and OAM to keep them coherent with the SGX's textures.
Thank you for the reference to that thread - you do raise interesting issues there. But they hold only when the CPU & PPU emulations are in sync. My original thought was of the GPU doing frame-wide work (versus scanline work), by being out of sync with the CPU emulation, i.e. at a frame of latency compared to the CPU emulation - sort of like deferred PPU emulation. My reason for that (again, corrections to my view are welcome) is that while the CPU can interfere per-scanline in the affairs of the PPU, the normal CPU workpath does not try to read back same-frame output from the PPU. Or in other words, the CPU & DMAs are never consumers of the work of the PPU (in the same frame). And feedback is the fundamental problem of graphics pipelines - if that is avoidable everything else is much simpler.
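
roughly the kind of thing i have in mind on the CPU side (purely illustrative sketch - all names and sizes made up):

Code:

#include <stdint.h>

/* deferred-PPU queue: the CPU side just logs timestamped register writes;
 * a frame later the renderer replays them while walking the scanlines. */
typedef struct {
    uint32_t line;   /* scanline on which the write landed */
    uint8_t  reg;    /* PPU register number                */
    uint8_t  value;
} ppu_write;

#define QUEUE_MAX 65536
static ppu_write queue[QUEUE_MAX];
static int q_len;

/* CPU side: log the write with a timestamp, render nothing yet */
void ppu_log_write(uint32_t line, uint8_t reg, uint8_t value)
{
    if (q_len < QUEUE_MAX) {
        ppu_write w = { line, reg, value };
        queue[q_len++] = w;
    }
}

/* renderer side, one frame later: replay writes while walking scanlines */
void ppu_replay_frame(void (*apply)(uint8_t reg, uint8_t value),
                      void (*render_line)(uint32_t line))
{
    int i = 0;
    for (uint32_t line = 0; line < 224; line++) {
        while (i < q_len && queue[i].line <= line) {
            apply(queue[i].reg, queue[i].value);
            i++;
        }
        render_line(line);
    }
    q_len = 0;
}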

Returning to in-sync PPU emulation on the GPU, what are the operations you'd consider most detrimental to the GPU freely handling one scanline at a time? The bit-wise ops may not necessarily be much of an issue despite the fact that ES2 GLSL still lacks them; I've been doing bit unpacking in shaders for generations now - starting from SM2/ARB_program, the proverbial GPU ALU power has been enough to allow a reasonable amount of bit-wise op emulation (in floats, at that!) at the cost of excessive arithmetic.
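
for reference, the float trick amounts to this (a C stand-in for the shader arithmetic; untested sketch):

Code:

#include <math.h>

/* extract a `width`-bit field starting at bit `lo` from a value held in a
 * float, using only the ops available to SM2-era shaders: divide, floor,
 * multiply and subtract. */
float extract_bits(float v, int lo, int width)
{
    float shifted = floorf(v / powf(2.0f, (float)lo));   /* v >> lo */
    float span    = powf(2.0f, (float)width);
    return shifted - span * floorf(shifted / span);      /* & mask  */
}

e.g. extract_bits(entry, 10, 3) would pull a 3-bit palette field out of a map entry.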

Actually, if you care to further break down your view on the passes needed for emulation of the PPU, we may try to figure out ways to collapse them. For academic sake : )

Exophase said:
Especially if you're now throwing recompilers into the mix. This doesn't really make sense because computationally getting things most of the way there doesn't require much in the way of resources (SNES emulators on DS pull this off okay with a 33MHz ARM7), so suggesting things like recompilers for SPC700 code is silly.
For the record, I meant the SPC + DSP block, not the SPC alone. And dynarec was the wrong term - more like JIT for the C64x. Anyway, audio is much further from my sphere of interest than the PPU is, so I'll just shut up on that for the time being.
 
darkblu said:
Thank you for the reference to that thread - you do raise interesting issues there. But they hold only when the CPU & PPU emulations are in sync. My original thought was of the GPU doing frame-wide work (versus scanline work), by being out of sync with the CPU emulation, i.e. at a frame of latency compared to the CPU emulation - sort of like deferred PPU emulation. My reason for that (again, corrections to my view are welcome) is that while the CPU can interfere per-scanline in the affairs of the PPU, the normal CPU workpath does not try to read back same-frame output from the PPU. Or in other words, the CPU & DMAs are never consumers of the work of the PPU (in the same frame). And feedback is the fundamental problem of graphics pipelines - if that is avoidable everything else is much simpler.

I am currently using such a deferred per-frame renderer, and it works. However, for SNES there is at least one per-scanline feedback path, via the sprite overrun detection register. bsnes has a heavy emphasis on making these corner cases work (even though no game even checks this register) w/o relying on difficult special paths. These approaches are just very un-bsnes; it's hard to see how this could be considered "bsnes refactored."

darkblu said:
Returning to in-sync PPU emulation on the GPU, what are the operations you'd consider most detrimental to the GPU freely handling one scanline at a time? The bit-wise ops may not necessarily be much of an issue despite the fact that ES2 GLSL still lacks them; I've been doing bit unpacking in shaders for generations now - starting from SM2/ARB_program, the proverbial GPU ALU power has been enough to allow a reasonable amount of bit-wise op emulation (in floats, at that!) at the cost of excessive arithmetic.

But the amount of ALU available per pixel on SGX is very low, and in this case being deferred won't win you as much as you'd like.

I'm glad you responded today, because I intended to amend my post with more complications. Rendering on SNES - or most any 2D console, but especially one with a lot of layers - is very heavy in per-pixel transparency. You can roll fully opaque, fully transparent, and partially transparent tiles into the tile and map caching analysis, but you will still inevitably see big regions of alternating transparency (sprite edges, backgrounds like the chain gates in Super Mario World, etc). Rendering these regions with alpha test will cost you severely on SGX. Using alpha blend instead means you lose proper depth/stencil operation for layer priority sorting and have to sort manually - this means breaking up layers by their per-tile priority. You still lose the benefits of deferred shading and gain the overhead of alpha blending (part of which will eat into your precious per-pixel ALU budget).
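
To spell out what that manual sort looks like: bin each tile quad into a draw list keyed by layer and priority, and submit the lists back-to-front with blending on. This is an illustrative sketch only - the real SNES ordering is mode-dependent and interleaves sprite priorities, which I'm ignoring here; all names and sizes are made up:

Code:

#include <stdint.h>

typedef struct { int16_t x, y; uint16_t map_entry; } tile_quad;

#define MAX_TILES 2048
static tile_quad lists[4][2][MAX_TILES];  /* [layer][priority] */
static int       counts[4][2];

void bin_tile(int layer, int x, int y, uint16_t entry)
{
    int prio = (entry >> 13) & 1;          /* SNES BG map priority bit */
    if (counts[layer][prio] < MAX_TILES) {
        tile_quad q = { (int16_t)x, (int16_t)y, entry };
        lists[layer][prio][counts[layer][prio]++] = q;
    }
}

void submit_frame(void (*draw_list)(const tile_quad *q, int n))
{
    /* back-to-front: low-priority tiles of each layer, then high */
    for (int prio = 0; prio < 2; prio++)
        for (int layer = 3; layer >= 0; layer--) {
            draw_list(lists[layer][prio], counts[layer][prio]);
            counts[layer][prio] = 0;
        }
}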

darkblu said:
For the record, I meant the SPC + DSP block, not the SPC alone. And dynarec was the wrong term - more like JIT for the C64x. Anyway, audio is much further from my sphere of interest than the PPU is, so I'll just shut up on that for the time being.

"JIT" and "dynarec" are more or less synonymous. Recompiling the fixed operation DSP doesn't really make sense, its configurable variability is probably low enough to be switched out.
 
darkblu said:
Exophase said:
As for the CPU, take note that clock for clock Atom performs better running Ari64's Mupen64plus than OMAP3 does - the memory latency is poor and the CPU throughput itself is comparable. Now you're talking about a clock advantage of 2.67x. Why would you think that any emulator has this kind of easy slack w/o being rewritten from the ground up? I doubt you'd ever achieve the same kind of accuracy and then, what's the point?
frankly, i was hypothesizing from the POV of taking a well-portable, cpu-centric desktop emu to the pandora by taking advantage of some good parallelism. also, correct me if my assumption is wrong, but shouldn't the bulk of bsnes' resources go to the emulation of the sound system? if that could be nicely offloaded to the C64x then the remainder (i.e. cpu, ppu, dma) could be left to the A8 + sgx. the worse memory system performance of the cortex would be a serious detrimental factor, though, i admit.
N64 emulation is largely bound by memory latency, and needs a 2-4MB L2 to run efficiently. The 256K cache on the OMAP3 is a bit small for this application. Atom does somewhat better with its 512K cache.

SNES is an entirely different beast. With only 128K main memory, plus 64K audio RAM and 64K video RAM, its memory access pattern is a lot more constrained.

I'm not sure that doing a dynarec for the SPC700 is worthwhile, since it would have to deal with a lot of self-modifying code, and the SPC700 is only 1MHz. The SPC700 runs independently most of the time, but when it passes data back and forth to the main CPU, you need to sync every few cycles. This makes it pretty difficult to do on a separate CPU, since you need to rapidly switch from 65816 emulation to SPC700 and back as soon as some data is written to the interface between the processors.

Syncing with the PPU isn't quite as bad, since the CPU can't access VRAM while graphics are being displayed, but you still need to sync with the PPU pretty frequently. If HDMA is active, then once per scanline - approximately every 50-100 instructions, depending on the cycle times and clock rate. You could sync less frequently as long as the CPU doesn't read any PPU registers and you know there aren't any IRQs pending.
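
The main loop ends up shaped something like this (illustrative sketch only; the callbacks stand in for the real cores, and the constants are approximate NTSC master-clock counts):

Code:

#include <stdint.h>

typedef struct {
    uint64_t now;        /* master clock                      */
    uint64_t next_hdma;  /* start of next scanline's HDMA     */
    uint64_t next_irq;   /* next scheduled IRQ, or UINT64_MAX */
} sched;

static uint64_t min64(uint64_t a, uint64_t b) { return a < b ? a : b; }

void run_frame(sched *s,
               uint64_t (*run_65816_until)(uint64_t target),
               void (*do_hdma)(void), void (*do_irq)(void))
{
    /* ~1364 master clocks per scanline, 262 lines per NTSC frame */
    const uint64_t frame_end = s->now + 1364 * 262;
    while (s->now < frame_end) {
        uint64_t target = min64(frame_end,
                                min64(s->next_hdma, s->next_irq));
        /* the CPU core may stop early if it touches a PPU register */
        s->now = run_65816_until(target);
        if (s->now >= s->next_hdma) { do_hdma(); s->next_hdma += 1364; }
        if (s->now >= s->next_irq)  { do_irq();  s->next_irq = UINT64_MAX; }
    }
}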

It would be possible to do a dynarec for the 65816, but I've never seen a SNES emulator that does. As with most von Neumann architectures of the era, you would have to deal with self-modifying code, but it may be feasible to only recompile the ROM code and interpret any code in RAM one instruction at a time. I guess the same could be done for the SA-1, but I don't know enough about typical SA-1 code to tell you whether this is worthwhile. If there are a lot of IRQs, it could get messy.
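
The dispatch for that hybrid scheme is simple enough. A sketch (all names illustrative, with the actual translation and interpretation stubbed out):

Code:

#include <stdint.h>

typedef void (*block_fn)(uint32_t *pc);

static void interpret_one(uint32_t *pc)
{
    /* stand-in: a real interpreter decodes one 65816 instruction here */
    (*pc)++;
}

static void stub_block(uint32_t *pc)
{
    /* stand-in for emitted ARM code; real blocks run to the next branch */
    interpret_one(pc);
}

static block_fn compile_block(uint32_t pc)
{
    (void)pc;
    return stub_block;   /* real version translates and caches code */
}

#define CACHE_BITS 12
static block_fn block_cache[1 << CACHE_BITS];
static uint32_t block_pc[1 << CACHE_BITS];

static int pc_in_rom(uint32_t pc)
{
    return (pc & 0xFFFF) >= 0x8000;   /* LoROM-style mapping, simplified */
}

void cpu_step(uint32_t *pc)
{
    if (pc_in_rom(*pc)) {
        uint32_t slot = *pc & ((1u << CACHE_BITS) - 1);
        if (block_pc[slot] != *pc || !block_cache[slot]) {
            block_cache[slot] = compile_block(*pc);  /* miss: translate */
            block_pc[slot] = *pc;
        }
        block_cache[slot](pc);    /* ROM is immutable, so the cache is safe */
    } else {
        interpret_one(pc);        /* RAM code may self-modify */
    }
}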
 
Ari64 said:
SNES is an entirely different beast. With only 128K main memory, plus 64K audio RAM and 64K video RAM, its memory access pattern is a lot more constrained.

You're ignoring the ROM, which is up to 4MB directly addressable. That isn't to say SNES games are routinely churning data out of it, but it's still a big pool for graphics.

SNES9x also likely chews through a lot of cache by having multiple screen-sized rendering buffers - this is where a more strictly scanline-based renderer has a performance advantage (which, ironically, bsnes has). I assume the PSP and GP2X likely suffered here because of their small caches. 256KB of L2 should significantly offset this, but it might still be limiting.
 