Why Did They Not Build In An MMU In Both Processors?


Khayman

So the GP2X has two processors... but only one has an MMU (what is an MMU?), so Linux can only see the one with the MMU...
How can we get to use the other processor? And why did they choose (is it spelled like that?) to make it like that???
 
MMU= Memory management unit.

Self-explanatory on what it does. Only the first processor has this, so we have to use the first processor to "talk" to the second one, and the same must be done to get the processed information back (SLOW to transfer like this, but it can be faster if there is less "talking").
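Very roughly, that "talking" from the 920 side looks like the sketch below. The shared address and the little mailbox layout are invented purely for illustration; the real memory map is in the MMSP2 datasheet.

Code:
/* Sketch only: hand a buffer to the 940 through shared memory.
 * UPPER_MEM_BASE and the mailbox layout are made up for illustration;
 * check the MMSP2 datasheet / existing 940 code for the real addresses. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define UPPER_MEM_BASE 0x03000000u   /* hypothetical spot in the upper 32MB */
#define SHARED_SIZE    0x00100000u   /* 1MB window both CPUs can see */

struct mailbox {                     /* layout agreed on by both sides (made up) */
    volatile uint32_t command;       /* 920 writes a job number here */
    volatile uint32_t done;          /* 940 sets this when it has finished */
    uint8_t data[4096];              /* payload the 940 should work on */
};

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    struct mailbox *mb = mmap(NULL, SHARED_SIZE, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, UPPER_MEM_BASE);
    if (mb == MAP_FAILED) { perror("mmap"); return 1; }

    mb->done = 0;
    mb->data[0] = 42;                /* whatever the 940 is supposed to chew on */
    mb->command = 1;                 /* tell the 940 "job 1 is ready" */

    while (!mb->done)                /* busy-wait; a real program would do useful work */
        ;

    printf("940 finished the job\n");
    munmap(mb, SHARED_SIZE);
    close(fd);
    return 0;
}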

MagicEyes made the CPU/board so that MPEG4 decoding would be accelerated. This machine is supposed to be a Pocket Media Player, and nothing more...

Next time before asking what something is, why don't you look it up?
 
I think there wasn't enough room on the chip for it. The 2nd CPU (940) is also missing something that takes up a lot more room - it only has 8KB of cache compared to 32KB. So I'm guessing they couldn't fit two whole 920's with full cache and everything, and I doubt they make a 940 with an MMU, so they just combined them the best way feasible.

Plus, if you had a 940 with an MMU, it could cause slowdowns if the OS decided to run a program on it when the 920 was available, simply because it has less cache. This could be overcome with some OS scheduler improvements, so I think the first reason is why it's not there.

In some ways it's nice that the 940 isn't controlled by Linux - it gives programmers total freedom over how to use it (interrupt handling, etc.), except that it can't access the 920's memory easily.

Edit: To answer your other question, MMU = memory management unit - it's what allows the operating system to present programs with a different address space than the machine's real memory layout. This doesn't sound too important by itself, but it can also catch invalid accesses and let the operating system handle them. This is how swap memory works, for one thing. The OS sets up the MMU to provide whatever memory the program has that's actually in RAM at the time, and when the program needs something that isn't, the OS catches the fault, loads that data, and the program continues. There are other tricks that can be done with it to improve how memory is used, too (programs can share memory when they load the same library, etc.).
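If it helps, here's the same page-fault idea as a little toy in C, done entirely in user space: reserve a page with no access, catch the fault, "load" the data, and let the program carry on. It only illustrates the mechanism; it's not how the kernel actually implements swap.

Code:
/* Toy demand paging: the "OS" here is a SIGSEGV handler that fills in
 * a page the first time the program touches it. Illustration only. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static char *page;
static size_t page_size;

static void fault_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    char *addr = (char *)info->si_addr;
    if (addr >= page && addr < page + page_size) {
        /* "load" the missing data, then make the page accessible */
        mprotect(page, page_size, PROT_READ | PROT_WRITE);
        strcpy(page, "hello from the page fault handler");
    } else {
        _exit(1);                    /* a genuine bad access */
    }
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);
    page = mmap(NULL, page_size, PROT_NONE,          /* reserved, not accessible yet */
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    printf("%s\n", page);            /* first touch faults, handler fills the page in */
    return 0;
}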
 
BradN posted on Jul 24 2006 at 10:20 PM said:
...and I doubt they make a 940 with an MMU...

Yep, because then it'd be a 920, as the second digit specifies the processor's memory management capability. ;)

I think this is an interesting question, as having two 920s would be more uniform. Maybe the price difference is significant and the only reason to have a 920 in the first place is to support Linux (as I suspect a multi-user and multi-process OS would need one). Hmmm...
 
Shikaku posted on Jul 24 2006 at 10:14 PM said:
MMU= Memory management unit.
Next time before asking what something is, why don't you look it up?
The main question was not what an MMU is... the main question was why they built it like they did!
That's why I didn't search for the answer about the MMU!
Thanks "BradN" for a very informative answer that even I can understand :)
 
And that doesn't mean we can't still use the chip either; it's just that Linux won't help us with that, we've gotta do that ourselves in code.
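For the curious, the usual recipe (very roughly) is: hold the 940 in reset, copy a raw binary into memory it can see, then release it. Something like the sketch below -- the control-register offset and reset bit are placeholders, not the real MMSP2 values, so check the datasheet or an existing 940 loader before trusting any of it.

Code:
/* Sketch: push a bare-metal blob to the 940 and let it run.
 * REG_940_CTRL, the reset bit and CODE_940_OFFSET are placeholders --
 * look them up in the MMSP2 datasheet / existing 940 loaders. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REG_BASE        0xC0000000u   /* MMSP2 peripheral register area (assumed) */
#define REG_940_CTRL    0x0000u       /* placeholder offset of the 940 control reg */
#define CODE_940_OFFSET 0x03000000u   /* placeholder: where the 940's code lives */

int load_940(const uint8_t *blob, size_t len)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) return -1;

    volatile uint16_t *regs = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, REG_BASE);
    uint8_t *code = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, CODE_940_OFFSET);
    if (regs == MAP_FAILED || code == MAP_FAILED) { close(fd); return -1; }

    regs[REG_940_CTRL / 2] |= 0x0002;   /* placeholder: hold the 940 in reset */
    memcpy(code, blob, len);            /* the 940 sees this as its address 0 */
    regs[REG_940_CTRL / 2] &= ~0x0002;  /* placeholder: release reset, 940 runs */

    munmap(code, len);
    munmap((void *)regs, 0x10000);
    close(fd);
    return 0;
}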
 
Craig should have tried to convince them to make it into a beefier game player than a media player; there are loads of media players out there, a lot better than the 2X.. so why would anyone buy the 2X for music/video when they knew the whole GP32 community was expecting to be able to play games on it and use it as a game player.. let's hope they don't make the same mistake with the GP3X
 
Paradox posted on Jul 25 2006 at 06:35 AM said:
Craig should have tried to convince them to make it into a beefier game player than a media player; there are loads of media players out there, a lot better than the 2X.. so why would anyone buy the 2X for music/video when they knew the whole GP32 community was expecting to be able to play games on it and use it as a game player.. let's hope they don't make the same mistake with the GP3X
I don't see a better option; look in my sig for a link to the MMSP2.

The only other option would be the VRender3D, and it sucks worse than the MMSP2 because it has far fewer features, to make room for the 3D and 2D cores.
 
The GP2X wasn't made as a media player primarily, as far as I can tell-- the MMSP2 probably was, but MagicEyes/MES lists a lot of other potential uses for it. I mean, it's a little ARM9 computer that can do about anything; it just happens to have MPEG4 decoding support. A mid-90s PC that came with an MPEG2 decoder card to play DVDs wasn't disqualified from being a PC and turned into a DVD player with nifty extras. :)

There are fancier SoCs available, just not from MagicEyes. If you wanted more graphical horsepower for games, the VRender3D would be a better choice since the GPU can also be used for 2D acceleration through OpenGL. But you lose the processing power from the 940T which is being exploited in recent programs. It's a toss-up there.
 
Epicenter posted on Jul 25 2006 at 11:10 PM said:
There are fancier SoCs available, just not from MagicEyes. If you wanted more graphical horsepower for games, the VRender3D would be a better choice since the GPU can also be used for 2D acceleration through OpenGL. But you lose the processing power from the 940T which is being exploited in recent programs. It's a toss-up there.

How would OpenGL affect emus though? If you have to go through an API to do everything, that could be bad. Right now Linux is causing problems because of not being able to access the screen directly. OpenGL seems even worse. If the functions aren't there to be used how you want them and force you to do it the GL way, that can be a problem. It seems with newer and newer hardware they are keeping you farther away from the metal. This extra overhead keeps many coders from getting the most out of it when pushing to the limits, rather than just calling the pre-done features. You are not programming hardware anymore, you are programming other code that instructs the hardware. This "middleman" is a problem on slower devices from which you are trying to eke out the last CPU cycle. There are just too many layers of code running, sucking up precious CPU time. I guess that is not a huge problem when you have 4 GHz and gobs of RAM at your disposal, but on these small devices it is less practical. Things are just getting too bloaty. These handheld devices seemed to be the last bastion of lean and mean systems in a world of big bloated RAM-suckin' CPU-hoggin' computer OSs and APIs. I wonder what it would be like if devs had unrestricted access to the hardware of the GP2X now? I bet emus would run a lot faster; Squidge's MMU hack gives us a taste of that.
 
DaveC posted on Jul 25 2006 at 04:22 PM said:
[buncha shit]

Everything you tell OpenGL to do will be done in hardware by the GPU, rather than in software by the CPU; this is what makes your computer able to render stuff quickly. GL is useful for 2D as well as 3D, supporting things like rotation and primitive rendering, as well as 3D rendering.

That is the one advantage VRender3D has over MMSP2.
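If it helps picture it: under GL a rotated, transparent 2D sprite is nothing more than a textured quad. A sketch in old fixed-function GL (the ortho projection and the texture upload are assumed, not shown):

Code:
/* Sketch: one rotated, alpha-blended 2D sprite drawn as a textured quad
 * with fixed-function OpenGL. Assumes a screen-sized ortho projection and
 * a texture already uploaded as 'tex'. */
#include <GL/gl.h>

void draw_sprite(GLuint tex, float x, float y, float w, float h, float angle_deg)
{
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);                              /* hardware transparency */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, tex);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(x + w / 2.0f, y + h / 2.0f, 0.0f);  /* rotate around the centre */
    glRotatef(angle_deg, 0.0f, 0.0f, 1.0f);          /* hardware rotation */

    glBegin(GL_QUADS);                               /* the sprite is just a quad */
    glTexCoord2f(0, 0); glVertex2f(-w / 2, -h / 2);
    glTexCoord2f(1, 0); glVertex2f( w / 2, -h / 2);
    glTexCoord2f(1, 1); glVertex2f( w / 2,  h / 2);
    glTexCoord2f(0, 1); glVertex2f(-w / 2,  h / 2);
    glEnd();

    glPopMatrix();
}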
 
DaveC posted on Jul 26 2006 at 12:22 AM said:
Epicenter posted on Jul 25 2006 at 11:10 PM said:
There are fancier SoCs available, just not from MagicEyes. If you wanted more graphical horsepower for games, the VRender3D would be a better choice since the GPU can also be used for 2D acceleration through OpenGL. But you lose the processing power from the 940T which is being exploited in recent programs. It's a toss-up there.

How would OpenGL affect emus though? If you have to go through an API to do everything, that could be bad. Right now Linux is causing problems because of not being able to access the screen directly. OpenGL seems even worse. If the functions aren't there to be used how you want them and force you to do it the GL way, that can be a problem. It seems with newer and newer hardware they are keeping you farther away from the metal. This extra overhead keeps many coders from getting the most out of it when pushing to the limits, rather than just calling the pre-done features. You are not programming hardware anymore, you are programming other code that instructs the hardware. This "middleman" is a problem on slower devices from which you are trying to eke out the last CPU cycle. There are just too many layers of code running, sucking up precious CPU time. I guess that is not a huge problem when you have 4 GHz and gobs of RAM at your disposal, but on these small devices it is less practical. Things are just getting too bloaty. These handheld devices seemed to be the last bastion of lean and mean systems in a world of big bloated RAM-suckin' CPU-hoggin' computer OSs and APIs. I wonder what it would be like if devs had unrestricted access to the hardware of the GP2X now? I bet emus would run a lot faster; Squidge's MMU hack gives us a taste of that.

I love you DaveC, and your weird form of unbiased fanboyism :huh:. No I don't quite understand that phrase either, but it fits.
 
Blah posted on Jul 25 2006 at 11:29 PM said:
DaveC posted on Jul 25 2006 at 04:22 PM said:
[buncha shit]

Everything you tell OpenGL to do will be done in hardware by the GPU, rather than in software by the CPU; this is what makes your computer able to render stuff quickly. GL is useful for 2D as well as 3D, supporting things like rotation and primitive rendering, as well as 3D rendering.

That is the one advantage VRender3D has over MMSP2.

That is fine if you want to display some "sprites", rotate them, etc. But if you want to do something outside the feature set, then what? What if you need to render some 2D screens per scanline or per pixel, read some screen locations directly, and do some raster interrupts for effects in emus? You know, not everything is primitive or polygon based. Yeah, that stuff makes my computer render quickly *IN 3D*; not everything can be broken down into 3D polygons and primitives though. It will be nice for 3D games but more limited in emus. Again, why all of this *need* for 3D? Don't we have enough of that now that every mainstream console and handheld is 3D? Jeez, enough already.
 
DaveC-- you've got a few things mixed up here.

First off, the bit about OpenGL slowing things down. Right now, you have an API between the main program (let's use SquidgeSNES as an example) and the video hardware; here, a blitter and a framebuffer it writes into, which then forms the image on the screen in the end. SDL is that layer, which feeds into a video driver in the kernel and, in turn, into the blitter, framebuffer and screen. There are software layers either way, whether with the blitter in the MMSP2 *or* the OpenGL-based GPU in the VRENDER-3D. OpenGL would most likely just be implemented through SDL, as is quite common-- you can utilize OpenGL contexts within it quite neatly. Or the process could be automated like Paeryn's 'HW-SDL', kept hidden from the user but, behind the scenes, accelerating graphics operations. If developers want to get closer to the metal, they can write instructions directly for the GPU through their own lighter API, akin to how Minimal Lib works.
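For example, under SDL 1.2 getting an OpenGL context is just a flag on the video mode; a minimal sketch (the resolution and flags are only examples):

Code:
/* Minimal sketch: SDL 1.2 handing you an OpenGL context, into which you
 * then issue GL calls as usual. */
#include <SDL/SDL.h>
#include <GL/gl.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0) return 1;

    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);     /* ask for double buffering */
    if (!SDL_SetVideoMode(320, 240, 16, SDL_OPENGL)) { SDL_Quit(); return 1; }

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    for (int frame = 0; frame < 60; frame++) {
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... GL drawing (sprites, primitives, whatever) goes here ... */
        SDL_GL_SwapBuffers();                        /* flip the buffers */
    }

    SDL_Quit();
    return 0;
}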

Full hardware-based OpenGL support on a GPU such as that in the VR-3D would allow not only such fun gimmicks as hardware scaling and rotation, but also hardware-rendered alpha effects and transparency (of enormous obvious benefit to SNES emulation, and down the line, GBA emulation), and would also likely accelerate the drawing process of many 2D sprites (e.g. tiles in an emulator, or larger graphics in a homebrew game like Stargazer) to raise framerate during intense drawing operations. Decreased CPU utilization in draw operations may also be a benefit, freeing cycles for use in CPU core emulation. On top of that are the benefits of hardware 3D rendering.

As for the bit comparing the GP2X's available hardware to your average PC's, and the comparable tolerance for bloatware-- it's important to remember that while software has become more bloated over time, this has happened in a very different part of the ballpark: DirectX, not OpenGL-- the former also being a much more recent development. Even then, DirectX is not that big a burden in efficiently accessing graphics hardware. It's also important to remember that while a 200 MHz ARM9 processor may not seem powerful compared to something running at 4000 MHz like a slightly-overclocked P4, note how a CPU clocked closer to 2 GHz like an Athlon 64 or Core 2-series chip outperforms a 4 GHz P4! Clock rates can't be compared apples to apples or treated as the sole determinant of performance. Nor, as we've discussed before, can the capability of the GP2X when programmed for directly be compared to the sorts of games it can run in emulation-- the capabilities of the GP2X blow away those of any of the systems it can emulate, hundreds of times over.

Linux preventing access to the screen and getting in the way of full speed is no major issue. I would not even doubt that direct access to the display could be achieved. Regardless, performance gains from removing Linux from the equation are minimal-- nearly nothing on the OS side of things is using many CPU cycles at all. The Linux OS uses a good chunk of RAM, but nowhere near enough to eat into the '2X's impressive 64MB capacity-- remarkable compared to any other portable available today-- and the vast majority of it remains free even with Linux running. As for OpenGL, use of this API would have a negligible impact on RAM. No one complains that SDL uses too much RAM-- and OpenGL would use far less. As it stands, as a developer you have around 50MB to play with, with Linux running!

Now, what's this about Squidge's MMU hack being a taste of performance without an OS or APIs in the way? That's not what it's accelerating at all. Normally, the ARM920T can access the upper 32 MB of RAM in the GP2X but it is not cacheable, so access is fairly slow. The MMU hack allows the upper 32MB to be cacheable as well so it will be able to perform as well as the lower 32MB, providing a dramatic performance boost for programs like GNGeo that store huge ROM binaries in upper memory, including graphics data, or programs that make heavy use of the framebuffer residing in the 32MB upper memory area. The simple fact of the matter is that most of the gains of programming with no OS or API for the machine can be had through the use of ARM9 ASM, the MMU hack and just plain efficient code, and it comes with the benefit of enticing developers who don't want to go through the huge hassle of programming 'on the metal'. Were it not for Linux and SDL on the GP2X we'd have nowhere near the developer base we do today.
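(For anyone wondering what using the hack looks like from a program's point of view, it's roughly the snippet below. The module and device names follow the mmuhack module that's commonly distributed with GP2X programs, but verify against the version you actually ship.)

Code:
/* Sketch: how GP2X programs typically enable Squidge's MMU hack --
 * load the kernel module, then open its device so it patches the page
 * tables to mark the upper-32MB mappings cacheable/bufferable. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int enable_mmu_hack(void)
{
    int fd = open("/dev/mmuhack", O_RDWR);
    if (fd < 0) {
        system("/sbin/insmod mmuhack.o");   /* module not loaded yet -- load it */
        fd = open("/dev/mmuhack", O_RDWR);
    }
    if (fd < 0) {
        fprintf(stderr, "MMU hack unavailable, upper 32MB stays uncached\n");
        return -1;
    }
    close(fd);   /* the open itself triggers the page-table patch */
    return 0;
}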

DaveC said:
That is fine if you want to display some "sprites", rotate them, etc. But if you want to do something outside the feature set, then what? What if you need to render some 2D screens per scanline or per pixel, read some screen locations directly, and do some raster interrupts for effects in emus? You know, not everything is primitive or polygon based. Yeah, that stuff makes my computer render quickly *IN 3D*; not everything can be broken down into 3D polygons and primitives though. It will be nice for 3D games but more limited in emus. Again, why all of this *need* for 3D? Don't we have enough of that now that every mainstream console and handheld is 3D? Jeez, enough already.

You're really confused about this whole topic, aren't you? .. A GPU is just a microprocessor that's good at handling polygons, but is excellent for MANY graphical operations, 2D or 3D. Or just floating point and vector calculations. There's nothing that makes it a polygon engine alone. Why "sprites" in quotes? You don't HAVE to render sprites on polygons, you know. You can just as easily draw them in 2D space with a GPU. That's just one manner in which some 3D consoles like the PSX or Saturn pulled off accelerated 2D with a polygon engine.

There is nothing to keep you from doing more direct drawing operations; just as no one is MAKING you use the blitter in the GP2X. You don't have to use OpenGL if you don't wish to; if you want, write to the framebuffer directly, pixel by pixel, line by line, however you wish. At any rate, the implementation of a GPU won't add any limitations the GP2X doesn't have. If anything, though, a 3D GPU would be able to achieve levels of performance the '2x's blitter likely won't ever achieve.
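(And to illustrate that last point: writing straight to the framebuffer really is as blunt as mapping it and poking 16-bit pixels. A sketch using the usual /dev/fb0 route, assuming the GP2X's 320x240, 16bpp mode:)

Code:
/* Sketch: bypass the APIs and write pixels straight into the framebuffer. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define FB_W 320
#define FB_H 240

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    uint16_t *fb = mmap(NULL, FB_W * FB_H * 2, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { close(fd); return 1; }

    for (int y = 0; y < FB_H; y++)            /* pixel by pixel, line by line */
        for (int x = 0; x < FB_W; x++)
            fb[y * FB_W + x] = (uint16_t)((x ^ y) << 5);   /* an RGB565 test pattern */

    sleep(2);
    munmap(fb, FB_W * FB_H * 2);
    close(fd);
    return 0;
}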
 
Epicenter posted on Jul 25 2006 at 04:10 PM said:
But you lose the processing power from the 940T which is being exploited in recent programs. It's a toss-up there.
I have been looking, which programs are using it? If you have a couple of thread links, go ahead, or just the names of the programs would be fine and I can find the threads myself.
 
Vobbo's adding some use of the 940T to Hu6280, multiple 3D engines are utilizing it like a GPU, and there may be potential to use it like a 2D accelerator in the future. Plenty of possibility for talented programmers to tap its resources.
 
nubie posted on Jul 26 2006 at 05:09 PM said:
Epicenter posted on Jul 25 2006 at 04:10 PM said:
But you lose the processing power from the 940T which is being exploited in recent programs. It's a toss-up there.
I have been looking, which programs are using it? If you have a couple of thread links, go ahead, or just the names of the programs would be fine and I can find the threads myself.
My programs Vektar and Zooov both use the 940.
 
I just want to say that Epicenter seems to know his stuff.. every time I read his posts I'm like, "What does this guy do in his free time?" Are you an electronics engineer, or do you just love the hell out of electronics?
 