RAM & eMMC costs / power drain


While exo and the info above ruled out a two-chip 2GB configuration, it would be just as theoretically feasible to power down two of the four chips.

Turning away from a profitable 97% of buyers, when keeping them means having the same platform for everyone, isn't worth it for a power concern. If we then factor in that the power savings for the remaining 3% aren't great with four 512MB chips, there isn't a whole lot of sentiment there to get worked up about.

Drastically gimping the speed would just limit the whole thing. A lot of memory operations aren't even disk-related. You'd end up with a less capable device that very likely burns more power for the same tasks, with those tasks limited by that decision.

A variation on the theme of "saving yourself poor", as we say.

I dare say 8GB is a more sought-after alternative than 2GB at this point.

And think about how much you're saving by going with 4GB then :)
 
Well, the CPU is 32-bit, so using 8GB would be awkwardly (not impossibly) "slow".
 
And secondly, if we assume the 2-chip solution doesn't allow dual-channel memory, how much of a difference would that make to different tasks? In my experience a lot of tasks are limited by the speed of the SD card, or by the performance of the CPU or GPU, so faster RAM isn't such a big win. Agree?

I would argue that the issue here isn't the 4GB option being faster but rather the 2GB option being considerably slower. There's a difference between one option being slower than the platform was designed for and the other being faster by design.

-Neelix
 
Well, for transfers that use the full bandwidth it would literally be half speed, wouldn't it? Of course you're right that how much difference that makes in real-world applications can only really be determined through experience. I'd go out on a limb, though, and say that a lot of the heavier games may well take a performance hit in framerate.

-Neelix
 
Hmm, anyone know the cache sizes on the OMAP5432? I see it has 2MB of L2 cache, which suggests there's also L1 cache on each core, but it's probably relatively small compared to the L2 one.

I see that using both channels, the CPU supports 8.5GB/s throughput, which on a 60fps game would be 145MB per frame - a single channel version would only get 73MB per frame.

Unfortunately, I don't understand the GPU well enough to estimate whether that would be enough for a demanding 3D game - I understand it needs to pull in the screen RAM and push it back out again after rendering to it, but that's only 7.2MB by my calculations. Can it store texture and vertex data or does it need to read that every frame as well? Even if it does, there'd need to be nine times as much data to make a single channel implementation skip a frame - 18 screens' worth. Personally, I doubt anyone makes textures that big in anything I'm planning to play, but I don't make games for a living so perhaps I'm wrong on that one.
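For anyone who wants to check the frame-budget arithmetic, here's a back-of-envelope sketch. Only the 8.5GB/s dual-channel figure comes from the spec discussion above; everything else follows from it, and real sustained throughput will be lower.

```c
/* Back-of-envelope per-frame bandwidth budget, using the figures above.
 * 8.5 GB/s is the rated dual-channel peak; real throughput will be lower. */
#include <stdio.h>

int main(void)
{
    const double dual_gib_s   = 8.5;            /* rated peak, both channels */
    const double single_gib_s = dual_gib_s / 2; /* one channel */
    const double fps          = 60.0;

    printf("dual channel:   %.0f MiB/frame\n", dual_gib_s   * 1024 / fps); /* ~145 */
    printf("single channel: %.0f MiB/frame\n", single_gib_s * 1024 / fps); /* ~73 */
    return 0;
}
```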
 
Why? Is there a significant overhead with the page table setup involved?
Let's just say that the maximum directly addressable memory on a 32-bit system is 4 GB. If you go beyond that, you find bugs, lizards and dragons ....

More seriously:
It simply introduces more complexity and strange bugs if you cannot address all the memory with the native width of the CPU. Most of the time it is no problem, but once you hit the boundary (a developer rarely does, because their system rarely uses more than 4GB) you come across a lot of rather strange bugs (not bugs in your code, but bugs in other people's code that make no sense to them or to you). It is hard to describe, but every book/paper/article that says otherwise lies.

Jörg
 
As far as I know, Cortex-A-brand processors still use a memory management unit, an MMU. Different processes have different MMU configurations, so one process can be using the first 4GB and another the other half, meaning you only need two processes to max out the RAM - but it does mean a single process can only address 4GB straightforwardly.
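To see that per-process ceiling directly, here's a minimal sketch of my own (nothing OMAP5-specific): built as a 32-bit binary it exhausts the process's virtual address space somewhere below 4GB, no matter how much physical RAM is installed; a 64-bit build would run far longer.

```c
/* Sketch: a 32-bit process runs out of virtual address space below 4GB,
 * regardless of physical RAM. Build 32-bit (e.g. gcc -m32 demo.c) to see it. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 256u * 1024 * 1024; /* 256 MiB per allocation */
    unsigned long long total = 0;

    /* Deliberately leak: we only want to exhaust the address space. */
    while (malloc(chunk) != NULL)
        total += chunk;

    printf("ran out after reserving ~%llu MiB of address space\n",
           total / (1024 * 1024));
    return 0;
}
```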
 
Hmm, anyone know the cache sizes on the OMAP5432? I see it has 2MB of L2 cache, which suggests there's also L1 cache on each core, but it's probably relatively small compared to the L2 one.

I see that using both channels, the CPU supports 8.5GB/s throughput, which on a 60fps game would be 145MB per frame - a single channel version would only get 73MB per frame.

Unfortunately, I don't understand the GPU well enough to estimate whether that would be enough for a demanding 3D game - I understand it needs to pull in the screen RAM and push it back out again after rendering to it, but that's only 7.2MB by my calculations. Can it store texture and vertex data or does it need to read that every frame as well? Even if it does, there'd need to be nine times as much data to make a single channel implementation skip a frame - 18 screens' worth. Personally, I doubt anyone makes textures that big in anything I'm planning to play, but I don't make games for a living so perhaps I'm wrong on that one.

The L1 caches on the CPU cores are a 32KB data cache and a 32KB instruction cache per core.

A 32bpp framebuffer is only 3.5MB, assuming a single resolve is needed per frame with the TBDR GPU and no depth buffer is needed. That isn't necessarily true with things like shadow mapping, which may require multiple resolves, including depth.
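Putting rough numbers on that - the 1280x720 resolution and 32-bit depth size are my assumptions, and the pass counts are purely illustrative:

```c
/* Render-target traffic per frame at 1280x720. A "resolve" is the TBDR
 * writing a finished tile out to RAM; pass counts here are illustrative. */
#include <stdio.h>

int main(void)
{
    const double mib   = 1024.0 * 1024.0;
    const double color = 1280 * 720 * 4 / mib; /* 32bpp color: ~3.5 MiB */
    const double depth = 1280 * 720 * 4 / mib; /* 32-bit depth, if resolved */

    printf("single color resolve:        %.1f MiB/frame\n", color);
    /* e.g. an extra shadow/depth pass on top of the main color pass */
    printf("extra depth + color resolve: %.1f MiB/frame\n",
           color + color + depth);
    return 0;
}
```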

Yes, the textures and vertex data generally have to be read every frame. There are texture caches on the GPU, but they're relatively small and do more to help prefetching than to retain data. Textures also include things like normal maps, light maps, etc. The thing with TBDRs is that while they save a lot of bandwidth by tiling the render targets, they need the entire scene queued up before rendering begins. This means the vertex data will never realistically stay in FIFOs, and because of the tiling it will have to be accessed multiple times. So there's a tradeoff of increased bandwidth spent on geometry.
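A rough model of that geometry overhead - every figure below is a made-up assumption of mine, just to show the shape of the tradeoff:

```c
/* Illustrative TBDR geometry traffic: vertices are read once for binning,
 * the transformed output is written back, then re-read for each tile a
 * primitive touches. All figures below are made-up assumptions. */
#include <stdio.h>

int main(void)
{
    const double raw_mib    = 8.0;  /* assumed raw vertex data per frame */
    const double binned_mib = 10.0; /* assumed post-transform, binned output */
    const double overlap    = 1.5;  /* assumed avg tiles per primitive */

    double traffic = raw_mib               /* read for vertex shading */
                   + binned_mib            /* written back, binned */
                   + binned_mib * overlap; /* re-read tile by tile */
    printf("geometry traffic: ~%.0f MiB/frame\n", traffic);
    return 0;
}
```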

And of course the CPU itself needs some of the bandwidth at the same time.

In practice it won't be possible to saturate the full bandwidth even with a very good memory controller; some of it goes to overhead like refresh cycles, so don't count on the entire rated value being available.
 
Let's just say that the maximum directly addressable memory on a 32-bit system is 4 GB.
Nah, that's a problem that was solved ages ago. A single process, with its 32-bit pointers, can only address up to 4GB, but each process can be given its own separate pages, up to 4GB each. The hardware has supported extended paging, upwards of 36 physical address bits (i.e. 64GB of physical RAM), for at least 20 years: see PAE. This is all handled by the MMU, so there's no performance cost.
Microsoft consistently chose not to support this in order to upsell their OS (or for stability reasons, as they claim), but Linux has no such restriction.
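If you want to see that distinction on a running system, here's a small Linux-specific sketch: the machine's total physical RAM, as reported via sysconf, can exceed the 4GiB that one 32-bit pointer can span - which is exactly the gap PAE-style extended paging bridges.

```c
/* The PAE point in code: total physical RAM can exceed the 4 GiB a single
 * 32-bit pointer can span. Linux-specific sysconf values; the arithmetic
 * is done in 64 bits so it can't wrap on a 32-bit build. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    uint64_t phys = (uint64_t)sysconf(_SC_PHYS_PAGES)
                  * (uint64_t)sysconf(_SC_PAGE_SIZE);
    const uint64_t four_gib = (uint64_t)1 << 32; /* one 32-bit pointer's reach */

    printf("physical RAM: %llu MiB\n", (unsigned long long)(phys >> 20));
    printf("32-bit span:  %llu MiB\n", (unsigned long long)(four_gib >> 20));
    if (phys > four_gib)
        printf("more RAM than one 32-bit address space can cover\n");
    return 0;
}
```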
 
Let's just say that the maximum directly addressable memory on a 32-bit system is 4 GB. If you go beyond that, you find bugs, lizards and dragons ....

More seriously:
It simply introduces more complexity and strange bugs if you cannot address all the memory with the native width of the CPU. Most of the time it is no problem, but once you hit the boundary (a developer rarely does, because their system rarely uses more than 4GB) you come across a lot of rather strange bugs (not bugs in your code, but bugs in other people's code that make no sense to them or to you). It is hard to describe, but every book/paper/article that says otherwise lies.

Jörg

Please describe in detail. In theory the MMU should handle it just fine. Where are the problems?
 
In response to the latest bout of nonsense from a self-confessed idiot in this post in the media & press thread.

You DEMANDED that a 4GB version be evaluated and offered. You made very loud and public claims that 2GB was inadequate.

Nope, I REQUESTED that a 4GB version be evaluated, should it be possible to do so, in the belief that the option would likely generate additional sales/profits for the project. I explained why I personally might be willing to pay extra for a 4GB version, depending on the impact on performance, retail cost & battery life.

Now you are DEMANDING, in the NEWS forum, that a prototype board run, at a cost in the range of 15.000 EUR, be executed to validate the hardware spec. Just scheduling that can take a month. This is a like-part substitution on the board spec. It should not require a new prototype run.

Nope, I'm SUGGESTING in the GENERAL DISCUSSION forum that the pre-order FAQ could be improved: instead of saying "It's not 100% clear that we'll create a 4GB RAM version, so if we won't produce one, you'll (FILL IN THE BLANK) the 2GB version.", it would be better to explain why it's not 100% clear and to let people know when it will become 100% clear.

Is it your goal in life to add confusion and skepticism? Are you trying to derail the project? What you're writing into this news forum is going to be among the first stop for reporters to scan community reactions.

Nope, but if I see a potential problem or missed opportunity I'm very happy to mention it, in the belief that if something that could easily be changed were changed, it could benefit the project.

You are making a proverbial mountain out of a molehill again. Last time they obliged your demands because the parts became available to do it. Now you're demanding that the project derail by a month and spend 15.000 EUR unnecessarily to satisfy your concerns. Those concerns are because of changes that YOU demanded in the first place.

Nope, I'm suggesting that ED let people know when he plans to produce & test (for overall functionality and impacts on performance and battery life) a board populated with 4GB of RAM. That will happen at some point, and people who have ordered a 4GB version, or those undecided between 4GB & 2GB, might be interested in knowing when it is likely to be.
 
Honestly, pal, is it really worth being aggressive? I'm not sure you need to insult people, even if you don't like each other. Actually, from both of your points of view, you're both right.
 