The Open Pandora 2 must use an AMD APU!


Regardless, there isn't an intrinsic reason any of that has to go off-tile, if you have enough tile memory to support each render target, or it's known that one target is no longer being used and can be released mid-scene (the texture cache should also be capable of hitting the tile memory). That's not to say that's how it actually works though.

PVR tile memory is just a single tiny tile per core (32x32 iirc).


It gets flushed to RAM and reused as the frame buffer is rendered tile-by-tile.


The whole point of tile-based rendering is that you only need one tile on die.


AFAIK none of the tile-based-rendering GPUs hold more than one tile per core.


That wouldn't make for a big texture, and then you couldn't render anything since the tile would be filled with the texture.


It's not like a 360 with 10 MB of eDRAM "inside" the GPU.
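
To make the tile-by-tile flow above concrete, here's a minimal conceptual sketch of what a tile-based renderer does; the structure names and the shade_tile() stand-in are made up for illustration, not how PVR hardware is actually programmed.

```c
/* Conceptual sketch of tile-based rendering: one small on-chip tile,
 * shaded and then flushed to the framebuffer in RAM, over and over.
 * All names are hypothetical. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define TILE_W 32
#define TILE_H 32

/* The only on-chip pixel storage: a single 32x32 tile. */
static uint32_t tile_color[TILE_W * TILE_H];

static void shade_tile(int tx, int ty)
{
    /* Stand-in for rasterizing the geometry binned to this tile. */
    for (int i = 0; i < TILE_W * TILE_H; ++i)
        tile_color[i] = 0xFF000000u | (uint32_t)(tx * 8 + ty);
}

static void flush_tile(uint32_t *fb_in_ram, int fb_w, int tx, int ty)
{
    /* Copy the finished tile out to system RAM; the same on-chip
     * memory is then reused for the next tile. */
    for (int y = 0; y < TILE_H; ++y)
        memcpy(&fb_in_ram[(ty * TILE_H + y) * fb_w + tx * TILE_W],
               &tile_color[y * TILE_W], TILE_W * sizeof(uint32_t));
}

static void render_frame(uint32_t *fb_in_ram, int fb_w, int fb_h)
{
    for (int ty = 0; ty < fb_h / TILE_H; ++ty)
        for (int tx = 0; tx < fb_w / TILE_W; ++tx) {
            memset(tile_color, 0, sizeof tile_color);  /* clears are cheap on-chip */
            shade_tile(tx, ty);
            flush_tile(fb_in_ram, fb_w, tx, ty);
        }
}

int main(void)
{
    int fb_w = 640, fb_h = 480;
    uint32_t *fb = calloc((size_t)fb_w * fb_h, sizeof(uint32_t));
    if (fb) { render_frame(fb, fb_w, fb_h); free(fb); }
    return 0;
}
```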
 
You're still not getting it. None of what you described requires more than one 32x32 tile to be rendered at a time, with every pass staying on tile. There's no rule that a tile can only handle one draw call or even one render target, nor is there a rule that it has to resolve to the framebuffer to do render-to-texture. It just has to have enough tile memory for every active render target.


This is invalidated if there's something that really does require a resolve, like reading the framebuffer into CPU-visible RAM in between passes... but I'm assuming you're not thinking of something like that.
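
As a sketch of what I mean (purely hypothetical pass names, not a real driver API): nothing below forces a flush to RAM between passes, as long as every target still in use fits in the tile memory.

```c
/* Hypothetical per-tile multi-pass flow: several passes and targets live
 * in the same on-chip tile memory, and only the final result is resolved
 * out to RAM once per tile. Pass bodies are illustrative stubs. */
#include <stdint.h>

#define TILE_W 32
#define TILE_H 32

typedef struct {
    uint32_t color[TILE_W * TILE_H];  /* main color target */
    uint32_t aux[TILE_W * TILE_H];    /* e.g. a render-to-texture target */
    float    depth[TILE_W * TILE_H];
} tile_mem;

static void pass_opaque(tile_mem *t)      { (void)t; /* writes color + depth */ }
static void pass_to_texture(tile_mem *t)  { (void)t; /* writes aux, reads depth */ }
static void pass_composite(tile_mem *t)   { (void)t; /* reads aux, writes color */ }
static void resolve(const tile_mem *t, uint32_t *fb) { (void)t; (void)fb; /* one flush */ }

static void render_tile(tile_mem *t, uint32_t *fb)
{
    pass_opaque(t);
    pass_to_texture(t);   /* still on-tile: no round trip through RAM */
    pass_composite(t);
    resolve(t, fb);       /* single flush per tile, at the very end */
}

int main(void)
{
    static tile_mem t;                 /* static: keeps the tile off the stack */
    static uint32_t fb[640 * 480];
    render_tile(&t, fb);
    return 0;
}
```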
 
We're not using an x86 CPU. So, let's go back to the off-topic conversation around electric and hybrid-electric vehicles.
 
You're still not getting it. None of what you described requires more than one 32x32 tile to be rendered at a time, with every pass staying on tile. There's no rule that a tile can only handle one draw call or even one render target, nor is there a rule that it has to resolve to the framebuffer to do render-to-texture. It just has to have enough tile memory for every active render target.


This is invalidated if there's something that really does require a resolve, like reading the framebuffer into CPU-visible RAM in between passes... but I'm assuming you're not thinking of something like that.

Bloom requires a blurred version of the scene, which means accessing neighboring tiles.
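
A tiny worked example of why that breaks an on-tile-only approach (tile size and kernel width are just assumptions): a blur tap near a tile edge reads pixels owned by the neighboring tile, so the blur source has to exist as a complete texture first.

```c
/* Hypothetical 1D illustration: a 9-tap blur at a pixel near the right
 * edge of tile 0 has to read samples that belong to tile 1, so the blur
 * source must already have been resolved out to memory. */
#include <stdio.h>

#define TILE_W 32

int main(void)
{
    int x = 30;        /* a pixel near the right edge of tile 0 */
    int radius = 4;    /* 9-tap kernel: samples x-4 .. x+4 */

    for (int dx = -radius; dx <= radius; ++dx) {
        int sample_x = x + dx;
        printf("tap at x=%d lives in tile %d\n", sample_x, sample_x / TILE_W);
    }
    /* The taps at x=32..34 fall in the neighboring tile. */
    return 0;
}
```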


Realtime shadows require rendering the scene as a depth-only texture from the point of view of the light; it is not rendered in camera space, so it does not match the render tiles of the final render.


The tile at (0,0) in the light-space depth-only render pass can be anywhere on or off the screen, and can have any orientation and projection unrelated to the camera.


http://en.wikipedia.org/wiki/Shadow_mapping


It needs to be flushed from the render tile into a texture.
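
To spell out the two-pass flow being described (all helper names below are hypothetical stubs, not a real API):

```c
/* Sketch of shadow mapping as two passes. Pass 1 renders depth in *light*
 * space, so its tiles have no relation to the camera-space tiles of pass 2,
 * and its result has to live in memory as a texture before pass 2 runs. */
typedef struct { float m[16]; } mat4;
typedef struct { int w, h; float *texels; } depth_texture;

/* Pass 1: depth-only render from the light's point of view. */
static void render_depth_from_light(const mat4 *light_vp, depth_texture *out)
{
    (void)light_vp; (void)out;   /* on a TBDR this pass resolves out to RAM */
}

/* Pass 2: camera-space render that samples the shadow map as a texture. */
static void render_scene_with_shadows(const mat4 *cam_vp, const mat4 *light_vp,
                                      const depth_texture *shadow_map)
{
    (void)cam_vp; (void)light_vp; (void)shadow_map;
}

static void draw_frame(const mat4 *cam_vp, const mat4 *light_vp, depth_texture *shadow_map)
{
    render_depth_from_light(light_vp, shadow_map);            /* must fully finish first */
    render_scene_with_shadows(cam_vp, light_vp, shadow_map);  /* then reads it as a texture */
}

int main(void)
{
    mat4 cam_vp = {{0}}, light_vp = {{0}};
    depth_texture shadow_map = { 1024, 1024, 0 };
    draw_frame(&cam_vp, &light_vp, &shadow_map);
    return 0;
}
```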
 
Steam power ftw! :rolleyes:

"In 1974, Saab-Scania Sweden’s Saab Car Division began developing what is known as the “Saab Nine Cylinder Axial Steam Engine” also known as project ULF."


http://www.saabhisto...b-steam-engine/


or VW's 1.0 L, 220 hp (164 kW), 369 lb-ft of torque steam car: http://en.wikipedia.org/wiki/Steam_car#Enginion_Steamcell


(in comparison, my van with its 7.3 L turbo-diesel barely generates 250 hp, 425 lb-ft of torque)


and we haven't mentioned nuclear-powered cars yet :)
 
Ahah! That explains why Steam requires an x86 processor to run: it needs the excess heat energy.
 
Bloom requires a blurred version of the scene, which means accessing neighboring tiles.

Yeah okay, I'll give you that.

Realtime shadows require rendering the scene as a depth-only texture from the point of view of the light; it is not rendered in camera space, so it does not match the render tiles of the final render.

The tile at (0,0) in the light-space depth-only render pass can be anywhere on or off the screen, and can have any orientation and projection unrelated to the camera.


http://en.wikipedia.org/wiki/Shadow_mapping


It needs to be flushed from the render tile into a texture.

Granted too; however, since you only need the depth buffer, the bandwidth is lower than for a full scene target. Particularly if there's depth compression, which a TBDR could be better at handling.
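
Rough numbers to back that up, using purely assumed sizes (a 1024x1024 16-bit shadow map versus a 1280x720 scene target with 32-bit color plus 32-bit depth/stencil), before any depth compression:

```c
/* Back-of-envelope resolve sizes; resolutions and formats are assumptions
 * for illustration only. */
#include <stdio.h>

int main(void)
{
    unsigned shadow_bytes = 1024u * 1024u * 2u;        /* D16 shadow map */
    unsigned scene_bytes  = 1280u * 720u * (4u + 4u);  /* RGBA8 + D24S8 */

    printf("shadow map resolve:   %u KiB\n", shadow_bytes / 1024u);  /* 2048 KiB */
    printf("scene target resolve: %u KiB\n", scene_bytes / 1024u);   /* 7200 KiB */
    return 0;
}
```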


Barring that, shadow volumes wouldn't have this problem.
 
I'm very surprised to find myself posting this, but I can see the point of a cut-down x86 chip.


There is a problem with who will support Pandora Linux if/when notaz gives up.


Using x86 with Intel graphics would allow us to run more recent versions of Windows (for them as likes), mainstream Linux, and reduce the amount of porting necessary.


If the power consumption is not exorbitant (it seems to be getting better), this would be an interesting solution...
 
Using x86 with Intel graphics would allow us to run more recent versions of Windows (for them as likes), mainstream Linux, and reduce the amount of porting necessary.

Porting (aside from the kernel) should already be unnecessary, but we got this weird PND system since we're running off a tiny internal filesystem rather than running straight off one of the SD cards.


There would also be a lot less porting needed if the gaming control /dev interface were like a regular PC USB gamepad, and not a separate device for each of (keyboard + buttons), (left nub), and (right nub).
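
For example, a port that expects one gamepad node ends up having to do something like this instead (a minimal sketch using the standard Linux evdev interface; the event node paths are placeholders, not the Pandora's actual device names):

```c
/* Minimal sketch of reading three separate input devices and merging them
 * into one logical pad. Device paths are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    const char *devs[3] = {
        "/dev/input/event1",  /* keyboard + game buttons (placeholder) */
        "/dev/input/event2",  /* left nub (placeholder) */
        "/dev/input/event3",  /* right nub (placeholder) */
    };
    int fds[3];

    for (int i = 0; i < 3; ++i) {
        fds[i] = open(devs[i], O_RDONLY | O_NONBLOCK);
        if (fds[i] < 0)
            perror(devs[i]);
    }

    /* A port has to poll all three nodes, instead of just one gamepad device. */
    struct input_event ev;
    for (int i = 0; i < 3; ++i) {
        if (fds[i] >= 0 && read(fds[i], &ev, sizeof ev) == (ssize_t)sizeof ev)
            printf("%s: type=%u code=%u value=%d\n",
                   devs[i], (unsigned)ev.type, (unsigned)ev.code, ev.value);
    }

    for (int i = 0; i < 3; ++i)
        if (fds[i] >= 0)
            close(fds[i]);
    return 0;
}
```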


PandaBoard and BeagleBoard run regular Linux distributions off an SD card, and it's not different from running on x86.


You can use opk, apt-get, synaptic, or whatever nice package manager you like.


No wasted time and effort getting PND to work, no need for a special apps website; just a regular package repository on a regular Linux distribution with a ton of support from the larger ARM Linux community.


No need to do special flashing of the firmware; just update through the normal Linux distribution channels, which already support notification of new versions and updates.


The internal NAND should just have a minimal system that lets you install a choice of working, regular Linux distributions to an SD card.
 
For the next Pandora I would say an ARM-based SOC would be the way to go. I believe the SOC options in the timeframe we are talking about should mean an ARM-based SOC will still be the best option at the time.


Intel are only really now starting to make some serious inroads with their mobile SOCs. One day, probably 3 years from now, they will have a SOC that might stack up for performance and battery life, but until then it's ARM in my opinion.


One of the things that has amazed me is the time it has taken Intel to start looking at the mobile arena more seriously. It's almost like they thought it wouldn't affect them. Now that tablets are starting to outsell laptops, they must be shitting themselves with the realisation of the mistake they have made in not getting into the mobile SOC arena earlier. I would certainly expect a major push by them and a decent SOC, but not in the timeframe for a P2.
 