5" screen (on a P2)


If we could get them, I'd want a 1920*1080 LCD.
What a waste, you'd have to push ~250MB/s to update such a thing at 60Hz even at 16bpp, slowing everything down on the system and wasting lots of power.
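For reference, that figure is just width × height × bytes per pixel × refresh rate; a quick back-of-envelope check:

```python
# Scanout bandwidth for a 1920x1080 panel at 60Hz, 16bpp (e.g. RGB565).
width, height = 1920, 1080
bytes_per_pixel = 2   # 16bpp
refresh_hz = 60

bandwidth = width * height * bytes_per_pixel * refresh_hz
print(f"{bandwidth / 1e6:.0f} MB/s")  # ~249 MB/s, i.e. the ~250MB/s quoted above
```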

I remember the same arguments when we went with 800*480 on the Pandora; there was a lengthy thread about how it was a bad idea. I'm pretty sure we're all in agreement now, looking back, that the Pandora LCD/resolution was a good move?


You have got to design as far ahead as feasible or the P2 will be dated before it even launches.


I'd like to explain more about the P2 design and why this screen is very important to it, but we're not planning to do that until 2013. All will be explained, and it will make sense when you see it.
 
Are there ways to have the extra screen resolution, but use some kind of passive pixel doubling to save GPU load?
 
Who was opposing 800x480 for the Pandora's display back then? When the alternative was what, 320x240? 480x272? The choice was obvious. The resolution was already a commodity; it had been on PMPs and MIDs for ages, and Nokia had been using it in their internet devices for years. I don't remember this anti-800x480 thread, but whoever was against it was probably a misinformed minority, or was more against the size or aspect ratio than the number of pixels.


So you want to design something future-proof, fair enough. Why exactly would you believe that a 5" 1080p screen is going to be obsoleted within Pandora 2's lifetime, or ever for that matter? Not every device metric is going to keep improving forever. In fact, Apple's entire retina marketing campaign revolves around their resolutions being the ideal and the best you'll ever need. This centers around a magic 300ppi number for handheld devices. So is 440ppi really going to be demanded at first, then deemed not good enough?
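(For the numbers in that paragraph: pixel density is just the diagonal pixel count over the diagonal size. A quick sketch:)

```python
import math

def ppi(w, h, diagonal_inches):
    # Pixels per inch along the diagonal.
    return math.hypot(w, h) / diagonal_inches

print(f"{ppi(960, 640, 3.5):.0f} ppi")    # iPhone 4, 3.5" 960x640: ~330 ppi
print(f"{ppi(1920, 1080, 5.0):.0f} ppi")  # 5" 1920x1080 panel: ~440 ppi
```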


Yeah we're suddenly seeing big resolution bumps, mainly driven by Apple. Part of Apple's resolution strategy is to double the vertical and horizontal resolution in order to allow an easy and nice looking transition between their old and new graphics. When they do this the rest of the industry isn't so quick to copy their resolutions. They aim somewhat lower, because there's no fundamental need to go so high for them. We've seen some phones beat iPhone 4 (1.5 years old now) in resolution but usually with a much bigger screen. Its pixel density has remained close to uncontested for phones.


The best thing you can do to keep Pandora from being too dated when it launches is to launch the thing on time. Pandora 1 was about the best it could have been back in 2008 (which still wasn't the best on the market, but oh well). Its problem wasn't aiming the bar too low, it was taking forever to be released.

Are there ways to have the extra screen resolution, but use some kind of passive pixel doubling to save GPU load?

Yes, SoCs usually have display controllers that can perform scaling (it doesn't necessarily have to be integer scaling and can include filtering; at least this is true for Pandora's OMAP3530, which is old tech now). The only additional cost then is the link bandwidth between the display controller and the display.
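(To make "passive pixel doubling" concrete, here's a toy sketch of the 2x nearest-neighbour case in Python/numpy; a display controller does the equivalent in hardware on the way to the panel, so the GPU never touches the extra pixels:)

```python
import numpy as np

def pixel_double(frame, factor=2):
    # Nearest-neighbour integer upscale: repeat every pixel `factor`
    # times in both directions, which is all "passive doubling" is.
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

small = np.zeros((540, 960, 3), dtype=np.uint8)  # render at 960x540...
big = pixel_double(small)                        # ...scan out at 1920x1080
print(big.shape)  # (1080, 1920, 3)
```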
 
I'd like to explain more about the P2 design and why this screen is very important to it, but we're not planning to do that until 2013. All will be explained, and it will make sense when you see it.
And Notaz will be one of the first people to know about it as it develops if he is interested. :)


The 1080p screen sure has the wow factor, but thinking about it some more leads me to think a 720p screen would be more suitable. I suppose it depends on what can be purchased in small quantities.
 
If we could get them, I'd want a 1920*1080 LCD.
What a waste, you'd have to push ~250MB/s to update such a thing at 60Hz even at 16bpp, slowing everything down on the system and wasting lots of power.
+1


and that's just to WRITE to the screen, you need to read it from memory too!


and you need to draw the screen first!!!


you have to count at the very least 32x that (8GB/sec) to render a 3D scene at this resolution.


a single 64bpp render buffer (32bit RGBA + 24 bit depth + 8bit stencil buffer) at 60fps will need 1GB/sec just to clear the frame buffer.


even a lot of Xbox 360 and PS3 games render their 3D scene at 720p, upscale it to 1080p, then add the HUD at 1080p, because there just isn't enough fillrate to do full-resolution 3D.
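(Checking those figures as stated, a quick sketch: 64bpp is 8 bytes per pixel, and 32x the 16bpp scanout number lands on the 8GB/sec mentioned:)

```python
# Sanity-check the bandwidth figures for a 1920x1080 target at 60fps.
pixels = 1920 * 1080
scanout_16bpp = pixels * 2 * 60   # panel refresh at 16bpp
clear_64bpp   = pixels * 8 * 60   # 32bit RGBA + 24bit depth + 8bit stencil

print(f"scanout: {scanout_16bpp / 1e6:.0f} MB/s")           # ~249 MB/s
print(f"clear:   {clear_64bpp / 1e9:.2f} GB/s")             # ~1.00 GB/s
print(f"32x scanout: {32 * scanout_16bpp / 1e9:.0f} GB/s")  # ~8 GB/s
```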
 
I'm pretty confident that 1920*1080 will be the limit on smartphone resolutions for the foreseeable future, maybe even for 6-10 years, because, as you say, it's really pointless to go any further when you can't really see the difference. I'd expect 1080p to be replaced by foldable screens or some other not-yet-available tech.


I expect things like the Galaxy4 to be 1920*1080 in 2013, consider that the Galaxy3 is already 1280*720.


So we might as well plan for this and include it so we are already up there with the other tech in 2013.


You don't have to render to the LCD at that res, you can always half it or do whatever you like.
 
What a waste, you'd have to push ~250MB/s to update such a thing at 60Hz even at 16bpp, slowing everything down on the system and wasting lots of power.
And I'm sure it costs a lot of CPU power to display all the (retro) games in such a big res, games that were never made for such a res. I know that LCD displays can do HW scaling, but I guess you can never be sure that this doesn't cost any CPU/GPU power. Besides, I can't even read stuff on the tiny Pandora screen; imagine an XFCE desktop in full HD on a 5" screen. Magnifiers will have to be included with the Pandora 2, I'm sure. :D

The best thing you can do to keep Pandora from being too dated when it launches is to launch the thing on time. Pandora 1 was about the best it could have been back in 2008 (which still wasn't the best on the market, but oh well). Its problem wasn't aiming the bar too low, it was taking forever to be released.
This. Time is money: the faster the P2 hits the market, the more reliable the device will be and the sooner the team can make a profit. Easy math. :)


Basically the only reason the Pandora is so expensive now is the massive delay and the problems the 3rd-party companies caused. It's bad to pass these additional costs on to the customers rather than back to those incapable companies themselves, but at least the OP team can learn from this and make better decisions for the Pandora 2.


No need to use HW that is "years ahead" (and expensive) if the Pandora 2 hits the market in time, say 6 months from planning stage to mass production.
 
It won't be 6 months, it will be at least a year from now. Possibly Christmas 2013.


But it's a design which is really nice. The LCD decision will make sense when you understand how the P2 works.
 
Craigix said:
I expect things like the Galaxy4 to be 1920*1080 in 2013, consider that the Galaxy3 is already 1280*720.

So we might as well plan for this and include it so we are already up there with the other tech in 2013.
Guess we'll find out. My money is on it being 1280x720 again, but this time with full subpixel resolution. Maybe.

you have to count at the very least 32x that (8GB/sec) to render a 3D scene at this resolution.

a single 64bpp render buffer (32bit RGBA + 24 bit depth + 8bit stencil buffer) at 60fps will need 1GB/sec just to clear the frame buffer.

Most mobile SoCs use tiling, where only the framebuffer has to be written to and only once, unless several passes can't be handled within the same tile for some reason. This is often done in conjunction with dithering to a lower bit depth (like you were advocating before). So it's nowhere near as extreme as you were saying.. how did you get 32x anyway? That's some really extreme overdraw or several passes..
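(A toy sketch of the tiling idea, assuming nothing about any particular GPU: all the clearing, overdraw, and blending happens in a small on-chip buffer, and the framebuffer in RAM is written exactly once per tile:)

```python
import numpy as np

TILE = 32  # on-chip tile buffer, e.g. 32x32 pixels of SRAM

def render_tiled(width, height, shade_tile):
    fb = np.empty((height, width, 4), dtype=np.uint8)  # framebuffer in RAM
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            h = min(TILE, height - ty)
            w = min(TILE, width - tx)
            tile = np.zeros((h, w, 4), dtype=np.uint8)  # "clear" happens on-chip
            shade_tile(tile, tx, ty)      # all passes/overdraw stay in the tile
            fb[ty:ty+h, tx:tx+w] = tile   # single resolve write to RAM
    return fb

# Trivial "shader" that just fills each tile with a flat colour.
fb = render_tiled(1920, 1080, lambda tile, tx, ty: tile.fill(128))
print(fb.shape)  # (1080, 1920, 4)
```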
 

tiling or not, the processing still has to be done internally, so the GPU would still need about 8GB/sec internally.


1x is just to clear the internal tiles.


at least 2x for particles/alpha (2x worth of overdraw on particles is common, including water surface)


post-processing effects (like bloom, depth of field, etc.) will easily add another 3x worth of processing (dump, read, write)


and add about 1x worth of overdraw for rendering a shadow/light map in order to cast a single set of dynamic shadows on the whole scene.


you'll need another 1x worth to read the result from the internal tile and dump the result to RAM.


I'm not even including multiple shadow-casting lights, just a single sun.


so that's about 8x worth of screen-sized processing.


unless all you want is 1995-era graphics quality.


talking from experience, for modern 3D graphics you need around 8x the render buffer's worth in pixel fill rate.
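(Tallying the multipliers listed above, a quick sketch of where that ~8x lands in absolute terms:)

```python
# Screen-sized passes from the breakdown above.
passes = {
    "clear internal tiles":              1,
    "particles/alpha overdraw":          2,
    "post-processing (dump/read/write)": 3,
    "shadow/light map, single sun":      1,
    "resolve tiles to RAM":              1,
}
total = sum(passes.values())              # 8x
buffer_gbs = 1920 * 1080 * 8 * 60 / 1e9   # one 64bpp pass at 60fps, ~1 GB/s
print(f"{total}x -> ~{total * buffer_gbs:.0f} GB/s")  # ~8 GB/s
```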


a 2D game is just as bad since every sprite and BG layer is an alpha cut-out; with particles and text (damage displays, combo counters, etc.) they often end up more demanding on mobile GPUs than 3D games due to the heavy texture fetches and the huge amount of overdraw with alpha.
 
tiling or not, the processing still has to be done internally, so the GPU would still need about 8GB/sec internal.

We're talking on-chip SRAM on the order of dozens of KB for tiling. Whatever its internal bandwidth is, it's probably going to be matched to be able to handle whatever the peak pixel output rate is. The tiles also may be double buffered so it can simultaneously resolve one to RAM while you're updating the next one. And may have separate memories/interfaces for render target + depth buffer. And perhaps compression so that you don't need to write zeros to clear the buffers (a single bit to tell if it's zero or not would be sufficient). And of course in the case of multi-core (like SGX543MP4 on iPad 3 and PS Vita) you've got several tile engines with internal tile memory working in parallel..


The point is, in reference to what notaz said, if you want to have a 1920x1080 desktop it'll use a lot more resources regardless of what you're doing on the screen. But with games you have a lot of flexibility to render to some other target resolution; the LCD resolution doesn't force anything.. Infinity Blade on iPad 3 for instance doesn't render to anywhere close to the full resolution. And of course 60FPS at that resolution is even less realistic; hell, I doubt most phone games target 60FPS for typical hardware to begin with. They probably go for something like 30.


Of course that doesn't mean these GPUs will easily have the fillrate to handle any kind of complexity at 1920x1080p. Even the current consoles usually don't, like you say.
 
If we could get them, I'd want a 1920*1080 LCD.
What a waste, you'd have to push ~250MB/s to update such a thing at 60Hz even at 16bpp, slowing everything down on the system and wasting lots of power.
I don't know what the actual usage will be like, but OMAP5 (and even starting with OMAP4) has really upped the pipeline for memory bandwidth. They are designing these to run at 1080p. Again, I haven't been able to do anything with OMAP4 (never did get my Pandaboard up and running a while back due to issues in my Linux environment that I never went back to). The new 4470 chip has dual-channel LPDDR2 at 466MHz and a listed memory bandwidth above 5.2GB/s at 70% efficiency. OMAP5 clocks at 532MHz and gives a rating of 8.5GB/s. They are designed to run three LCDs at the same time as well as one HDMI port, so there must be some serious throughput considerations in the design. Samsung's Exynos chip has a much higher bandwidth rating using DDR3 at 12.8GB/s (but doesn't DDR3 have more latencies and stuff?).


I think these chips are a leap forward beyond anything we are used to expecting. There would be a bit more power consumption with more data running around the bus, but these chips are going to be used to blast through some pretty intensive software all the time anyway. I don't see anyone buying a P2 just to play Super Nintendo. :) With OMAP5 at 28nm and Exynos 5 at 32nm, plus all the efforts in gating and power saving internally, they should still come out with a power profile similar to what we see on current chips.


What do you think? I can still see how 720p would be plenty for a 5" screen and 1080p is overkill (in 2012), but if the chips can actually soar through them without problems, I'd love to experiment with both. A 1080p panel would make software compatibility 100% because the P2 is going to support 1080p to a monitor for sure. That's not even optional going into 2013.
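(For reference on where those ratings come from, a sketch assuming 32-bit channels and double data rate, which matches the listed figures:)

```python
def ddr_peak_gbs(clock_mhz, channels=2, channel_bytes=4):
    # Peak bandwidth: clock x 2 (double data rate) x channel width x channel count.
    return clock_mhz * 2 * channel_bytes * channels / 1e3

omap4470 = ddr_peak_gbs(466)
omap5    = ddr_peak_gbs(532)
print(f"OMAP4470: {omap4470:.1f} GB/s peak, {omap4470 * 0.7:.1f} GB/s at 70%")  # 7.5 / 5.2
print(f"OMAP5:    {omap5:.1f} GB/s peak")                                       # 8.5
```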
 

Reminds me of...

https://www.youtube.com/embed/7UD1VCJvoDQ?feature=oembed
Start around 42 seconds in.


And...


Like the sign says, "speed's just a question of money. How fast you wanna go?"


Mad Max http://www.imdb.com/title/tt0079501/quotes


How do those chips feel about booting from and having multiple channels of USB 3.0 running 5Gbit/sec?
 
A further thought: forget the Galaxy4, what resolution is the iPhone 5 going to be? If they continue their trend I think it could easily be 5" 1920*1280.
 
I'd bet hard money that they do NOT quadruple the resolution again for iPhone 5. Can you imagine that advertising for such a thing? "First we released the revolutionary retina display for iPhone 4, with pixels so small they're at the limit of what the human eye can resolve. Now we've released something four times that. That's right, four times better than your puny human eyes can differentiate. The iPhone 5: See your own inferiority."


Also note that Apple has kept the phone size the same through every iteration, and iPhone fans pretty staunchly stand by this vs the trend towards bigger screens in other phones. Are they really going to change that now? Keeping it 3.5" makes the idea of 1920x1280 even less realistic.
 
But how will they compete with the G3 resolution?


I'm only going on the rumour sites, but quite a few bits of reasonably convincing evidence from different sources suggest the screen is bigger on the iPhone 5. Maybe that is how they will justify it.
 
How would a 3.5" 960x640 display true LCD compete with a 1280x720 4.8" pentile OLED (that only has half the blue and red sub-pixels)?


Probably just fine.
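(A quick subpixel tally backs that up, assuming an RGB stripe, i.e. 3 subpixels per pixel, for the LCD, and RGBG PenTile, i.e. 2 per pixel, for the OLED:)

```python
lcd_subpixels     = 960 * 640 * 3    # RGB stripe LCD
pentile_subpixels = 1280 * 720 * 2   # RGBG PenTile OLED
print(lcd_subpixels, pentile_subpixels)  # 1843200 1843200 -- identical counts,
                                         # and the LCD packs them into 3.5" vs 4.8"
```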


Rumor sites were pretty damn certain iPad 3 would have a quad-core Cortex-A9 too..
 
Larger phones are acceptable now if they're thin with minimal bezel. It's all about the screen.


Exo - I get it. You like Apple stuff. That's OK - enjoy them.


"half the blue and red sub-pixels" is only half the story though. The real story there is "double the green sub-pixels". If you disdain anything non-Apple, which is hard to believe since you're here, then you wouldn't believe it - but the Samsung GIII screen, side by side with the Apple Pentile and both showing the same level of zoom on a high resolution picture, the GIII screen is better looking.


If you take one of these ultra-high resolution screens and turn on just 3 pixels, turn them off then turn on 4 pixels, most people won't be able to see the difference. However, if you put a highly detailed image on a normal LCD next to a 300ppi+ display, the differences are startling.


The argument of 'the human eye can't tell the difference' is flawed. It isn't the eye that is the subject. It is the brain.
 
Wow, Grench, I don't know what would cause you to respond that way. I have never owned an Apple product in my life, so I think you actually don't get it at all. All I was saying is that if you're comparing resolutions you need to point out that Pentile has half the red and blue sub-pixels vs its advertised resolution, as opposed to other displays (in this case Apple's, since Craig was asking how they could compete) which don't. No, it is not correct to say that it has double the green sub-pixels, because no one is calling it (or anything else) a 640x720 display.


And whether or not Galaxy S3's screen otherwise looks better is immaterial. We're talking about pixel density here. Craig says that Apple will need to respond with more pixel density (4x more even!) in order to compete, not that they'll simply need to otherwise improve the display. I was arguing that Apple already has the higher pixel density and I don't see why they'd feel especially pressured to go even higher, especially if they have to do it by quadrupling.


Whether or not Apple's retina argument holds water it's still their argument, and you wouldn't expect them to immediately contradict it. But since Apple is already using a > 300ppi display your "vs normal LCD" comparison is kind of hard to understand..? Isn't the comparison > 300ppi vs something outrageous like 500ppi?
 